At the latest Google I/O developer conference in Mountain View, the tech giant announced a new tensor processing unit (TPU), a custom accelerator for machine learning that, the company says, moves Moore’s Law forward by roughly seven years, or about three chip generations.
Google says the accelerator delivers an order of magnitude better performance per watt than commercially available alternatives.
The new tensor processing unit is designed specifically for machine learning tasks. In fact, as Google engineer Norm Jouppi confirms, the accelerator has been running in the company’s data centers for more than a year. Google has been using it for machine learning workloads that tolerate reduced computational precision, such as object recognition and deep learning.
The name comes from the accelerator’s main application, TensorFlow. TensorFlow is an open-source library for numerical computation on large datasets using dataflow graphs. The software was first developed by Google’s Brain Team, a research group specializing in machine learning, neural networks, language understanding, and data visualization.
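TensorFlow’s core abstraction — numerical computation over multi-dimensional arrays (tensors) flowing through a graph of operations — can be sketched with plain NumPy. This is only an illustrative sketch of the idea, not TPU or TensorFlow code:

```python
import numpy as np

# A toy "dataflow" computation in the spirit of TensorFlow:
# an input tensor flows through a matrix multiply and a nonlinearity.
x = np.array([[1.0, 2.0, 3.0]])   # input tensor, shape (1, 3)
w = np.ones((3, 2)) * 0.5         # weight tensor, shape (3, 2)
y = np.maximum(x @ w, 0.0)        # matmul followed by ReLU, shape (1, 2)
print(y)                          # [[3. 3.]]
```

In TensorFlow proper, the same chain of operations is recorded as a graph, which is exactly the representation a TPU is built to execute efficiently.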
According to Sundar Pichai, the company’s CEO, tensor processing units will never replace CPUs and GPUs, but they can accelerate machine learning at a fraction of the power that general-purpose processors require.
The drawback, however, is that Google’s TPU is designed for highly specific workloads: its target applications are Cloud Machine Learning Alpha and TensorFlow, which process numerical datasets that tolerate reduced precision.
Only a handful of CPUs (Haswell and later) and GPUs support this kind of calculation in TensorFlow.
Reduced precision is useful because it delivers roughly twice the performance of FP32 while cutting memory usage, which in turn allows larger neural networks to be built over time.
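The memory side of that claim is easy to verify: FP16 stores each value in two bytes instead of FP32’s four, so the same array of weights takes half the space. A minimal NumPy sketch (the 2x throughput figure depends on hardware support and is not demonstrated here):

```python
import numpy as np

# One million network weights in single precision (FP32, 4 bytes each)
weights_fp32 = np.zeros(1_000_000, dtype=np.float32)

# The same weights cast to half precision (FP16, 2 bytes each)
weights_fp16 = weights_fp32.astype(np.float16)

print(weights_fp32.nbytes)  # 4000000 bytes
print(weights_fp16.nbytes)  # 2000000 bytes -- half the footprint
```

Halving the per-value footprint is what lets a fixed memory budget hold a larger network, at the cost of less precise arithmetic.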
As Moore’s Law has slowed in recent years, engineers have pushed against atomic-scale physics to produce ever-smaller nanometer designs. While many of those designs are still a good few years away, Google’s tensor processing unit brings 2023-level performance to present-day machine learning.
Again, the drawback is that these processors are application-specific chips, suited to deep learning workloads and little else.
The tensor processing unit won’t be available for corporate or enterprise purchase.
Image source: Venture Beat