Google’s new TPUs are here to accelerate AI training

Posted on 17-05-2017 by admin

Google has made another leap forward in the realm of machine learning hardware. The tech giant has begun deploying the second version of its Tensor Processing Unit, a specialized chip meant to accelerate machine learning applications, company CEO Sundar Pichai announced on Wednesday.

The new Cloud TPU sports several improvements over its predecessor. Most notably, it supports training machine learning models in addition to running inference on existing ones. Each chip delivers 180 teraflops of processing for those tasks, and Google can network the chips together into sets called TPU Pods for even greater computational throughput.
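The aggregate numbers follow directly from the per-chip figure. A minimal sketch of that arithmetic, assuming a hypothetical 64-chip pod (the article gives the 180-teraflop per-chip figure but not the pod size) and ignoring interconnect overhead:

```python
# Back-of-the-envelope throughput for networked TPUs.
# 180 TFLOPS per chip comes from the article; the 64-chip
# pod size is an assumption for illustration only.
TFLOPS_PER_TPU = 180

def pod_teraflops(num_chips: int) -> int:
    """Peak aggregate throughput of `num_chips` TPUs,
    ignoring interconnect and scaling overheads."""
    return num_chips * TFLOPS_PER_TPU

full_pod = pod_teraflops(64)   # hypothetical full pod
eighth_pod = pod_teraflops(8)  # an eighth of that pod

print(f"Full pod:   {full_pod / 1000:.2f} petaflops")
print(f"Eighth pod: {eighth_pod / 1000:.2f} petaflops")
```

Real-world training throughput would fall short of these peak figures, since distributed training rarely scales linearly across chips.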

Businesses will be able to use the new chips through Google Cloud Platform, as part of its Compute Engine infrastructure-as-a-service offering, though the company hasn't said exactly what form those services will take. In addition, Google is launching a TensorFlow Research Cloud that will give researchers free access to the hardware if they pledge to publicly release the results of their research.

It’s a move that has the potential to drastically accelerate machine learning. Google says its latest machine translation model takes a full day to train on 32 of the highest-powered modern GPUs, while an eighth of a TPU Pod can do the same task in an afternoon.