Google has introduced what appears to be a game changer for machine learning workloads. At its Google I/O conference this week, the company announced new Cloud TPU Pods that can complete ML workloads in minutes — workloads that would typically take days or weeks on other systems. Google says the TPU Pods are well-suited for users who need to iterate faster while training large ML models, train more accurate models on larger datasets, or retrain a model with new data on a more timely basis. The article that follows explains more about the announcement.
Google on Tuesday announced that its Cloud TPU v2 Pods and Cloud TPU v3 Pods are now publicly available in beta, enabling machine learning researchers and engineers to iterate faster and train more capable machine learning models. The pods are composed of racks of Tensor Processing Units (TPUs), the custom silicon chips Google first unveiled in 2016. Together in multi-rack form, they amount to a “scalable supercomputer for machine learning,” Google said.
Google announced the beta release during Google I/O, its annual developer conference, where the company typically makes several AI-related announcements, including releases of AI products and services aimed at enterprise customers.