In an interview with The Verge, Rick Osterloh said that work started in 2017 after Google came to the realization that it couldn’t take a piecemeal approach — like building a single co-processor, e.g., the Pixel Visual/Neural Core — to boost AI models. Rather, an entire chip that’s optimized for the desired tasks is needed. As Google puts it: “The Tensor chip is specifically designed to offer Google’s latest advances in AI directly on a mobile device. This is an area where we’ve been held back for years, but now, we’re able to open a new chapter in AI-driven smartphone innovation.”

The Tensor CPU + GPU

At the Pixel Launch Event, Google went into Tensor and explicitly touted the inclusion of two high-performance Arm Cortex-X1 cores clocked at 2.8 GHz. They are joined by two “mid” 2.25 GHz Cortex-A76 cores, with Ars Technica’s Google Silicon interview pointing out that these are built on a 5nm process rather than the 7nm original found in last year’s flagship phone chips. Four high-efficiency Cortex-A55 cores round out the CPU.

The dual-X1 approach lets Google throw more power at workloads of medium intensity. In a normal CPU, the mid cores would handle such tasks, like Google Lens visual analysis, but would be “maxed out.” Google says using two X1 cores in that scenario is more efficient, and that’s what Tensor is optimized for. In real terms, it’s 80% faster than the Pixel 5’s Snapdragon 765G.

“You might use the two X1s dialed down in frequency so they’re ultra-efficient, but they’re still at a workload that’s pretty heavy. A workload that you normally would have done with dual A76s, maxed out, is now barely tapping the gas with dual X1s,” said Phil Carmack, VP and GM of Google Silicon.

There’s also a 20-core GPU that Google says “delivers a premium gaming experience for the most popular Android games.” It is 370% faster than the GPU in the Pixel 5, which uses the Adreno 620.

As software applications on mobile phones become more complex, they run on multiple parts of the chip. To get good performance for these complex applications, Google made system-level decisions for the SoC, ensuring that “different subsystems inside Tensor work really well together, rather than optimizing individual elements for peak speeds.”

What Tensor can do

Besides Live HDR+, which makes colors more accurate and vivid at 4K60, Tensor enables other computational photography and video features like Motion Mode in Google Camera. Action Pan blurs the background, while Long Exposure works on the subject.

Assistant on Tensor uses the “most advanced speech recognition model ever released by Google” at, again, half the power. This high-quality ASR (automatic speech recognition) model is used to transcribe voice commands, as well as in long-running applications like Recorder and Live Caption, “without quickly draining the battery.”

Meanwhile, there’s Assistant voice typing for editing what you just transcribed in an entirely hands-free manner, and Live Translate, with the Pixel’s translation quality improving by 18%, “a level of improvement that typically takes multiple years of research.” Compared to the previous models on Pixel 4 phones, the new on-device neural machine translation (NMT) model uses less than half the power when running on Google Tensor. Google Tensor also enables Live Translate to work on media like videos using on-device speech and translation models.

Face detection is also more accurate on the Pixel 6 and works faster thanks to the integrated subsystems, while consuming half the power compared to the Pixel 5.

There’s no doubt that Google is making more chips for phones, and other form factors are rumored. For example, the Titan M is succeeded by the Titan M2. Google is not giving Tensor a generation signifier at launch, but the company will presumably append a number to the next version.
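Carmack’s point about running two wide X1 cores “dialed down” instead of two narrow A76 cores “maxed out” can be illustrated with rough first-order DVFS arithmetic. This is only a back-of-the-envelope sketch: the cube-law power model, the per-clock throughput ratio, and every number below are illustrative assumptions, not Tensor measurements.

```python
# Back-of-the-envelope sketch of the dual-X1 efficiency argument.
# Dynamic CPU power scales roughly with C * V^2 * f, and voltage must
# rise with frequency, so power grows much faster than linearly with
# clock speed (modeled here as f^3). All figures are hypothetical.

def dynamic_power(freq_ghz: float, coeff_w: float) -> float:
    """Model core power as coeff * f^3 (voltage scales with frequency)."""
    return coeff_w * freq_ghz ** 3

def energy_per_task(work: float, perf_per_ghz: float,
                    freq_ghz: float, coeff_w: float) -> float:
    """Energy (joules) to finish a fixed amount of work on one core."""
    throughput = perf_per_ghz * freq_ghz  # work units per second
    seconds = work / throughput
    return dynamic_power(freq_ghz, coeff_w) * seconds

WORK = 100.0  # arbitrary units of medium-intensity work (e.g., Lens analysis)

# Hypothetical: a narrow A76-class core maxed out at 2.25 GHz.
a76_energy = energy_per_task(WORK, perf_per_ghz=1.0,
                             freq_ghz=2.25, coeff_w=0.35)

# Hypothetical: a wider X1-class core (more work per clock, assumed 1.6x)
# dialed down to 1.5 GHz so it runs on the efficient part of its curve.
x1_energy = energy_per_task(WORK, perf_per_ghz=1.6,
                            freq_ghz=1.5, coeff_w=0.45)

print(f"A76 maxed out:  {a76_energy:.1f} J")
print(f"X1 dialed down: {x1_energy:.1f} J")
assert x1_energy < a76_energy  # the wide, slow core uses less total energy
```

Under these assumed numbers, the downclocked wide core finishes the same task on a fraction of the energy, which is the scheduling trade-off the quote describes: superlinear power cost of high clocks makes “wide and slow” cheaper than “narrow and maxed out” for sustained medium workloads.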