Google Tensor Chip: Why Did Google Ditch Qualcomm For its Own AI SoC In Upcoming Pixel 6 Smartphones?

Essentially, Google knows what kind of power is required to process AI algorithms.

When Google launched the Pixel phones back in 2016, many saw this as the company’s effort to set new standards for what an Android phone should be. Google may have been able to change the standards for what phone cameras can do, but the company surely hasn’t changed smartphones…yet. Enter Google Tensor, a chip designed by Google that will replace the Qualcomm SoCs that have powered Pixel phones for the last five years. What do we know about the Tensor chip so far? Not much, but we can make some educated guesses.

What is Google Tensor?

As mentioned above, Google Tensor is a system-on-chip (SoC), which means it will be the brains of the Pixel 6 devices. It’s an octa-core chipset built on Samsung’s 5-nanometer fabrication process using processor architectures licensed from ARM. As the developer of Android, Google gains an advantage of sorts in the smartphone space, since the company can tailor Android to make the new Tensor chip perform better than its rivals. But what makes a new chip different?

Here’s why…

You might argue that Google has no background in hardware, and that the company’s history with Pixel phones so far doesn’t inspire much confidence. But while Google is new to building phones, it’s no stranger to building chips. Google Tensor is named after something you may have heard of before: the Tensor Processing Unit (TPU), a chip that is quite popular in the data center space.

Here’s how Google defines TPUs:

“Tensor Processing Units (TPUs) are Google’s custom-developed application-specific integrated circuits (ASICs) used to accelerate machine learning (ML) workloads. Cloud TPUs allow you to access TPUs from Compute Engine, Google Kubernetes Engine and AI Platform.”

Much of the AI and ML processing our phones trigger today actually happens in the cloud. That’s why the Google Assistant or Alexa show a buffering animation when you give them a command: they send your voice command to the cloud, where a processor that’s significantly more powerful than the one in your phone processes the command and tells the phone what to do.


Claim number one…

Which brings us to the first claim that Google is making with the new Google Tensor chip — that it will do more AI processing “on-device”.

When you put two and two together, that’s more than believable right now. TPUs were designed to process AI and ML workloads, and possibly to replace the powerful GPUs that take care of AI processing at data centers today. Here’s an excerpt from a GPU vs TPU comparison done by Bangalore-based IT firm Mphasis:

“The results indicate that speedups (from TPUs) by a factor of more than 15 are possible, but they appear to come at a cost. First, v3-8 TPU’s pricing is 5.5 times greater than for the P100, which by itself sounds alarming; however, calculating the amount of training per dollar yields a monetary savings of more than 64% since the TPU is so much faster.”
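The quoted numbers check out with some back-of-the-envelope arithmetic: if the TPU costs 5.5 times as much per hour but finishes training more than 15 times faster, the cost per unit of training work drops by roughly two thirds (and with speedups above 15, the savings climb past the quoted 64%). A quick sketch, using the 5.5x and 15x figures from the excerpt:

```python
# Back-of-the-envelope check of the quoted TPU-vs-GPU cost comparison.
# Figures come from the Mphasis excerpt above.
price_ratio = 5.5   # v3-8 TPU hourly price relative to the P100 GPU
speedup = 15.0      # TPU training throughput relative to the GPU

# Cost per unit of training work: you pay 5.5x more per hour,
# but you need less than 1/15th of the hours.
relative_cost = price_ratio / speedup
savings = 1 - relative_cost
print(f"relative cost: {relative_cost:.2f}, savings: {savings:.0%}")
# → relative cost: 0.37, savings: 63%
```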

Essentially, Google knows what kind of power is required to process AI algorithms. Which means that in many instances, the Tensor chip won’t need to send data to the cloud. It already knows how to process the commands.

Keep your expectations in check, though, because there’s no way a chip inside a tiny smartphone can match what a data center chip does. Simply put, phones just cannot provide the thermal headroom and performance required for data-center-level processing, at least not until someone designs the chips that ran Iron Man’s AI in Marvel’s Avengers.

A common concern with voice assistants is that, because they listen all the time so they can react when you say “Ok, Google” or “Alexa”, they may be recording us and sending data to the cloud. If they could perform these functions on-device, they wouldn’t need to send recordings to the cloud in the first place. Of course, Google won’t completely eliminate the need to send data to the cloud right now, but this is a start. Apple, too, announced at WWDC earlier this year that Siri will perform more functions on-device.

This also makes voice assistants faster and more intuitive to use.

Claim number two…

The Tensor chip is also supposed to improve photos, videos, search and captioning on smartphones. Which is another claim that is more than believable.

Why? Because Google has already proved that it can do wonders with camera software. Ever since they first launched, the Pixel phones’ cameras have outdone competing devices that carry far more camera hardware. Now imagine if, like the Google Assistant, your camera had to buffer for a second every time you pressed the shutter button, just so it could send the photo to the cloud and have it processed there.

The Tensor chip should allow Google to bring at least some of that AI heft onto the smartphone itself, and use it to enhance the photos and videos you capture. The same goes for captioning and search.

Have you heard of TensorFlow?

‘Tensor’ isn’t a term that Google just throws around. The company launched Cloud TPUs in 2018 and announced the Tensor chip just a few days ago, but before both of those came TensorFlow.

TensorFlow is an open-source machine learning framework that Google designed and released in 2015.

One famous showcase is DeepDream, an AI-based image generator created by Google engineers (and since reimplemented in TensorFlow), which turns ordinary photos into surreal, dream-like pictures. Google actually held an exhibition of artwork created by such AI tools back in 2016. TensorFlow is also instrumental in the AI features Google builds into Search, Gmail and many of its other products.
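To give a flavour of what the “tensor” in all these names means: a tensor is just a multi-dimensional array, and operations like matrix multiplication over those arrays are exactly what TensorFlow, TPUs and the Tensor chip are built to accelerate. A minimal sketch, using NumPy for portability in place of TensorFlow itself (the shapes and values here are illustrative):

```python
import numpy as np

# A "tensor" is just a multi-dimensional array. A batch of two
# tiny 2x2 grayscale images, say, is a rank-3 tensor of shape (2, 2, 2).
images = np.array([[[0.0, 1.0], [2.0, 3.0]],
                   [[4.0, 5.0], [6.0, 7.0]]])
print(images.shape)  # (2, 2, 2)

# Matrix multiplication is the workhorse of neural networks; frameworks
# like TensorFlow express models as graphs of operations like this one,
# and hardware like TPUs accelerates exactly these computations.
weights = np.array([[1.0, 0.0],
                    [0.0, 2.0]])
out = images @ weights  # applies the 2x2 weight matrix to each image
print(out.shape)  # (2, 2, 2)
```

This is what “processing tensors” means in practice: the same multiply-and-add arithmetic, repeated billions of times, whether the model runs in a data center or on a phone.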

According to reports from Hacker News and Stack Overflow, developers who could build with TensorFlow were in high demand through 2020. TPUs arrived around the time TensorFlow turned three, and Google Tensor came right after it turned five.

All this simply means that more and more applications are being built with TensorFlow, and the Tensor chip is designed to make the most of what the framework can achieve. Point Google. Again.

If all that sounds familiar, that’s because it’s almost exactly how Apple’s platforms work. Apple encourages developers to build for Mac, iOS and iPadOS in its own Swift programming language, while the company also designs its own chips, like the A-series Bionic and the M1.
