Here’s Every AI Feature Google’s Tensor Chip Enables in the Pixel 6 Series


With the new Pixel 6 and Pixel 6 Pro, Google is enabling experiences that can still feel like magic. The Tensor Chip inside the Pixel 6 Series phones was designed to be a flagship SoC while offering features that elevate the user experience. Tensor is built around four key areas: speech, language, imaging, and video. While Tensor improves imaging and videography in several ways, its advances in language and speech are especially well thought out and will be the most helpful to the most people. Here are some of the new AI features powered by Google’s Tensor Chip.


HDRNet running on the Tensor Chip at 4K 60fps on the Pixel 6

While Google Pixel phones are known for their excellent still-image quality, applying the same processing to video was difficult. A video is essentially a colossal series of photos captured at a breakneck pace, so the tricks used on single images were too expensive to run on every frame. With Tensor, Google can now run HDRNet, combined with tone mapping, to give videos that signature Pixel look. HDRNet runs at up to 4K 60fps, delivering much better colour accuracy, dynamic range, and detail, assisted by the new camera lenses.
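For intuition, here is a minimal sketch of what a global tone-mapping operator does to each video frame. This is a classic Reinhard-style curve, not Google’s HDRNet, which is a learned, per-region model; the frame data is made up for illustration.

```python
# Illustrative sketch only: a simple global tone-mapping operator
# (Reinhard-style), NOT Google's HDRNet. It shows the basic idea of
# compressing high-dynamic-range luminance into a displayable [0, 1)
# range, applied frame by frame as a video pipeline would.

def reinhard_tone_map(luminance):
    """Map HDR luminance values (>= 0) into [0, 1)."""
    return [l / (1.0 + l) for l in luminance]

def tone_map_video(frames):
    """Apply the operator independently to every frame."""
    return [reinhard_tone_map(frame) for frame in frames]

# A toy 'video' of two tiny frames, each with a bright highlight:
frames = [[0.1, 1.0, 10.0], [0.2, 2.0, 8.0]]
mapped = tone_map_video(frames)
# Highlights are compressed far more than shadows, so detail survives
# across the whole range.
```

The point of running this per frame at 60fps is consistency: every frame passes through the same curve, so brightness does not flicker between frames.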

Magic Eraser

Magic Eraser is something many people may have dreamt of having in a photo-editing app. It uses machine learning powered by the Tensor Chip to suggest distractions to remove: it figures out the subject automatically and marks unwanted objects that could be distracting. The feature isn’t limited to automated removal, either; users can erase distractions manually, and it works whether the photo was taken moments ago or years back.

Face Unblur

Sometimes people want to take a picture where the lighting isn’t great and the subject is in motion. Tensor’s on-device machine learning solves this in an unusual way by working together with the hardware. Even before the user taps the shutter, the Pixel Camera runs FaceSSD to find faces in the scene. If they’re blurry, it turns on a second camera, so when the user takes the photo, the Pixel 6 captures two shots simultaneously: one from the ultra-wide camera and one from the primary camera. The primary camera’s image uses a regular exposure, resulting in reduced noise, while the ultra-wide camera’s shorter exposure produces less motion blur. The Tensor Chip then fuses the sharper face from the ultra-wide shot with the low-noise shot from the primary camera to produce the best result.
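The fusion step described above can be sketched very roughly as a masked blend. This is a hypothetical simplification: the real pipeline aligns the two cameras and uses learned models, whereas here the face mask and pixel values are simply given.

```python
# Hypothetical sketch of the Face Unblur fusion step: given a low-noise
# frame from the primary camera and a sharper (shorter-exposure) frame
# from the ultra-wide, blend in the sharper pixels only where a face
# mask marks a face. Alignment and face detection (FaceSSD) are assumed
# to have already happened; all values here are invented.

def fuse_face(low_noise, sharp, face_mask):
    """Per-pixel blend: take `sharp` where the mask is 1, else `low_noise`.
    All arguments are flat lists of grayscale values of equal length."""
    return [s if m else b for b, s, m in zip(low_noise, sharp, face_mask)]

primary   = [0.50, 0.52, 0.48, 0.51]   # low noise, slightly blurred face
ultrawide = [0.45, 0.80, 0.75, 0.40]   # noisier, but the face is sharp
mask      = [0, 1, 1, 0]               # 1 = face pixel

result = fuse_face(primary, ultrawide, mask)
# → [0.5, 0.8, 0.75, 0.51]: face pixels come from the ultra-wide,
# everything else from the primary camera
```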

Motion Mode

Machine learning assisted by the Tensor Chip can produce results like this

It’s pretty hard to capture action or fast-moving subjects with realistic-looking motion blur. Motion Mode on the Pixel 6 devices handles fast-moving subjects by capturing a set of photos and combining them with machine learning. It identifies the subject, aligns the frames, determines motion vectors, and interpolates intermediate frames to produce motion blur while the subject stays sharp.
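The final compositing step can be sketched in miniature as below. This is an assumed simplification of Motion Mode: the real feature also aligns frames and interpolates intermediates with learned models, while here a burst is simply averaged and the subject pasted back from one sharp frame.

```python
# Illustrative, simplified sketch of Motion Mode's final blend:
# average a burst of (already aligned) frames to synthesize background
# motion blur, then keep the subject sharp by copying it from a single
# chosen frame using a subject mask. All data here is invented.

def motion_blur_composite(frames, subject_mask, sharp_index=0):
    n = len(frames)
    width = len(frames[0])
    # Average all frames per pixel -> motion-blurred background.
    blurred = [sum(f[i] for f in frames) / n for i in range(width)]
    # Copy the subject from one sharp frame where the mask is 1.
    return [frames[sharp_index][i] if subject_mask[i] else blurred[i]
            for i in range(width)]

burst = [[0.0, 1.0, 0.2], [0.0, 1.0, 0.8]]  # pixel 2 changes -> motion
mask  = [0, 1, 0]                            # pixel 1 is the subject
out = motion_blur_composite(burst, mask)
# → [0.0, 1.0, 0.5]: the moving background pixel is blurred toward the
# average, while the masked subject pixel stays untouched
```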

Real Tone

Cameras have long had a light-skin bias, and it has crept into modern computational photography as well. Google believes this results from imaging products not being tested with diverse enough groups of people. To address it, Google improved its face-detection models and enhanced auto-white balance so skin tones are tuned to look natural, while auto-exposure is adjusted so a person’s skin doesn’t appear unnaturally brighter or darker. All of this is driven by machine learning on the Tensor Chip, which also helps reduce stray light in portraits and makes Night Sight portraits less blurry. If you’d like to know more about Real Tone, consider reading Google’s blog post on it.
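To make the auto-white-balance part concrete, here is a generic “gray-world” white-balance sketch. This is emphatically not Real Tone, which relies on models trained and tested with diverse groups of people; it only shows mechanically what a white-balance correction does to an image’s colour channels.

```python
# Generic "gray-world" auto-white-balance sketch -- NOT Real Tone.
# It scales each colour channel so the image's average colour becomes
# neutral gray, which is what removes an overall colour cast.

def gray_world_awb(pixels):
    """pixels: list of (r, g, b) floats. Returns rebalanced pixels."""
    n = len(pixels)
    avg = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(avg) / 3.0
    gains = [gray / a for a in avg]          # per-channel correction
    return [tuple(p[c] * gains[c] for c in range(3)) for p in pixels]

# A scene with a warm (reddish) colour cast:
warm = [(0.8, 0.5, 0.3), (0.6, 0.5, 0.4)]
balanced = gray_world_awb(warm)
# After balancing, the per-channel averages are all equal (neutral).
```

Real Tone’s insight is that a naive correction like this can get skin tones wrong, which is why the tuning is done with learned models instead.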

Natural Language Processing

Pixel 6 Series phones don’t just have better photography capabilities; the Tensor Chip also helps people get things done more efficiently. Voice typing is much improved: Pixel phones now understand the context of a conversation and can adjust punctuation automatically. And thanks to the Tensor Chip, all of this is done using half as much power as before.

Contextual Voice Typing

As mentioned above, Pixel phones can now understand the context of a conversation. If you have two friends with similar-sounding names that are spelled differently, the Pixel already knows which spelling to use in which situation. That’s not all: users can now insert emojis without scrolling through pages, simply by speaking to the Pixel.

Improved Calling Screen

Automated messages can now be transcribed live

Calling businesses and enterprises is sometimes painful, but the Pixel 6, powered by the Tensor Chip, reduces the hassle. Even before a user places a call, they’re shown the best time to call to avoid waiting. Once the call connects and an automated system starts reading out options, the Pixel 6 listens and displays the menu on screen, so users can simply tap the option they want. And when a call is placed on hold, the phone does the listening; as soon as an actual human responds, it alerts the user that it’s time to talk. There are some other neat tricks on the Pixel 6 phones as well.

Faster Translation with Live Translate

Live Translation on Pixel 6 powered by Tensor Chip

Live Translate is a groundbreaking feature for anyone who needs to communicate with people who speak a different language. On a Pixel 6, a user can receive a message in, say, Japanese, and it will automatically be translated on the device to appear in English; they can then type (or dictate) their reply in English and have it sent in Japanese. It’s also possible to translate audio from any source, such as a video on YouTube or Reels on Instagram, and Live Translate will roll out to more instant-messaging apps later. Using the Pixel Camera, users can translate signs, documents, and more into their native language. All of this is powered by the Tensor Chip and processed on the device, without connecting to any online service.

Interpreter Mode

Interpreter Mode is most helpful when two or more people don’t speak the same language. It lets them talk in different languages while everything is translated instantly and played back through the speaker.

These are just some of the features, explained briefly. Beyond them, the Tensor Chip ensures faster machine learning and handles many tasks more effortlessly. What are your thoughts on Google’s new Tensor Chip? Let us know in the comments below.