Google sprinkles AI on everything at I/O
As well as the new gadgets that were paraded out on the catwalk of Google I/O yesterday, there were myriad other updates to Google’s ludicrously wide product set, usually involving AI enhancements.
May 13, 2022
The most headline-grabbing thing to come out of Google’s announcement extravaganza I/O yesterday was its roster of new gadgets – including the Google Pixel Watch – all of which seem to signal Google’s intention to become more of a player in the devices market and create its own ecosystem of tech.
The problem with announcing 1,000 other things at the same time is that some of it can get lost in the noise – so let’s take a look at the other updates Google either properly announced or waxed lyrical about on stage in California.
Google Translate
24 new languages have been added to Google Translate, including its first indigenous languages of the Americas. This brings the total number of languages the software can translate between up to 133. These new languages were added using something called Zero-Shot Machine Translation, in which a machine learning model only sees monolingual text – which apparently means it learns to translate into another language without ever seeing a paired example of the same text in both languages. So that’s clever, isn’t it?
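To get a rough feel for how a single multilingual model can be steered towards a target language with nothing more than a language token – the mechanism that makes zero-shot pairs possible – here’s a minimal sketch using the open-source M2M100 model via Hugging Face’s transformers library. This is purely illustrative and is not the model or system Google Translate actually uses.

```python
# Sketch: one multilingual model, many target languages, selected by a token.
# Uses the open-source M2M100 model via Hugging Face transformers; this is an
# illustration of the general idea, not Google Translate's internal system.
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")

tokenizer.src_lang = "en"
inputs = tokenizer("The weather is lovely today.", return_tensors="pt")

# Forcing the first generated token to a language ID steers the decoder into
# that language, even for pairings the model was never explicitly trained on.
out = model.generate(**inputs, forced_bos_token_id=tokenizer.get_lang_id("fr"))
print(tokenizer.batch_decode(out, skip_special_tokens=True))
```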
Google Maps
There’s a new feature in Maps called ‘immersive view’, which is created by fusing billions of aerial and street images into a high-quality ‘representation’ of a place, thanks to some advances in 3D mapping and machine learning. It will start out in Los Angeles, London, New York, San Francisco and Tokyo.
The result looks a bit like a video-game version of a city that you can zip about in to get a sense of what the area is like – or as Google puts it: “Say you’re planning a trip to London and want to figure out the best sights to see and places to eat. With a quick search, you can virtually soar over Westminster to see the neighborhood and stunning architecture of places, like Big Ben, up close. With Google Maps’ helpful information layered on top, you can use the time slider to check out what the area looks like at different times of day and in various weather conditions, and see where the busy spots are.”
It also launched eco-friendly routing, which sounds like it does exactly what the name implies. As the cost of petrol continues to rise, this might be one of the more practical things to come out of the show.
YouTube Chapters
Last year Google launched auto-generated chapters for YouTube videos, which are supposed to make it easier to jump to the bits of a video you’re most interested in. Plugging in some multimodal technology from DeepMind, YouTube can now use text, audio and video simultaneously to auto-generate chapters better and faster, apparently. Meanwhile, speech recognition models are now being deployed to transcribe videos and add automated translation into different languages.
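YouTube’s internal models aren’t public, but the transcription half of that pipeline can be sketched with an off-the-shelf speech recognition model. Again, this is an illustration of the general idea rather than YouTube’s system, and the audio file path is a placeholder.

```python
# Sketch of the transcription step using an open-source speech recognition
# model via the transformers pipeline API; not YouTube's internal system.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="facebook/wav2vec2-base-960h")

# "clip.wav" is a placeholder path to a short English audio clip.
result = asr("clip.wav")
print(result["text"])
```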
Google Workspace
Google has also aimed its massive AI brains at Google Docs, which, using various natural-language-processing magic, can now create automated summaries of great big long documents you can’t be bothered to read. Called TL;DR, the feature will also be rolled out to Google Chat – meaning you can get executive summaries of long-winded conversations with friends, which feels a bit off but hey, we’re all busy.
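Google hasn’t said exactly how TL;DR works under the hood, but the general shape of abstractive summarisation is easy to sketch with an open-source model – an illustration only, not Google Docs’ implementation.

```python
# Sketch of abstractive summarisation with an open-source model via the
# transformers pipeline API; illustrative only, not Google Docs' TL;DR.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

document = (
    "Google announced a long list of updates at I/O, spanning Translate, "
    "Maps, YouTube, Workspace, Assistant, LaMDA 2 and Android 13, most of "
    "them built on top of the company's machine learning research."
)

summary = summarizer(document, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```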
Google Assistant
Some new features were dropped for Google’s AI voice interaction tool, including Look and Talk – which basically means you can look at the screen of a Nest Hub Max and, thanks to face-reading tech, the device will know it’s you, so you can bark your orders without going through the trauma of having to say ‘Ok Google’ first. What a time to be alive.
LaMDA 2 and AI Test Kitchen
LaMDA 2 is pitched as Google’s ‘most advanced conversational AI yet’. It can presumably be deployed in all sorts of things, and it was launched alongside something called AI Test Kitchen – basically a way for those so inclined to get their hands on powerful AI resources and develop things.
One application is described as: “Say you’re writing a story and need some inspirational ideas. Maybe one of your characters is exploring the deep ocean. You can ask what that might feel like. Here LaMDA describes a scene in the Mariana Trench. It even generates follow-up questions on the fly. You can ask LaMDA to imagine what kinds of creatures might live there. Remember, we didn’t hand-program the model for specific topics like submarines or bioluminescence. It synthesized these concepts from its training data. That’s why you can ask about almost any topic: Saturn’s rings or even being on a planet made of ice cream.”
It gets a bit tough to home in on exactly what is being described here and what specifically it might be useful for, but presumably it’s all good stuff for hardcore programmers. Perhaps there are even those out there who have been holding out for an AI tool to describe a planet made of ice cream for them.
Machine learning hub
Google announced that it is launching the world’s largest publicly available machine learning hub for Google Cloud customers at its data centre in Mayes County, Oklahoma. What’s under the hood? Eight Cloud TPU v4 pods, custom-built on the same networking infrastructure that powers Google’s largest neural models. That serves up nine exaflops of computing power in aggregate, meaning complex models and workloads can be run. Woof.
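For Cloud customers, the entry point is rather more mundane than the headline figure suggests: spin up a TPU VM and point a framework such as JAX at the attached cores. Here’s a minimal sketch, assuming a Cloud TPU VM with JAX installed; on an ordinary machine it still runs, just against a single CPU device.

```python
# Minimal JAX sketch: list the accelerator cores attached to this host and
# run a toy computation replicated across all of them with pmap.
# Assumes a Cloud TPU VM with JAX installed; on a plain CPU machine it
# still works, just with one device.
import jax
import jax.numpy as jnp

devices = jax.devices()
print(f"{len(devices)} device(s):", devices)

n = jax.device_count()
x = jnp.arange(n * 4, dtype=jnp.float32).reshape(n, 4)

@jax.pmap
def double(v):
    # Each device receives one row of x and doubles it in parallel.
    return v * 2.0

print(double(x))
```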
Android 13
The theme here was security and privacy. Text messaging is being upgraded from SMS to a newer standard called Rich Communication Services (RCS). What this offers is end-to-end encryption, as well as a host of other communication features you don’t get on SMS but might use on apps like WhatsApp.
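End-to-end encryption simply means the keys live on the two handsets, so the server relaying the message only ever sees ciphertext. The sketch below illustrates that principle with a Diffie–Hellman key exchange using the Python cryptography library – a generic illustration of the concept, not the actual protocol Google’s Messages app uses for RCS.

```python
# Generic end-to-end encryption sketch using the 'cryptography' library:
# each party keeps its private key; only public keys travel over the wire.
# This illustrates the principle only, not RCS or Google Messages itself.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()

# Both sides compute the same shared secret from their own private key and
# the other party's public key; the relaying server never learns it.
alice_shared = alice_priv.exchange(bob_priv.public_key())
bob_shared = bob_priv.exchange(alice_priv.public_key())

def derive_key(shared: bytes) -> bytes:
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"demo e2ee").derive(shared)

assert derive_key(alice_shared) == derive_key(bob_shared)

# Alice encrypts; only Bob, holding the same derived key, can decrypt.
nonce = os.urandom(12)
ciphertext = AESGCM(derive_key(alice_shared)).encrypt(nonce, b"Meet at 7?", None)
print(AESGCM(derive_key(bob_shared)).decrypt(nonce, ciphertext, None))
```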
Google Wallet will be available on the new Pixel Watch, and Google promises that soon you’ll be able to store things like driver’s licenses, hotel keys and office pass cards on it.
There was also an emphasis on interoperability in the sense of making tablets, phones, and watches on the Google platform work better together and share files, for example. This is clearly a key part of its strategy to build out a proper ecosystem of gadgets, and interestingly there is also an emphasis on extending the umbrella to other manufacturers: “With the launch of our unified platform with Samsung last year, there are now over three times as many active Wear OS devices as there were last year. Later this year, you’ll start to see more devices powered with Wear OS from Samsung, Fossil Group, Montblanc, Mobvoi and others. And for the first time ever, Google Assistant is coming to Samsung Galaxy watches, starting soon with the Watch4 series.”
And there you have it – the best of the rest of Google I/O. There were other announcements, but those seem to be the key ones. As a non-AI-generated TL;DR: the themes were clearly around solidifying Android as a platform beyond your phone, increasing security, and using Google’s huge AI resources to tweak and improve everything in its extraordinarily large portfolio of products and services.