Each year, Google hosts its I/O keynote event where they share their upcoming innovative hardware and software updates with the world. This year, the keynote was held at the Shoreline Amphitheatre in Mountain View, California, and yet again, Google unveiled a whole host of changes and updates across their product line and operating systems.

Introduction  

The event kicked off with Sundar Pichai reiterating Google’s mission to ‘organise the world’s information and make it universally accessible and useful’ and how this translates into what Google does and what they’re trying to do. He explained that Google has transformed from a company that helps you find answers to a company that ‘helps you get things done’, and how they are achieving this new vision with the help of their new products and services. Sundar moved on to say that building a more helpful Google starts with search, the core aspect of Google, and demonstrated some of the changes coming to search, starting with more detailed news and podcast results. He added that one of the most helpful ways of understanding the world is through ‘visual information’, which leads us to the first major highlight of the event!

Visual information – AR in search

The first innovative feature shown off was the inclusion of augmented reality right in search results. Now you can get 3D models within search results that can be brought into the real world with the help of your device’s camera. Google’s VP of Google Lens and AR, Aparna Chennapragada, demonstrated the new feature to the watching crowd, bringing to life a human muscle, a pair of shoes and a terrifying great white shark! These 3D models aren’t just for play, however; they can be used to help with shopping (seeing how a pair of shoes looks with a particular outfit) or with education (seeing how a certain muscle flexes and extends). Google has partnered with many organisations, such as NASA and New Balance, to integrate these AR models into your results.
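Google hasn’t published exactly how these search results are wired up, but on Android the 3D viewing is handled by Scene Viewer, which apps can launch with a standard intent following Google’s ARCore documentation. Here’s a minimal sketch, assuming a hypothetical, publicly hosted GLB model file:

```kotlin
import android.app.Activity
import android.content.Intent
import android.net.Uri

// Launches Google's Scene Viewer to display a 3D model in AR.
// The model URL below is a hypothetical placeholder; Scene Viewer
// expects a publicly reachable glTF/GLB file.
fun launchSceneViewer(activity: Activity) {
    val uri = Uri.parse("https://arvr.google.com/scene-viewer/1.0")
        .buildUpon()
        .appendQueryParameter("file", "https://example.com/models/shark.glb") // hypothetical model
        .appendQueryParameter("mode", "ar_preferred") // fall back to a 3D view if AR is unavailable
        .build()
    val intent = Intent(Intent.ACTION_VIEW).apply {
        data = uri
        setPackage("com.google.ar.core") // handled by Google Play Services for AR
    }
    activity.startActivity(intent)
}
```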

A smarter Google Lens

Google Lens is the AI assistant that works with your smartphone camera to provide users with ‘visual answers to visual questions’. Google has made improvements to Google Lens so that it can now leverage Google’s extensive data to help you choose what to eat at a restaurant, for example, simply by pointing your smartphone’s camera at the menu. Here, Google Lens will display the most popular dishes, images of what they look like and customer reviews. It can even help with splitting the bill! Further demonstrations showed the integration of Google Lens with Translate. Here, Aparna demonstrated how Google Lens can help with translations of unfamiliar text by providing not just text translations but also audio, highlighting each word as it’s read out to give you more context on what the text means.
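Lens itself is proprietary, but the first step of this flow – pulling text off a camera frame on-device before looking anything up – can be sketched with Google’s ML Kit text recognition API. To be clear, ML Kit standing in for Lens is an assumption here, purely for illustration:

```kotlin
import android.graphics.Bitmap
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.text.TextRecognition
import com.google.mlkit.vision.text.latin.TextRecognizerOptions

// Sketch of a Lens-style flow: extract text from a camera frame
// on-device, then hand the recognised lines to whatever comes next
// (dish lookup, translation, and so on).
fun readMenu(frame: Bitmap) {
    val image = InputImage.fromBitmap(frame, /* rotationDegrees = */ 0)
    val recognizer = TextRecognition.getClient(TextRecognizerOptions.DEFAULT_OPTIONS)
    recognizer.process(image)
        .addOnSuccessListener { result ->
            // Each recognised block roughly corresponds to a menu section or dish.
            result.textBlocks.forEach { block ->
                println("Recognised: ${block.text}")
            }
        }
        .addOnFailureListener { e -> println("OCR failed: $e") }
}
```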

Assistant becomes more helpful

Thanks to developments in recurrent neural networks, Google has been able to create new speech recognition and language understanding models that let Assistant run locally on your smartphone. This has resulted in speech processing at near-zero latency and transcription in real time, even completely offline. The new and improved Assistant will now understand your questions and queries and provide you with answers up to ten times faster. Multitasking across several apps is now possible with Continued Conversation, where you can make a number of requests in a row without the need to say “Hey Google” every time. The Assistant will now use what it learns about you to better understand the context of what you say and offer more personalised suggestions as it learns your daily routine and your preferences in entertainment, style and food. A new ‘Driving Mode’ will also provide hands-free assistance when you’re navigating the roads with Maps or Waze.
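Third-party apps don’t get direct access to the new Assistant models, but Android’s standard SpeechRecognizer API illustrates the same on-device idea. The sketch below asks for offline recognition and streams partial transcripts in real time (whether a local model is available depends on the device):

```kotlin
import android.content.Context
import android.content.Intent
import android.os.Bundle
import android.speech.RecognitionListener
import android.speech.RecognizerIntent
import android.speech.SpeechRecognizer

// Sketch of on-device transcription with Android's SpeechRecognizer.
// EXTRA_PREFER_OFFLINE asks for the local model where one exists.
fun startOfflineDictation(context: Context) {
    val recognizer = SpeechRecognizer.createSpeechRecognizer(context)
    recognizer.setRecognitionListener(object : RecognitionListener {
        override fun onPartialResults(partialResults: Bundle) {
            // Real-time transcription: partial hypotheses arrive as you speak.
            val words = partialResults.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION)
            println("So far: ${words?.firstOrNull()}")
        }
        override fun onResults(results: Bundle) {
            val words = results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION)
            println("Final: ${words?.firstOrNull()}")
        }
        // Remaining callbacks elided for brevity.
        override fun onReadyForSpeech(params: Bundle) {}
        override fun onBeginningOfSpeech() {}
        override fun onRmsChanged(rmsdB: Float) {}
        override fun onBufferReceived(buffer: ByteArray) {}
        override fun onEndOfSpeech() {}
        override fun onError(error: Int) {}
        override fun onEvent(eventType: Int, params: Bundle) {}
    })
    val intent = Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH).apply {
        putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM)
        putExtra(RecognizerIntent.EXTRA_PREFER_OFFLINE, true)  // use the on-device model
        putExtra(RecognizerIntent.EXTRA_PARTIAL_RESULTS, true) // stream partial transcripts
    }
    recognizer.startListening(intent)
}
```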

 

Duplex can now use the Internet

Google introduced Duplex to the world last year in extraordinary fashion, showing Google Assistant making phone calls on your behalf to do things like book a table at a restaurant or a hair appointment. This year, Google announced that Duplex has learned to use the Internet! Demonstrated on stage with Assistant booking a rental car, Duplex works its way through the booking site, entering all of your details as it navigates – there goes the ‘I am not a robot’ defence!
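Google hasn’t detailed how Duplex on the web works internally, and it reportedly relies on learned models rather than hand-written scripts. Purely to illustrate the mechanics of programmatic form filling, here is a rough sketch using the Selenium browser-automation library, with an entirely hypothetical booking page and field names:

```kotlin
import org.openqa.selenium.By
import org.openqa.selenium.chrome.ChromeDriver

// Illustrative only: scripted form filling on a hypothetical booking page.
// Duplex does something far more general, inferring fields with learned
// models instead of hard-coded selectors like these.
fun main() {
    val driver = ChromeDriver()
    try {
        driver.get("https://example.com/car-rental") // hypothetical booking site
        driver.findElement(By.name("pickup_location")).sendKeys("San Francisco") // hypothetical field
        driver.findElement(By.name("pickup_date")).sendKeys("2019-05-10")        // hypothetical field
        driver.findElement(By.name("driver_name")).sendKeys("Jane Doe")          // hypothetical field
        driver.findElement(By.cssSelector("button[type='submit']")).click()
    } finally {
        driver.quit()
    }
}
```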

 

Auto-delete Google’s data collection

With the recent hysteria and spotlight over privacy and data, Google has made an attempt to curb users’ worries by allowing them to micro-manage what data Google holds. Coming in the not-so-distant future to Location History and Web & App Activity is the ability to choose how long you want your activity data to be kept (3 or 18 months); any data older than the selected period will be automatically deleted from your account.
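There’s no public API for this setting, but mechanically an auto-delete policy boils down to a retention cutoff: anything with a timestamp older than the window goes. A minimal sketch of that idea, with a made-up ActivityRecord type for illustration:

```kotlin
import java.time.Duration
import java.time.Instant

// Hypothetical record type standing in for a stored activity entry.
data class ActivityRecord(val description: String, val timestamp: Instant)

// Keep only records newer than the chosen retention window,
// discarding everything older than the cutoff.
fun applyRetention(records: List<ActivityRecord>, retention: Duration): List<ActivityRecord> {
    val cutoff = Instant.now().minus(retention)
    return records.filter { it.timestamp.isAfter(cutoff) }
}

fun main() {
    val records = listOf(
        ActivityRecord("search: running shoes", Instant.now().minus(Duration.ofDays(30))),
        ActivityRecord("visited: maps", Instant.now().minus(Duration.ofDays(600)))
    )
    // Approximate an 18-month window as 548 days.
    val kept = applyRetention(records, Duration.ofDays(548))
    kept.forEach { println(it.description) } // only the 30-day-old entry survives
}
```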

New low-budget Pixels

Despite the popularity of the Google Pixel range, Google realised that its pure Android smartphones ended up priced above the affordable levels that made so many customers fall in love with the Nexus phones many years ago. As a result, Google is going back to its Nexus roots with the new low-budget Pixel 3a and 3a XL. In addition to the new devices, Pixel owners will also benefit from new features such as the anticipated AR directions in Google Maps, robocall screening and enhanced battery life.

Google Home and Nest combine

One of the final announcements of the event saw the introduction of Google Nest, which combines the power of Google Home and Nest. Here, your Google and Nest accounts merge to give you greater control and security over your smart home devices. As part of this merger, the Google Home Hub has now become the Google Nest Hub.

And there it is! All of the major announcements from this year’s Google I/O event! For more information and details, visit the official site dedicated to the event: https://events.google.com/io/schedule/events/