At the Google I/O 2019 event in California, Google announced a ton of changes on the software side of things for the future of mobile interaction. The new features include AR in Google Search, new Google Lens additions, an all-new Android Auto UI, an on-device and continuous Assistant, Duplex on the web, Live Relay and impaired speech recognition with AI.
AR in Google Search
With new AR features in Search, users can view and interact with 3D objects right from Search and place them directly into their surroundings. For example, searching for sharks will soon display an 18-foot-long white shark on your phone, showing how big the animal is in real life. Users will also be able to interact with 3D models and put them into the real world, right from Search. Google has partnered with NASA, New Balance, Samsung, Target, Visible Body, Volvo and Wayfair to add 3D content to Search.
More visual answers in Google Lens, and Google Go integration
Google is now upgrading Lens to provide more visual answers to visual questions. For example, by first identifying all the dishes on a menu, Lens can automatically highlight which ones are popular, directly on the menu itself. When tapping on a dish, users will see what it might actually look like and what people have to say about it, through photos and reviews from Google Maps. Lens will also now be available in Google Go, where it will weigh in at just 100KB.
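The dish-highlighting idea can be sketched in miniature: given menu items (as Lens would read them via OCR) and a pile of review text, rank the dishes by how often reviews mention them. This is a toy illustration under assumed inputs, not Google's actual pipeline.

```python
# Toy sketch (not Google's implementation): rank OCR'd menu items by
# how often nearby restaurant reviews mention them.
from collections import Counter

def popular_dishes(menu_items, reviews, top_n=2):
    """Count case-insensitive mentions of each menu item across reviews."""
    counts = Counter()
    for item in menu_items:
        for review in reviews:
            if item.lower() in review.lower():
                counts[item] += 1
    return [dish for dish, _ in counts.most_common(top_n)]

menu = ["Margherita Pizza", "Caesar Salad", "Tiramisu"]
reviews = [
    "The Margherita Pizza was amazing!",
    "Loved the margherita pizza and the tiramisu.",
    "Caesar Salad was fine.",
    "Tiramisu is a must-order.",
]
print(popular_dishes(menu, reviews))  # ['Margherita Pizza', 'Tiramisu']
```

The real feature presumably matches far noisier OCR output against Maps review data, but the ranking intuition is the same.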
Android Auto redesign
Google has announced a redesign of Android Auto for compatible cars. The new interface is built for useful information at a glance, controlling apps and navigation on the same screen, and turn-by-turn directions with fewer interactions. The new notification centre shows recent calls, messages and alerts.
The next generation Google Assistant
Google has demonstrated all-new speech recognition and language understanding models that will run locally on a smartphone, requiring just under a gigabyte of data. This will let the Google Assistant run natively on a phone with near-zero latency, faster query answering, continuous conversation, and multitasking across several apps. For example, a user can create a calendar invite, find and share a photo with friends, or dictate an email, all after triggering the Assistant just once.
Duplex is now on the web
The Mountain View giant, which launched the Duplex feature last year, is now extending Duplex to the web. The feature can be used to book things online, helping users avoid filling out the several forms a booking usually requires. Duplex on the web will be available later this year in English in the U.S. and U.K. on Android phones with the Assistant.
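The form-avoidance idea can be reduced to a small sketch: if the assistant already holds the user's details, it can pre-fill a site's booking form instead of making the user type each field. The field names and profile below are hypothetical, purely for illustration.

```python
# Toy sketch of the idea behind Duplex on the web (hypothetical field
# names; not Google's implementation): pre-fill a site's booking form
# from details the user has already saved.
def autofill(form_fields, saved_profile):
    """Return the form with every known field pre-filled; unknown fields stay empty."""
    return {field: saved_profile.get(field, "") for field in form_fields}

# A rental-booking form the user would otherwise fill in by hand.
form = ["name", "email", "pickup_date", "car_class"]
profile = {"name": "Jane Doe", "email": "jane@example.com",
           "pickup_date": "2019-06-01"}

print(autofill(form, profile))
```

Only the fields the assistant cannot infer (here, the car class) would be left for the user to confirm.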
Live Relay
While Duplex lets users have the Google Assistant handle booking an appointment or ordering something online, Live Relay will allow users to make and receive phone calls without having to speak or hear. The feature uses on-device speech recognition and text-to-speech conversion to let the phone listen and speak on the user's behalf while they type. By offering instant responses and predictive writing suggestions, Smart Reply and Smart Compose help make typing fast enough to hold a synchronous phone call. Live Relay runs entirely on the device, keeping calls private. It can be used by anyone who can't speak or hear during a call, and it may be particularly helpful to deaf and hard-of-hearing users.
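The relay loop described above can be sketched as one turn of a call: the caller's speech is transcribed to text for the user, and the user's typed reply is synthesized into audio for the caller. The stub engines below stand in for the on-device models; this is an assumption-laden illustration, not Google's code.

```python
# Toy sketch of one Live Relay turn (not Google's implementation):
# transcribe the caller's speech for the user, then speak the user's
# typed reply back to the caller.
def live_relay_turn(caller_audio, user_reply, speech_to_text, text_to_speech):
    """One turn of a relay call: transcribe incoming audio, synthesize the typed reply."""
    transcript = speech_to_text(caller_audio)   # shown to the user as text
    reply_audio = text_to_speech(user_reply)    # played to the caller
    return transcript, reply_audio

# Stub engines stand in for the on-device ASR and TTS models.
fake_asr = lambda audio: f"[transcribed] {audio}"
fake_tts = lambda text: f"[audio] {text}"

transcript, audio = live_relay_turn("Hi, is this Alex?", "Yes, speaking.",
                                    fake_asr, fake_tts)
print(transcript)  # [transcribed] Hi, is this Alex?
print(audio)       # [audio] Yes, speaking.
```

Keeping both engines on-device, as the article notes, is what lets the whole loop run without the call audio leaving the phone.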
Project Euphonia for impaired speech recognition
At the I/O event, Google also announced Project Euphonia, an effort to improve computers' ability to understand diverse speech patterns, such as impaired speech. The company has partnered with several institutions to transcribe words spoken by people with speech difficulties.