Google announced three new artificial intelligence (AI) based features at Google I/O 2019, namely Live Caption, Live Relay and Project Euphonia, to help its technology reach people with hearing or speech impairments. We take a look at these new features from the technology giant.
Live Caption
Google announced the Live Caption feature for its upcoming Android Q operating system (OS) to assist those with hearing impairments.
With the help of this feature, any media with audio played on the smartphone will be captioned automatically. As soon as speech is detected by the smartphone, captions appear on the screen, without the need for Wi-Fi or mobile data.
“For 466 million deaf and hard of hearing people around the world, captions are more than a convenience—they make content more accessible. We worked closely with the Deaf community to develop a feature that would improve access to digital media,” Google said in a blog post.
The tech giant said that the Live Caption feature works with videos, podcasts and audio messages across any app, even for audio or video that users record themselves.
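Google has not published Live Caption's internals, but the core idea, running a compact speech recogniser entirely on the device so that no audio is sent to a server, can be sketched in Python with the open-source Vosk library. This is only an illustrative sketch of the general technique, not Google's implementation; the model folder and WAV file names are placeholder assumptions.

import json
import wave
from vosk import Model, KaldiRecognizer  # open-source offline speech recognition

# Placeholder inputs: a downloaded Vosk model folder and a 16 kHz mono WAV clip.
model = Model("vosk-model-small-en-us")
audio = wave.open("clip.wav", "rb")
recognizer = KaldiRecognizer(model, audio.getframerate())

# Feed the audio in small chunks and print text the moment speech is
# recognised, the way an on-device captioner overlays captions on playback.
while True:
    chunk = audio.readframes(4000)
    if not chunk:
        break
    if recognizer.AcceptWaveform(chunk):
        print(json.loads(recognizer.Result())["text"])  # a finalised caption line
print(json.loads(recognizer.FinalResult())["text"])     # flush the last words

Everything in the loop happens locally, which is why such a feature can work without Wi-Fi or mobile data.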
Live Relay
Google’s Live Relay feature relies on on-device speech recognition and text-to-speech conversion to let the phone listen and speak on the user’s behalf while they type. Live Relay runs entirely on the device, keeping calls private, and since it interacts with the other side through a regular phone call, it works without the need for Wi-Fi or mobile data.
Google explained that the other party can even use a landline to talk. Live Relay is still in the research phase, and the company has not given a specific timeline for a public release.
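How Live Relay taps into the call audio path is not public, but the two halves it combines, local speech-to-text for what the caller says and local text-to-speech for what the user types, can be sketched as follows. The call-audio hooks are hypothetical; pyttsx3 and Vosk are real offline libraries standing in for whatever Google uses internally.

import json
import pyttsx3                    # offline text-to-speech engine
from vosk import Model, KaldiRecognizer  # offline speech recognition

tts = pyttsx3.init()
recognizer = KaldiRecognizer(Model("vosk-model-small-en-us"), 16000)

def show_caller_speech(chunk):
    # Transcribe the caller's audio on the device and display it as text.
    if recognizer.AcceptWaveform(chunk):
        print("Caller:", json.loads(recognizer.Result())["text"])

def speak_typed_reply(text):
    # Synthesise the user's typed reply so the caller hears a voice.
    # In the real feature this audio would be routed into the call's
    # uplink (a hypothetical step; the telephony plumbing is not public).
    tts.say(text)
    tts.runAndWait()

Because both directions are handled on the device, nothing about the conversation leaves the phone, which matches Google's privacy claim and explains why an ordinary voice call, even from a landline, is enough.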
Project Euphonia
Project Euphonia, announced by Google, is aimed at people with speech impairments caused by neurological conditions such as stroke, amyotrophic lateral sclerosis (ALS), multiple sclerosis, traumatic brain injuries and Parkinson’s disease.
Google is using AI to improve computers’ ability to understand diverse speech patterns, such as impaired speech. Since speech-based interfaces such as Google Assistant are built to respond to the majority of voices, they often do not work well for people with speech impairments.
For this, Google has partnered with ALS Therapy Development Institute (ALS TDI) and ALS Residence Initiative (ALSRI) to record the voices of people who have ALS, a neurodegenerative condition that can result in the inability to speak and move.
The technology giant collaborated with them to learn about the communication needs of people with ALS and to optimise AI-based algorithms so that mobiles and computers can more reliably transcribe words spoken by people with such speech difficulties.
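Euphonia's models are not public, but the recipe described here, taking a recogniser trained on typical voices and adapting it with a small set of recordings from speakers with ALS, corresponds to a standard fine-tuning loop. Below is a rough PyTorch sketch; load_pretrained_asr and ImpairedSpeechDataset are hypothetical stand-ins for Google's internal tooling.

import torch
from torch.utils.data import DataLoader

# Hypothetical stand-ins for internal tooling:
# load_pretrained_asr() returns a speech-to-text model trained on typical voices;
# ImpairedSpeechDataset yields (features, transcripts, lengths) from volunteer recordings.
model = load_pretrained_asr()
loader = DataLoader(ImpairedSpeechDataset("als_recordings/"), batch_size=8)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)  # small LR: adapt, don't erase
loss_fn = torch.nn.CTCLoss()  # the usual loss for aligning audio frames to text

for features, targets, input_lens, target_lens in loader:
    log_probs = model(features)                       # shape: (time, batch, vocab)
    loss = loss_fn(log_probs, targets, input_lens, target_lens)
    optimizer.zero_grad()
    loss.backward()       # nudge the weights toward the impaired speech patterns
    optimizer.step()

The small learning rate is the key design choice in this kind of adaptation: the goal is to broaden what the existing model recognises, not to retrain it from scratch on a tiny dataset.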