AWS Public Sector Blog
Supporting people with hearing loss through cloud-enabled solutions
As of 2021, one in six Australians, almost four million people, has hearing loss, ranging from mild to profound. The statistic is part of the larger global picture reported by the World Health Organization (WHO): approximately 466 million people live with hearing loss, and 34 million of them are children. In addition, 1.1 billion young people are at risk of hearing loss due to exposure to noise in recreational settings and through personal audio devices. The WHO expects these numbers to rise significantly in the coming decades unless action is taken to prevent and treat hearing loss.
The Australian government recently launched the Roadmap for Hearing Health for Australia. It aims to address the challenges of hearing loss, including prevention, raising awareness, increasing the hearing health workforce, and tackling the otitis media epidemic among Aboriginal and Torres Strait Islander children. Although this roadmap is a start, we must go further to create an environment where access to universal communication is the norm for people with hearing loss. The cloud and other technologies from Amazon Web Services (AWS) can help improve communications accessibility.
Live captioning and video interpreters are starting to address these communication challenges, but accessibility improves further when solutions are flexible and reliable in every situation. Smartphone apps help, yet they often cannot pick up sound clearly, and the captions are not always high quality. At events, in theatres, and at conferences, glowing smartphone screens can distract other patrons. In medical, legal, and education settings, information conveyed through captions or interpreters must be accurate and secure. Captioning and interpreters can also be expensive or difficult to secure, which often causes organizations to forgo these services.
AWS offers services that help organizations build end-to-end solutions with accessibility in mind, improving day-to-day activities such as social interactions, clinical consultations, live media, and public service announcements.
Adding speech-to-text capabilities
Amazon Transcribe is an automatic speech recognition (ASR) service that makes it simple to add speech-to-text capabilities to your applications. In media, broadcasters can use it to subtitle live broadcasts of the news and other programs. In seminars and conferences, attendees with hearing loss can use real-time transcription to capture session notes on the fly. Amazon Transcribe can also accurately label each speaker in a group conversation, so a person with hearing loss can clearly distinguish who is speaking in a live business consultation with multiple participants.
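As a rough sketch of how this might look in practice, the Python snippet below submits a recorded session to Amazon Transcribe through the boto3 SDK with speaker labeling turned on. The bucket, file, and job names are hypothetical, and a live-captioning scenario would use the Amazon Transcribe streaming SDK rather than this batch API.

```python
import time

import boto3

transcribe = boto3.client("transcribe")

# Hypothetical S3 location and job name for a recorded seminar session
transcribe.start_transcription_job(
    TranscriptionJobName="seminar-session-001",
    Media={"MediaFileUri": "s3://example-bucket/seminar-session-001.mp4"},
    MediaFormat="mp4",
    LanguageCode="en-AU",
    Settings={
        "ShowSpeakerLabels": True,  # label who said what in the conversation
        "MaxSpeakerLabels": 4,      # assume up to four participants
    },
)

# Poll until the job finishes, then print the transcript location
while True:
    job = transcribe.get_transcription_job(TranscriptionJobName="seminar-session-001")
    status = job["TranscriptionJob"]["TranscriptionJobStatus"]
    if status in ("COMPLETED", "FAILED"):
        break
    time.sleep(10)

if status == "COMPLETED":
    print(job["TranscriptionJob"]["Transcript"]["TranscriptFileUri"])
```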
Amazon Transcribe Medical provides medical speech-to-text capabilities and can transcribe clinical conversations such as telemedicine consultations or physician-to-patient discussions. When face masks limit visual cues for a patient or family member with hearing loss, this service can provide live medical transcriptions in hospitals and medical clinics.
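A comparable sketch for the medical variant is shown below, again with hypothetical bucket and job names. Transcribe Medical writes its output to an S3 bucket you specify, and the language code shown assumes US English audio, which is what the service supports.

```python
import boto3

transcribe = boto3.client("transcribe")

# Hypothetical recording of a physician-to-patient conversation
transcribe.start_medical_transcription_job(
    MedicalTranscriptionJobName="telehealth-consult-042",
    Media={"MediaFileUri": "s3://example-bucket/telehealth-consult-042.wav"},
    MediaFormat="wav",
    LanguageCode="en-US",
    Specialty="PRIMARYCARE",
    Type="CONVERSATION",                # dialogue rather than dictation
    OutputBucketName="example-bucket",  # hypothetical bucket for the transcript
)
```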
Capturing more accurate translation
Amazon Translate is a neural machine translation service that delivers fast, high-quality, and affordable language translation. Neural machine translation is a form of translation automation that uses deep learning models to provide more accurate and natural-sounding translations than traditional statistical and rule-based translation algorithms. It enables the localization of content and, when paired with speech-to-text services, cross-lingual communication between users. For example, an international traveler with hearing loss can understand localized public announcements in another language on public transport or at venues.
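Combined with a transcript from a speech-to-text service, translating an announcement is a single boto3 call, as in the sketch below; the sample announcement text is purely illustrative.

```python
import boto3

translate = boto3.client("translate")

# Illustrative transcript of a French station announcement
announcement = "Le train de 9h15 pour Lyon partira du quai numéro 3."

result = translate.translate_text(
    Text=announcement,
    SourceLanguageCode="auto",  # let the service detect the source language
    TargetLanguageCode="en",
)

print(result["TranslatedText"])
```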
Uncovering insights from conversations
Amazon Comprehend is a natural language processing (NLP) service that uses machine learning (ML) to find insights and relationships in text. The service can extract people, places, sentiment, and topics from unstructured conversational data. For example, a student with hearing loss can be presented with additional insights during a classroom debate.
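As a brief sketch, the snippet below runs entity, key-phrase, and sentiment detection over a classroom transcript; the sample text is invented for illustration.

```python
import boto3

comprehend = boto3.client("comprehend")

# Invented excerpt from a classroom debate transcript
transcript = (
    "The council approved the new light rail line through Parramatta, "
    "and most residents at the meeting supported the decision."
)

entities = comprehend.detect_entities(Text=transcript, LanguageCode="en")
phrases = comprehend.detect_key_phrases(Text=transcript, LanguageCode="en")
sentiment = comprehend.detect_sentiment(Text=transcript, LanguageCode="en")

for entity in entities["Entities"]:
    print(entity["Type"], "-", entity["Text"])  # e.g. LOCATION - Parramatta

print(sentiment["Sentiment"])                   # e.g. POSITIVE
```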
Amazon Comprehend Medical is an NLP service that uses ML to extract relevant medical information from unstructured text. The service can identify medical information, such as conditions, medications, dosages, strengths, and frequencies, from sources such as transcriptions of in-person or telemedicine consultations. These capabilities can empower users with hearing loss to gain a deeper understanding of clinical conversations and make informed decisions.
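The sketch below passes an invented sentence from a consultation transcript to Comprehend Medical and prints the medication details it detects; the clinical text and values are purely illustrative.

```python
import boto3

comprehend_medical = boto3.client("comprehendmedical")

# Invented sentence from a consultation transcript
note = "You should take 500 mg of amoxicillin three times daily for ten days."

result = comprehend_medical.detect_entities_v2(Text=note)

for entity in result["Entities"]:
    print(entity["Category"], entity["Type"], "-", entity["Text"])
    for attribute in entity.get("Attributes", []):
        # e.g. DOSAGE - 500 mg, FREQUENCY - three times daily
        print("   ", attribute["Type"], "-", attribute["Text"])
```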
Looking ahead
As individuals and as a society, we care about the challenges faced by people with hearing loss. Although technology has evolved rapidly in recent years, we still need to give everyone access to mechanisms built on these innovative technologies. The objectives of the Roadmap for Hearing Health for Australia and similar initiatives worldwide won't be accomplished without effective digital solutions that address the stigma around hearing health and hearing loss in the community.
Let's create an environment where communications access is the norm, so that people with hearing loss can live the most inclusive lives possible and realise their potential beyond just the technology on their ears. Better still, creating access for people with hearing loss is a step toward universal communications for all.