Title | SIGNS |
Brand | GERMAN YOUTH ASSOCIATION OF PEOPLE WITH HEARING LOSS |
Product/Service | SIGNS SMART TOOL |
Category | B02. Non-profit / Foundation-led Education & Awareness |
Entrant | MRM//McCANN Frankfurt, GERMANY |
Idea Creation | MRM//McCANN Frankfurt, GERMANY |
Idea Creation 2 | McCANN FRANKFURT, GERMANY |
Production | DROPOUT-FILMS Mainz, GERMANY |
Additional Company | DAS WERK Frankfurt, GERMANY |
Additional Company 2 | STUDIO FUNK Frankfurt, GERMANY |
Credits
Sebastian Hardieck | McCANN Worldgroup Germany | Chief Creative Officer |
Martin Biela | MRM//McCANN | Executive Creative Director, Head of LAB13 EMEA |
Dushan Drakalski | McCANN Frankfurt | Executive Creative Director |
Dominik Heinrich | MRM//McCANN New York | Vice President Global, Product Innovation & LAB13 |
Mark Hollering | MRM//McCANN | Director Creative Technology & LAB13, Germany |
Chris Endecott | MRM//McCANN | Senior Copywriter / Concept & LAB13 |
Jan Portz | MRM//McCANN | Group Creative Director UX |
Sebastian Klein | MRM//McCANN | IT Project Manager |
Jawad Saleem | McCANN Frankfurt | Senior Art Director |
Michael Klaiber | MRM//McCANN | Senior Motion Designer |
Jerome Cholet | McCANN Worldgroup Germany | PR & Communications Director |
Nico Koehler | MRM//McCANN | Art Director |
Sofia Paz-Vivo | MRM//McCANN | Trainee LAB13 |
Irini Sidira | MRM//McCANN | Junior Art Director |
Klaus Flemmer | McCANN Worldgroup Germany | Head of Production |
Michelle Mohring | German Youth Association of People with Hearing Loss | Chairwoman |
Lucas Garthe | German Youth Association of People with Hearing Loss | Clerk |
Background
There are over 2 billion voice-enabled devices across the globe, and according to Gartner, 30% of all digital interactions will be non-screen-based by 2020. Voice assistants are changing the way we shop, search, communicate, and even live. At least for most people. Every new technology, however, brings new challenges: if voice is the future, what about those who have no voice or cannot hear? Conversational design must be inclusive and address as many target audiences as possible. Around 466 million people worldwide have disabling hearing loss. With the SIGNS project, we are creating awareness for digital accessibility and inclusion. Voice assistants, however, use natural language processing to decipher and react only to audible commands. No sound means no reaction. The SIGNS prototype bridges the gap between deaf people and voice assistants by recognizing gestures to communicate directly with existing voice assistant services (Amazon Alexa, Google Home or Microsoft Cortana).
Describe the creative idea
SIGNS is the first smart voice assistant solution for people with hearing loss worldwide. It’s an innovative smart tool that recognizes and translates sign language in real-time and then communicates directly with a selected voice assistant service (e.g. Amazon Alexa, Google Assistant or Microsoft Cortana). SIGNS is reinventing voice – one gesture at a time. Many people with hearing loss use their hands to speak. And that’s all they need to talk to SIGNS. How's the weather tomorrow? Change lights to blue. Find an Italian restaurant. Just speak, and SIGNS will answer.
Describe the strategy
Many people with hearing loss use their hands to speak. This is their natural language; their hands are their voice. Voice assistants, however, use natural language processing to decipher and react only to audible commands. No sound means no reaction. SIGNS bridges the gap between deaf people and voice assistants by recognizing gestures to communicate directly with existing voice assistant services (e.g. Amazon Alexa, Google Home or Microsoft Cortana).
Describe the execution
SIGNS uses an integrated camera to recognize sign language in real time and communicates directly with a voice assistant. The system is based on the machine learning framework Google TensorFlow. The output of a pre-trained MobileNet is used to train several KNN classifiers on gestures. The recognition calculates the likelihood of the gestures recorded by the webcam and converts them into text. The resulting sentences are translated into conventional grammar and sent to a cloud-based service that generates speech from them (text to speech). In other words, the gestures are converted into a data format that the selected voice assistant understands, in this case the Alexa Voice Service (AVS). AVS responds with metadata and audio data, which a cloud service in turn converts back to text (speech to text), and the result is displayed. SIGNS works on any browser-based operating system that has an integrated camera and can be connected to a voice assistant.
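The recognition step described above can be sketched as follows. This is a minimal, self-contained illustration of the idea only, not the production code: in the actual tool the feature vectors come from a pre-trained MobileNet running in TensorFlow, whereas here the embeddings, the gesture labels, and the `knn_classify` helper are hypothetical stand-ins chosen to show how a KNN vote over embeddings yields a gesture label and a likelihood.

```python
from collections import Counter
import math

def knn_classify(embedding, training_set, k=3):
    """Return (label, likelihood) for one embedding vector.

    training_set: list of (embedding, gesture_label) pairs; in the real
    system each embedding would be a MobileNet feature vector.  The
    likelihood is the share of the k nearest neighbors that agree,
    mirroring how the prototype scores each recorded gesture.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    neighbors = sorted(training_set, key=lambda pair: dist(embedding, pair[0]))[:k]
    votes = Counter(label for _, label in neighbors)
    label, count = votes.most_common(1)[0]
    return label, count / k

# Hypothetical 3-dimensional embeddings for two signed commands.
training = [
    ((0.9, 0.1, 0.0), "WEATHER"),
    ((0.8, 0.2, 0.1), "WEATHER"),
    ((0.1, 0.9, 0.8), "LIGHTS-BLUE"),
    ((0.2, 0.8, 0.9), "LIGHTS-BLUE"),
]

# A new webcam frame's embedding is classified by majority vote;
# yields the label "WEATHER" with likelihood 2/3 on this toy data.
label, likelihood = knn_classify((0.85, 0.15, 0.05), training, k=3)
```

The recognized gesture gloss (e.g. "WEATHER") would then be expanded into a conventional sentence and handed to the text-to-speech step described above.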
List the results
Project SIGNS created awareness for inclusion in the digital age and facilitated access to new technologies. The response of the deaf community was overwhelming. Just like voice, gestures are an intuitive way of communicating, which makes the approach highly relevant for the industry, and not just for the hearing impaired but for everyone. Many people find it awkward to speak to something invisible in public, which is why we believe that invisible conversational interactions with the digital world are not limited to voice itself. Furthermore, we started a cooperation with the German Youth Association of People with Hearing Loss as a partner and extended the tool's usability. Never before has a sign language assistant been launched in this quality and with the prospect of becoming a worldwide platform that is easily accessible from anywhere and can learn new signs and sign languages.