SIGNS

Silver eurobest Award

Title SIGNS
Brand GERMAN YOUTH ASSOCIATION OF PEOPLE WITH HEARING LOSS
Product/Service SIGNS SMART TOOL
Category A07. Not-for-profit / Charity / Government
Entrant MRM//McCANN Frankfurt, GERMANY
Idea Creation MRM//McCANN Frankfurt, GERMANY
Idea Creation 2 McCANN FRANKFURT, GERMANY
Production DROPOUT-FILMS Mainz, GERMANY
Additional Company DAS WERK Frankfurt, GERMANY
Additional Company 2 STUDIO FUNK Frankfurt, GERMANY
Credits
Name Company Position
Sebastian Hardieck McCANN Worldgroup Germany Chief Creative Officer
Martin Biela MRM//McCANN Executive Creative Director, Head of LAB13 EMEA
Dushan Drakalski McCANN Frankfurt Executive Creative Director
Dominik Heinrich MRM//McCANN New York Vice President Global, Product Innovation & LAB13
Mark Hollering MRM//McCANN Director Creative Technology & LAB13, Germany
Chris Endecott MRM//McCANN Senior Copywriter / Concept & LAB13
Jan Portz MRM//McCANN Group Creative Director UX
Sebastian Klein MRM//McCANN IT Project Manager
Jawad Saleem McCANN Frankfurt Senior Art Director
Michael Klaiber MRM//McCANN Senior Motion Designer
Jerome Cholet McCANN Worldgroup Germany PR & Communications Director
Nico Koehler MRM//McCANN Art Director
Sofia Paz-Vivo MRM//McCANN Trainee LAB13
Irini Sidira MRM//McCANN Junior Art Director
Klaus Flemmer McCANN Worldgroup Germany Head of Production
Michelle Mohring German Youth Association of People with Hearing Loss Chairwoman
Lucas Garthe German Youth Association of People with Hearing Loss Clerk

Background

There are over 2 billion voice-enabled devices across the globe. According to Gartner, 30% of all digital interactions will be non-screen-based by 2020. Voice assistants are changing the way we shop, search, communicate and even live. At least for most people. But every new technology brings new challenges: if voice is the future, what about those who have no voice or cannot hear? Conversational design must be inclusive and address as many audiences as possible. Around 466 million people worldwide have disabling hearing loss. With the SIGNS Project, we are raising awareness of digital accessibility and inclusion. Voice assistants use natural language processing to decipher and react only to audible commands. No sound means no reaction. The SIGNS prototype bridges the gap between deaf people and voice assistants by recognizing gestures to communicate directly with existing voice assistant services (Amazon Alexa, Google Home or Microsoft Cortana).

Describe the strategy

SIGNS was pre-trained with video footage of people who use sign language, and it includes a training interface that can be used to teach it new gestures in real time. SIGNS then recognizes these gestures and acts as an interface to voice assistant systems such as Amazon Alexa, Google Home or Microsoft Cortana. SIGNS is based on an intelligent machine learning framework that is trained to identify body gestures with the help of an integrated camera. These gestures are converted into a data format that the voice assistant service understands. The voice assistant processes the data in real time and replies appropriately. SIGNS replaces the voice assistants' typical audio-based communication with a visual one – and not just by displaying words on screen. The visual interface of SIGNS fulfills the various requirements of an intuitive experience and follows the basic principles of sign language; the SIGNS dictionary was developed in line with those principles.
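
The project's actual framework has not been published. Purely as an illustration of the "teach new gestures in real time" idea, the sketch below pairs MediaPipe Hands landmark extraction with a simple k-nearest-neighbour classifier: holding a sign and pressing a digit key stores a labeled sample, and once a few samples exist the same landmarks drive live recognition. Every library and design choice here is an assumption, not the SIGNS stack.

    import cv2
    import mediapipe as mp
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def landmarks_to_vector(hand_landmarks):
        # Flatten the 21 (x, y, z) hand landmarks into one 63-dim feature vector.
        return np.array([[p.x, p.y, p.z] for p in hand_landmarks.landmark]).ravel()

    samples, labels, classifier = [], [], None
    cap = cv2.VideoCapture(0)
    with mp.solutions.hands.Hands(max_num_hands=1) as hands:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            key = cv2.waitKey(1) & 0xFF
            results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.multi_hand_landmarks:
                vec = landmarks_to_vector(results.multi_hand_landmarks[0])
                if ord('0') <= key <= ord('9'):
                    # "Training interface": press a digit key to label the
                    # current hand shape as a sample of a new gesture.
                    samples.append(vec)
                    labels.append(chr(key))
                    if len(samples) >= 3:
                        classifier = KNeighborsClassifier(n_neighbors=3).fit(samples, labels)
                elif classifier is not None:
                    print("recognized gesture:", classifier.predict([vec])[0])
            cv2.imshow("SIGNS sketch", frame)
            if key == ord('q'):
                break
    cap.release()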

Describe the execution

Many people with hearing loss use their hands to speak. This is their natural language. Their hands are their voice. However, voice assistants use natural language processing to decipher and react only to audible commands. No sound means no reaction. SIGNS bridges the gap between deaf people and voice assistants by recognizing gestures to communicate directly with existing voice assistant services (e.g. Amazon Alexa, Google Home or Microsoft Cortana). How's the weather tomorrow? Change lights to blue. Find an Italian restaurant. Just sign, and SIGNS will answer. SIGNS is based on an intelligent machine learning framework that is trained to identify body gestures with the help of an integrated camera. These gestures are converted into a data format that the selected voice assistant understands. The voice assistant processes the data in real time and replies back to SIGNS, and the answer is then immediately displayed in text form on the visual interface.
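
The connector between gesture recognition and the assistant services is service-specific and not public. The sketch below only restates the round trip described above; PHRASEBOOK, ask_assistant and handle_gesture are hypothetical names, and a real connector would have to synthesize the utterance to audio for services such as Alexa that accept speech rather than text.

    from dataclasses import dataclass

    @dataclass
    class AssistantReply:
        text: str

    # Illustrative mapping from recognized gesture labels to the utterances
    # the assistant expects; real entries would come from the SIGNS dictionary.
    PHRASEBOOK = {
        "WEATHER_TOMORROW": "How's the weather tomorrow?",
        "LIGHTS_BLUE": "Change lights to blue.",
        "FIND_ITALIAN": "Find an Italian restaurant.",
    }

    def ask_assistant(utterance: str) -> AssistantReply:
        # Hypothetical connector. Services such as Alexa consume audio, so a
        # real implementation would synthesize `utterance` to speech, stream
        # it to the assistant's API and transcribe the spoken reply; both
        # steps are stubbed out here.
        raise NotImplementedError("connector is specific to each assistant")

    def handle_gesture(gesture_label: str) -> str:
        reply = ask_assistant(PHRASEBOOK[gesture_label])
        return reply.text  # SIGNS shows this on screen instead of playing audio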

List the results

The goal is to make SIGNS available on all assistants and to all hearing-impaired people. In the first release, at the end of Q4, SIGNS will launch on Windows and macOS with Amazon Alexa connectivity and a limited set of gestures. A connector to Google Assistant is planned for Q2/20, and one to Microsoft Cortana for Q3/20. By Q4/20 there will be a crowd-based dictionary through which the community can contribute to the vocabulary. According to Gartner, 30% of all digital interactions will be non-screen-based by 2020. Just like voice, gestures are an intuitive way of communicating, making them extremely relevant for the industry. Not just for the hearing impaired, but for everyone. People find it awkward to speak to the invisible in public; that is why we believe that invisible conversational interactions with the digital world are not limited to voice itself.
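
The format of a crowd-contributed dictionary entry has not been specified; purely as an assumption for illustration, one entry could bundle a gesture label, the utterance it maps to, and the landmark samples volunteers recorded for it:

    # Hypothetical schema for one crowd-contributed dictionary entry; this is
    # an assumption for illustration, not the published SIGNS format.
    crowd_entry = {
        "gesture_id": "weather_tomorrow",
        "utterance": "How's the weather tomorrow?",
        "sign_language": "DGS",              # e.g. German Sign Language
        "contributor": "community-volunteer",
        "landmark_samples": [],              # recorded hand-landmark vectors
    }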