Silver eurobest Award

Demo Film

Presentation Image

Category E01. New Realities & Voice Activation
Entrant MRM//McCANN Frankfurt, GERMANY
Idea Creation MRM//McCANN Frankfurt, GERMANY
Additional Company DAS WERK Frankfurt, GERMANY
Additional Company 2 STUDIO FUNK Frankfurt, GERMANY
Name Company Position
Sebastian Hardieck McCANN Worldgroup Germany Chief Creative Officer
Martin Biela MRM//McCANN Executive Creative Director, Head of LAB13 EMEA
Dushan Drakalski McCANN Frankfurt Executive Creative Director
Dominik Heinrich MRM//McCANN New York Vice President Global, Product Innovation & LAB13
Mark Hollering MRM//McCANN Director Creative Technology & LAB13, Germany
Chris Endecott MRM//McCANN Senior Copywriter / Concept & LAB13
Jan Portz MRM//McCANN Group Creative Director UX
Sebastian Klein MRM//McCANN IT Project Manager
Jawad Saleem McCANN Frankfurt Senior Art Director
Michael Klaiber MRM//McCANN Senior Motion Designer
Jerome Cholet McCANN Worldgroup Germany PR & Communications Director
Nico Koehler MRM//McCANN Art Director
Sofia Paz-Vivo MRM//McCANN Trainee LAB13
Irini Sidira MRM//McCANN Junior Art Director
Klaus Flemmer McCANN Worldgroup Germany Head of Production
Michelle Mohring German Youth Association of People with Hearing Loss Chairwoman
Lucas Garthe German Youth Association of People with Hearing Loss Clerk

Describe the creative idea

Voice assistants are changing the way we shop, search, communicate or even live. At least for most people. But what about those without a voice? What about those who cannot hear? Around 466 million people worldwide have disabling hearing loss. With the SIGNS Project, we are raising awareness of digital accessibility and inclusion. SIGNS is the world's first smart voice assistant solution for people with hearing loss. It’s an innovative smart tool that recognizes and translates sign language in real time and then communicates directly with a selected voice assistant service (e.g. Amazon Alexa, Google Assistant or Microsoft Cortana). SIGNS is reinventing voice – one gesture at a time. Many people with hearing loss use their hands to speak. And that’s all they need to talk to SIGNS. How's the weather tomorrow? Change lights to blue. Find an Italian restaurant. Just speak, and SIGNS will answer.

Describe the execution

Many people with hearing loss use their hands to speak. This is their natural language. Their hands are their voice. However, voice assistants use natural language processing to decipher and react only to audible commands. No sound means no reaction. SIGNS bridges the gap between deaf people and voice assistants by recognizing gestures and communicating directly with existing voice assistant services (e.g. Amazon Alexa, Google Assistant or Microsoft Cortana). How's the weather tomorrow? Change lights to blue. Find an Italian restaurant. Just speak, and SIGNS will answer.

SIGNS is based on an intelligent machine learning framework that is trained to identify body gestures with the help of an integrated camera. These gestures are converted into a data format that the selected voice assistant understands. The voice assistant processes the data in real time and sends an appropriate reply back to SIGNS. The answer is then immediately displayed, either as text or as visual feedback. SIGNS replaces voice assistants’ typical audio-based communication with visual output – and not only by displaying words. The visual interface of SIGNS fulfills various requirements that are necessary for an intuitive experience and follows the basic principles of sign language. To this end, the SIGNS dictionary was developed – a set of symbols inspired by the corresponding hand movements. Just like with other voice assistant devices, the user interacts naturally with the device.
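The pipeline described above (camera → gesture recognition → text command → voice assistant → visual reply) can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the project's actual code: the real system uses a trained machine learning model in place of the fixed vocabulary, and a commercial assistant API in place of the mock below. All names (`GESTURE_VOCABULARY`, `MockAssistant`, `signs_pipeline`) are hypothetical.

```python
# Illustrative sketch of the SIGNS round trip. In the real system, a
# trained classifier maps camera frames to gesture labels; here a fixed
# vocabulary stands in for it.
GESTURE_VOCABULARY = {
    0: "weather",
    1: "tomorrow",
    2: "lights",
    3: "blue",
}


def gestures_to_command(gesture_ids):
    """Convert recognized gesture IDs into a text command the assistant
    can parse -- replacing the audio channel with data the service
    understands."""
    return " ".join(GESTURE_VOCABULARY[g] for g in gesture_ids)


class MockAssistant:
    """Stand-in for a real service (e.g. Amazon Alexa or Google
    Assistant); a production integration would call the service's API."""

    def query(self, text):
        if "weather" in text:
            return "Sunny, 22 degrees"
        return "Sorry, I did not understand"


def signs_pipeline(gesture_ids, assistant):
    """Full round trip: gestures in, text answer out. The returned
    string would be shown on screen as text or visual feedback."""
    command = gestures_to_command(gesture_ids)
    return assistant.query(command)


print(signs_pipeline([0, 1], MockAssistant()))  # prints "Sunny, 22 degrees"
```

The key design point the sketch mirrors is that the assistant itself is unchanged: SIGNS only swaps the input channel (signs instead of speech) and the output channel (visuals instead of audio).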