Title | PETA "EYE TO EYE" |
Brand | PETA DEUTSCHLAND E.V. |
Product/Service | PETA |
Category | A02. Applied Innovation |
Entrant | KOLLE REBBE | ACCENTURE INTERACTIVE Hamburg, GERMANY |
Idea Creation | KOLLE REBBE | ACCENTURE INTERACTIVE Hamburg, GERMANY |
Idea Creation 2 | DEMODERN Hamburg, GERMANY |
Production | DEMODERN Hamburg, GERMANY |
Production 2 | MARKENFILM Hamburg, GERMANY |
Production 3 | GERMAN WAHNSINN Hamburg, GERMANY |
Credits
Fabian Frese | Kolle Rebbe GmbH | Chief Creative Officer |
Andreas Brunsch | Kolle Rebbe GmbH | Executive Creative Director |
Nicole Holzenkamp | Kolle Rebbe GmbH | Creative Director |
Christian Rentschler | Kolle Rebbe GmbH | Creative Director |
Lorenz Ritter | Kolle Rebbe GmbH | Creative Director |
Lea Zaydowicz | Kolle Rebbe GmbH | Account Manager |
Martin Bergmann | Kolle Rebbe GmbH | Art Director |
Max Wort | Kolle Rebbe GmbH | Copywriter |
Rachel Hoffmann | Kolle Rebbe GmbH | Agency Producer |
Ole Brand | Kolle Rebbe GmbH | Agency Producer |
Falko Tilgner | Kolle Rebbe GmbH | Editor, Grading |
Alexander Hildenberg | Kolle Rebbe GmbH | Cutter, Grading |
Alexander El-Meligi | Demodern GmbH | Managing Director, Creative Director |
Deborah Montag | Demodern GmbH | Project Manager, Digital Producer |
Daniel Harrison | Demodern GmbH | 3D Artist |
Mirko Wiedmer | Demodern GmbH | 3D Artist |
Bastian Hantsch | Demodern GmbH | 3D Artist |
Robin Janitz | Demodern GmbH | Designer, Art Director |
Michael Schmück | Demodern GmbH | UX Designer (mobile only) |
Christopher Baumbach | Demodern GmbH | Lead Creative Engineer |
Franziska Neu | Demodern GmbH | Creative Engineer |
Sebastian Schuchmann | Demodern GmbH | Creative Engineer |
Sam Bäumer | Demodern GmbH | Creative Engineer (mobile only) |
Michael Sturm | Demodern GmbH | Web Developer (mobile only) |
Juliane Geusendam | Markenfilm GmbH & Co. KG | Producer |
Frank Schlotterbeck | Markenfilm GmbH & Co. KG | Director |
Philipp Feit | German Wahnsinn GmbH | Audio Engineer |
Why is this work relevant for Innovation?
The world’s first real dialogue with an animal was a one-of-a-kind experience that activated people’s empathy in completely new ways. The eye-level encounter made users realise how deeply related humans and animals actually are. By combining the immersive power of VR with the live acting of a psychologically trained PETA activist, we created deep conversations with an animal for the first time. During the dialogue, the rabbit encouraged people to rethink the way they treat animals and to become active themselves. An innovative experiment that brought a new quality to animal rights communication and VR.
Background
PETA, the world's largest animal rights NGO, wanted to change its communication, because brutal images of tortured animals and an accusing tone of voice repel people instead of making them stop and think. The idea of Eye to Eye was to shift the communication from in-your-face to face-to-face in order to make people rethink their consumer behaviour in areas ranging from food to fashion and beauty, all of which are largely based on the suffering of living creatures. The production team consisted of concepters and copywriters as well as 3D artists and developers with roots in the gaming industry or CGI for film. This allowed us to combine the best of these sectors and find creative solutions for tough problems.
Describe the idea
What if mankind could exchange thoughts with an animal? What if it could ask us inconvenient questions and make us realise how deeply related animals and humans actually are? What would we say if the animal wanted to know why we kill and eat it? Empathy grows through personal dialogue: if you understand your counterpart, you open up to them and are more willing to take their point of view and change your own. The idea was to engage humans in an eye-level conversation with an animal, to make people rethink their consumer behaviour in areas ranging from food to fashion and beauty, all of which are largely based on the suffering of living creatures. And because rabbits are exploited for food, clothing, experimentation, hunting and entertainment, a bunny is the perfect ambassador for animal rights.
What were the key dates in the development process?
- When the rabbit had fur
- When the motion tracking system worked together with the rabbit model
- When the complete setup (with audio, graphics, etc.) was functional
- The psychological training with the PETA activist controlling the rabbit
- Watching the first users go through the experience
- Installing the setup in the truck
The biggest moments were when certain parts that we had worked so hard on finally came together. Seeing the rabbit with accurately groomed and nicely textured fur for the first time was one of those moments. Implementing the first working solution for real-time face and body tracking was another, because we finally knew how to tackle that challenge. By far the most exciting milestone during development was combining VR with live acting, when the motion tracking technology and the visually polished rabbit model finally worked together. Seeing the protagonist of our project come to life was thrilling. Finally, watching actual users go through the experience and seeing how well it worked was a relieving payoff for our hard work.
Describe the innovation/technology
Undoubtedly, the biggest challenge was the requirement that everything had to happen in real time. Because there didn't seem to be a similar project we could learn from, we had to invent solutions to completely new problems. The actor wears a Perception Neuron body-tracking suit and a helmet with a mounted GoPro camera. These are the key components of the motion tracking, whose data is fed into a separate computer for preprocessing. The processed tracking data is then transmitted to another computer responsible for rendering the 3D image of the experience. Both participants, the actor and the user, communicate via microphones; on the user's side, the microphone is well hidden and virtually invisible. The experience itself runs on a slightly customised version of the Unreal Engine, using NVIDIA HairWorks to render the rabbit's fur and NVIDIA VRWorks to increase rendering performance. Because we were aiming for top-notch visual quality, the rendering computer is equipped with two NVIDIA GTX 1080 Ti graphics cards. Although the rendering hardware is extremely powerful, we also had to optimise our own application (primarily shading and lighting) to achieve high visual quality while maintaining the necessary performance.
Describe the expectations/outcome
The idea was to engage people in a 3- to 5-minute chat with an animal. To our surprise, the average dialogue ended up lasting 12 minutes. People completely forgot that they were talking to a virtual creature. Their reactions were far more emotional than we had dared to hope: they opened up and talked about their beliefs, personal losses, fears and hopes. And they listened. Most of them described the experience as touching and said they would rethink their consumer behaviour; some even became activists themselves. With 3 million online views, the clip of the experiment also exceeded our expectations, and plenty of media buzz helped spread the rabbit's message even further.