Renault Megane Experience

A Renault Megane car on a country road.

For the launch of the All-new Renault Megane, we created an interactive experience, led by We Are Social, built around the baseline of the new TV spot: “Activate your sensations”. It uses real test-drive data to give an IRL meaning to the whole Renault Megane Experience campaign. Hundreds of drivers performed three tests on race tracks with the new Renault Megane; we analyzed and merged the data we obtained from the car and the drivers. The test drives can be viewed on data.nouvellemegane.renault.fr

When you view a test drive, you see real-time data from both the car and the driver. We designed a specific front-end and back-end architecture so that the user can switch to another test driver at any time, based on the emotions they want to see.

Dataviz of Renault Megane's collected data while driving.


Front-end architecture and data

For the last couple of projects, our front-end architecture has been component-based, using VueJS. This allows us to slice the project into multiple pieces and work efficiently:

  • composability / separation of concerns
  • maintainability (if a piece is broken, just replace that piece)
  • reusability

FLUX model.

For the data flow, we used the FLUX model. It is a good choice when you have a lot of data to handle and display across many views: the model gives you a single source of truth for the data, to which all the views are connected. Whenever the “store” of data is updated, every view connected to it updates as well.
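The single-source-of-truth idea can be sketched in a few lines. This is a deliberately minimal, homemade store for illustration, not the production code of the project:

```javascript
// Minimal sketch of a FLUX-style store: one source of truth that
// notifies every connected view whenever the data is updated.
function createStore(initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    // Views subscribe to the store; they all render from the same state.
    subscribe(listener) { listeners.push(listener); },
    // Updates go through a single entry point, then fan out to views.
    dispatch(update) {
      state = Object.assign({}, state, update);
      listeners.forEach((listener) => listener(state));
    },
  };
}

// Example: two "views" connected to the same driving-data store.
const drivingStore = createStore({ speed: 0, heartRate: 0 });
drivingStore.subscribe((s) => console.log(`speedometer: ${s.speed} km/h`));
drivingStore.subscribe((s) => console.log(`heart monitor: ${s.heartRate} bpm`));
drivingStore.dispatch({ speed: 87, heartRate: 112 });
```

In the real project the views are VueJS components, but the principle is the same: update the store once, and every connected view refreshes.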

Video sync

TL;DR: You cannot rely on HTML5 video events.

Video synchronization is a key feature of the project, as we display two videos of the same person shot with different cameras. We first tried to synchronize them by listening to the video events to detect when a video was buffering. Sounds like a good idea.
The reality is:
- The suspend event was our biggest hope, but it does not behave as it should.
- Once the video has started, the canplay / canplaythrough / loadedmetadata events are emitted, but never again.
- The pause event is triggered only when the user pauses the video, not when the video pauses because of buffering.

You can check those behaviors here. Now add browser support on top of that and…

GIF of a giant panda trashing the office

Tricks to the rescue

We ended up with two solutions:

  • on each frame (requestAnimationFrame), continually check the currentTime against the buffered length
  • on each frame, check whether the current timestamp minus the last time the video was updated is below a certain threshold

GIF of former President Obama saying "not bad".

Both solutions worked as expected across different browsers.
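The second trick can be reduced to a pure helper. The function and threshold below are our own illustration of the idea, not the project's actual code: if the video's clock stops advancing while the wall clock keeps running, the video is buffering.

```javascript
// Sketch of the "wall clock vs. media clock" stall detector.
// Assumed threshold; tune it per project and network conditions.
const STALL_THRESHOLD_MS = 250;

function createStallDetector(thresholdMs = STALL_THRESHOLD_MS) {
  let lastMediaTime = -1;
  let lastProgressAt = 0;
  // Call once per animation frame with the video's currentTime
  // (seconds) and a wall-clock timestamp (ms). Returns true when the
  // media clock has been frozen for longer than the threshold.
  return function isStalled(mediaTimeSec, nowMs) {
    if (mediaTimeSec !== lastMediaTime) {
      lastMediaTime = mediaTimeSec;
      lastProgressAt = nowMs;
      return false; // playback is advancing
    }
    return nowMs - lastProgressAt > thresholdMs;
  };
}
```

In the page, a requestAnimationFrame loop would run one detector per `<video>` element (feeding it `video.currentTime` and `performance.now()`) and pause the other video whenever one of them stalls, so the two stay in sync.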

Front-end performance

GPU

With data sync and video sync running on each frame, the CPU can quickly be overwhelmed, so we had to move as many animations/transitions as possible to the GPU:
- WebGL to render the particles (with a canvas fallback)
- every DOM node that has to move on screen moves using CSS transitions/animations

Two animated graphs.

Hosting architecture

As with all our projects, we use Amazon Web Services (AWS) to host our websites. AWS provides the set of services we needed for this experience in order to:
- import all the test drivers' videos
- import all the data from the test drives
- create a fast and reliable website and API

We put all the raw data in an S3 bucket (the source bucket). This bucket triggers events that execute Lambda functions. AWS Lambda lets you run code without provisioning or managing servers: you pay only for the compute time you consume, and there is no charge when your code is not running. With Lambda, you can run code for virtually any type of application or backend service, all with zero administration.
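The S3-to-Lambda wiring looks roughly like this. The event shape is the standard S3 notification format; the routing logic itself is only illustrative:

```javascript
// Extract the uploaded objects from a standard S3 notification event.
function extractUploads(s3Event) {
  return s3Event.Records.map((record) => ({
    bucket: record.s3.bucket.name,
    // S3 delivers object keys URL-encoded, with spaces as '+'.
    key: decodeURIComponent(record.s3.object.key.replace(/\+/g, ' ')),
  }));
}

// A Lambda handler would then route each upload to the right job,
// e.g. video files to transcoding, JSON files to the data splitter.
const handler = async (event) => {
  for (const { bucket, key } of extractUploads(event)) {
    console.log(`new object: s3://${bucket}/${key}`);
  }
};
```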

Video

For the videos, the Lambda functions create jobs on Elastic Transcoder. We created a specific pipeline and different presets to transcode the videos into various resolutions/bitrates, which lets us transcode a large set of videos in a short time without managing EC2 instances. Once a video is transcoded, it is uploaded to a different S3 bucket and served to the end user through CloudFront (CDN).

Graphic of the AWS infrastructure.

JSON

For all the data provided by the sensors (both car and driver sensors), we obtain a large JSON file (15 MB to 25 MB) for each session. We automatically split each JSON file into smaller files and calculate the average data per test. For each test driver, we sample the sensors every 100 ms. A 100 ms dataset looks like this:

{"heart_rate": 0, "rpm": 862, "percent": 0.0, "neutral": 76, "engine_load": 32, "braking_position": 0, "throttle_position": 3, "surprise": 20, "fear": 3, "speed": 0, "happiness": 1}

We get a large dataset for each driver (nearly 1,000 rows per driver).
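The "average data per test" step mentioned above can be sketched as follows; this is an assumed implementation, showing only the aggregation idea:

```javascript
// Given the list of 100 ms samples for one driver, compute the average
// of each numeric field so a whole test collapses into a single row.
function averageSamples(samples) {
  const sums = {};
  for (const sample of samples) {
    for (const [field, value] of Object.entries(sample)) {
      sums[field] = (sums[field] || 0) + value;
    }
  }
  const averages = {};
  for (const [field, total] of Object.entries(sums)) {
    averages[field] = total / samples.length;
  }
  return averages;
}

// e.g. two consecutive samples shaped like the dataset above:
const testAverage = averageSamples([
  { rpm: 862, speed: 0, surprise: 20 },
  { rpm: 1200, speed: 12, surprise: 40 },
]);
// testAverage.rpm === 1031, testAverage.speed === 6, testAverage.surprise === 30
```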

All of the data is imported with Lambda functions into our DynamoDB database. Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit-millisecond latency at any scale. It is a fully managed cloud database and supports both document and key-value store models.

Graphic of the AWS infrastructure.

We use two DynamoDB tables:
- one where each row is a driver dataset sampled at 100 ms intervals
- one where each row holds the data for an entire driver test

The first table is used to find a test driver that matches the sensation the user requests. The second table is used to retrieve the entire test driver dataset for a specific test. To limit the transfer size, all the data from the second table is stored in a zipped format.
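The matching step can be illustrated like this. The real query runs against DynamoDB; here the same selection is shown over an in-memory array of per-test rows, with field names borrowed from the sample dataset above:

```javascript
// Given the emotion the visitor asked for, pick the test drive whose
// average score for that emotion is highest (illustrative only).
function findBestMatch(tests, emotion) {
  return tests.reduce((best, test) =>
    (test[emotion] || 0) > (best[emotion] || 0) ? test : best
  );
}

const testRows = [
  { driver: 'A', surprise: 20, fear: 3, happiness: 1 },
  { driver: 'B', surprise: 55, fear: 10, happiness: 7 },
];
// findBestMatch(testRows, 'surprise') returns driver B's test
```

Once a match is found, its full dataset is fetched from the second table by test id.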

API and website

The website and the API are hosted with Elastic Beanstalk, an easy-to-use AWS service for deploying and scaling web applications and services. The API retrieves the data from DynamoDB, and the videos are served through CloudFront.

Graphic of the AWS infrastructure.

We hope you enjoyed the read.

--

Client: Renault
Agency: We Are Social
Data provider: Lost Mechanics | Article about the data capture