John Tregoning


Bullet Time SFX Using Nothing but Web Tech

bulletTime.js is a web application that attempts to replicate the 'Bullet Time' special effect famously introduced in the movie The Matrix.

Example output from bulletTime.js

The proliferation of advanced web technologies, mainly getUserMedia/WebRTC and Web Sockets, has created a unique opportunity to build a web application that can potentially replicate the famous effect.

During Netflix's Hack Day, I had the opportunity to spend a few hours developing an early prototype of the concept. The main idea is to use an array of laptops and take advantage of their webcams, native displays and connectivity features.


The infrastructure is composed of a master machine (which runs the dashboard and fires the trigger) and an array of workers, one per camera:

bulletTime.js Infrastructure Diagram. A cool-looking diagram that doesn't tell you much.

First Challenge: Setup

The physical setup of the machines is the trickiest part: the correct angle (pitch and yaw) of each device is crucial (otherwise the effect breaks and you end up with something like this).

The master machine needs to be aware of the position (index) of each worker in order to interlace the photos in the correct order. Each worker, in turn, needs to grant webcam access permission via the browser.

I attempted to solve some of these challenges by dynamically allocating an index to each worker/camera every time a worker page is loaded. The page displays its index in a huge font that takes up practically the whole screen (100vh). Furthermore, each worker that gets added to the system instantly shows up on the master dashboard, which also conveniently displays the status of each worker, stating whether access to the webcam has been granted or not.
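For illustration, here is a minimal sketch of how that index allocation could be wired up, assuming Socket.IO on both ends. The event names ('register', 'index', 'roster') and the master.local host are made up for the example; they are not taken from the actual bulletTime.js source.

// server.js -- hypothetical sketch of the master's allocation logic
var io = require('socket.io')(8080);
var workers = [];

io.on('connection', function (socket) {
  socket.on('register', function () {
    workers.push(socket.id);
    socket.emit('index', workers.length - 1); // tell the worker who it is
    io.emit('roster', workers.length);        // let the dashboard update
  });
});

// worker.js -- runs in the worker's browser (Socket.IO client loaded)
var socket = io('http://master.local:8080');
socket.emit('register');
socket.on('index', function (i) {
  var el = document.getElementById('index');
  el.style.fontSize = '100vh'; // the index takes up the whole screen
  el.textContent = i;
});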

Setup example (in this video all the workers are on the same machine; obviously, when actually doing this, each worker needs to be on a different device):

Second Challenge: Synchronization

Triggering all the photos at the exact same moment on all devices is paramount for the effect to work (here is a somewhat comedic example of what happens when there is a small delay between shots). Achieving synchronicity between all the machines turned out to be very difficult, and it looks like it's known to be a hard problem.

I tried a few different techniques to overcome this issue:


Sound (DTMF tones)

I attempted to use sound as a trigger, in the hope of removing network latency/congestion from the problem. I ended up generating a DTMF tone on my trigger device and running this JavaScript implementation of the Goertzel algorithm on each worker device. The idea was to process the incoming audio from the microphone, waiting for the specific DTMF tone. Unfortunately, this proved to be the most unreliable method: different machines processed the incoming sound at vastly different speeds (I'm guessing due to CPU/microphone differences).
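For the curious, the core of the Goertzel algorithm is tiny. This is a from-scratch sketch of measuring the power of one target frequency in a block of samples, not the implementation linked above; a real DTMF detector checks a row/column frequency pair, and the 697 Hz / 44100 Hz numbers below are just examples.

// Power of one target frequency in a block of audio samples (Goertzel)
function goertzelPower(samples, targetFreq, sampleRate) {
  var coeff = 2 * Math.cos(2 * Math.PI * targetFreq / sampleRate);
  var sPrev = 0, sPrev2 = 0;
  for (var i = 0; i < samples.length; i++) {
    var s = samples[i] + coeff * sPrev - sPrev2;
    sPrev2 = sPrev;
    sPrev = s;
  }
  return sPrev * sPrev + sPrev2 * sPrev2 - coeff * sPrev * sPrev2;
}

// Fed with blocks of microphone samples (e.g. from a Web Audio
// ScriptProcessorNode), the trigger test boils down to:
// if (goertzelPower(block, 697, 44100) > THRESHOLD) takeShot();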

NTP (Network Time Protocol)

Another technique I attempted was relying on NTP: basically, giving all the machines a specific time in the near future (a couple of seconds ahead) at which they needed to take the shot. Unfortunately, it looks like NTP in practice does not provide millisecond-level precision (at least over WiFi).
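The scheduling part itself is trivial; the hard part is the shared clock. A sketch, assuming (optimistically) that every machine's clock is NTP-synced; workerSockets and takeShot are placeholders:

// master: pick a wall-clock moment ~2 seconds out, send it to all workers
var shootAt = Date.now() + 2000;
workerSockets.forEach(function (ws) {
  ws.send(JSON.stringify({ shootAt: shootAt }));
});

// worker: fire when that shared moment arrives on the local clock
socket.onmessage = function (msg) {
  var shootAt = JSON.parse(msg.data).shootAt;
  setTimeout(takeShot, Math.max(0, shootAt - Date.now()));
};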

Web Sockets

This is the technique I ended up using. It was somewhat flaky, since the devices were connected via WiFi and network traffic vastly affected how precise the synchronization was. It did, however, produce acceptable results, even if it was not super reliable.
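In its simplest form, the approach looks roughly like this on the worker (raw Web Sockets shown here for brevity; the video element is assumed to be already playing the getUserMedia stream, and the 'shoot' message and master.local host are illustrative):

// worker: wait for the trigger, then grab a frame from the webcam stream
var socket = new WebSocket('ws://master.local:8080');
socket.onmessage = function (msg) {
  if (msg.data === 'shoot') takeShot();
};

function takeShot() {
  var video = document.querySelector('video'); // playing the webcam stream
  var canvas = document.createElement('canvas');
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext('2d').drawImage(video, 0, 0);
  socket.send(canvas.toDataURL('image/jpeg')); // ship the frame to the master
}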

Potential Alternatives

As explained, Web Sockets was the least crappy solution. There are still a few more things that I would like to try.

Third Challenge: Webcams

While a quick trip to Netflix's IT department granted me 7 Macs + mine (4 Retina Pros, 3 Airs, 1 13-inch Pro... how cool is Netflix's IT department?), it turned out that the webcams' quality, colors/hues and distortion were all different (even within the same models).

This is most likely because the laptops were 'loaners' and had been heavily used. Unfortunately, I didn't notice this while shooting; as a result, I ended up manually removing the bad frames from the GIF after the fact. Here are a few examples of bad frames and how they affect the final result: reference frame, bad quality/blurry frame, bad color/hue, and the final result (including the bad frames).


There are a few things that I'm planning on improving in the near future. One of the most important is to change what each worker's screen displays: the idea is to show a semi-transparent feed of what the direct sibling (n-1) worker's device is seeing, overlaid on top of the worker's own camera feed (see the sketch below). This would go a long way toward making the setup easier and the final result less jumpy.
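A sketch of that overlay, assuming the sibling's feed can be piped in somehow (e.g. over a WebRTC peer connection, which is omitted here) and that the page has two stacked video elements; the element ids are made up:

// 'local' shows this worker's camera, 'sibling' shows worker n-1's feed
var local = document.getElementById('local');
var sibling = document.getElementById('sibling');

// era-appropriate (prefixed) getUserMedia + createObjectURL
var gUM = navigator.getUserMedia || navigator.webkitGetUserMedia ||
          navigator.mozGetUserMedia;
gUM.call(navigator, { video: true }, function (stream) {
  local.src = URL.createObjectURL(stream);
}, console.error);

// ghost the sibling feed on top of the local one for alignment
sibling.style.position = 'absolute';
sibling.style.top = '0';
sibling.style.left = '0';
sibling.style.opacity = '0.4';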

Using phones would yield substantially better results for a few reasons: they have much better cameras than laptops, it's easier to find several units of the exact same model/version and, finally, phones drastically reduce the distance between lenses (thus making the effect smoother).

Android already supports the required web technologies; unfortunately, Safari does not, leaving iPhones out of the fun (at least for now).

Isn't the Open Web awesome? This whole thing was designed, coded and field-tested during a hack day. I'm pretty sure I can get significantly better results with a few tweaks. I will report back whenever I get the time/opportunity to try again (if you try it, please let me know).

Finally, here are a few photos of the actual setup and a "best of" video. The source code for bulletTime.js is available on GitHub here; pull requests are, obviously, welcome.

Facebook's Paper photo tilt in HTML5

Facebook will be releasing their new Paper app in a few days. After watching their preview video, I was struck by the new gestures/UX it introduces.

Facebook's Paper app in action

While watching the video, I kept thinking about how I could replicate some of its features using plain HTML, CSS and JavaScript.

I decided to start with the photo tilt feature; it turned out to be relatively simple to implement (at least based on what I have seen in the video).

I just pushed to GitHub a tool/hack called PhotoTilt which, given a photo and a container, replicates Paper's tilt functionality.
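The gist of the effect is small: listen to deviceorientation and pan an overflowing photo by the tilt angle. Below is a simplified sketch of the idea, not the actual PhotoTilt source; the ±45° clamp and element id are arbitrary choices.

// the photo is wider than its container; tilting the device pans it
var photo = document.getElementById('photo');
var maxPan = photo.offsetWidth - photo.parentNode.offsetWidth;

window.addEventListener('deviceorientation', function (e) {
  if (e.gamma === null) return;       // no sensor data (e.g. desktop)
  // e.gamma is the left/right tilt in degrees, roughly -90..90
  var ratio = Math.min(Math.max(e.gamma / 45, -1), 1);
  var x = ((ratio + 1) / 2) * maxPan; // map -45..45 degrees to 0..maxPan
  photo.style.webkitTransform = 'translateX(' + -x + 'px)';
  photo.style.transform = 'translateX(' + -x + 'px)';
});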

One advantage that native apps currently have over the mobile web is that, on the web, there is no way to stop the device from switching orientation (portrait/landscape) when you tilt it too far. There is an experimental feature, Screen.lockOrientation, but it currently only works in Firefox, and only for web pages in full-screen mode.
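For reference, the experimental call looks like this (prefixed, and honoured only by Firefox when the page is full-screen):

// attempt to lock the screen to portrait; silently does nothing elsewhere
var lock = screen.lockOrientation || screen.mozLockOrientation ||
           screen.msLockOrientation;
if (lock) lock.call(screen, 'portrait-primary');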

Finally, here is a working demo. Make sure you test it on a device with a triaxial accelerometer (i.e. a phone/tablet), with its orientation locked in portrait mode.


Performance Comparison of Web Map Services

I have been working on a little side project in my spare time that uses maps. I was appalled at the performance of my web app on mobile (especially when using cell data). I did a little digging and found that most of the lag was not coming from my app; instead, it was coming from Google Maps which, on my Nexus phone, was loading over 1MB of images and 4 custom web fonts, making a whopping total of 47 HTTP calls.

Don't get me wrong, I am a fan of Google Maps, but for such a key/mainstream product I find it surprisingly heavy. I decided to look into different map providers, do a bit of measuring and see how well other providers stack up against Google Maps.

In the interest of fairness, it should be noted that the feature sets of these providers are not the same. However, I tested/measured what I believe to be the most common map use-case: displaying a map around my location and highlighting things with pin drops/markers.
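For reference, that use-case boils down to a handful of lines in each library. Here is what it looks like with Leaflet (which Mapbox's library builds on); the tile URL and coordinates are placeholders, not the pages I actually measured:

// display a map around a location and drop a marker on it
var map = L.map('map').setView([37.775, -122.418], 13);
L.tileLayer('https://{s}.tiles.example.com/{z}/{x}/{y}.png', {
  attribution: 'Map data © the provider'
}).addTo(map);
L.marker([37.775, -122.418]).addTo(map);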

These are the different providers (and actual pages) that I used for the test. All these maps look and feel the same:

Note: the chart data does not include or take into account the page the map was loaded from (i.e. the chart data is purely the map data).

Based on these numbers, I plan to refactor my app to use Mapbox instead of Google Maps as, at least for now, it outperforms the latter significantly.

Finally, if you want to see some raw numbers, I have published this spreadsheet. I have also made the source of the different map pages used for this test available on GitHub here.

Maps visual comparison

The Origin of YouTube's Logo

I didn't know that YouTube's logo was inspired by the Caltrain logo. It is remarkable how similar both logos are, especially when comparing them side by side.

YouTube logo Caltrain logo

Source: an interview with Chad Hurley (YouTube co-founder); you can see the exact moment when he mentions this in this video.


JavaScript Lint Pre-commit Script

I've created a Git pre-commit script called "Sgt Donowitz" (named after the character in the movie Inglourious Basterds). The script enforces JavaScript code quality and consistency using JSHint.

The script runs all new or modified JavaScript files through JSHint on every commit; if it finds any issues, it simply aborts the commit.
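Like any Git pre-commit hook, it lives at .git/hooks/pre-commit and blocks the commit by exiting non-zero. Here is a rough sketch of the same mechanism as a Node script using the jshint module; this is not the actual Sgt Donowitz source, and the inline options stand in for whatever your .jshintrc contains.

// pre-commit sketch: lint the staged .js files, abort the commit on errors
var execSync = require('child_process').execSync;
var fs = require('fs');
var JSHINT = require('jshint').JSHINT;

var staged = execSync('git diff --cached --name-only --diff-filter=ACM')
  .toString().trim().split('\n')
  .filter(function (f) { return /\.js$/.test(f); });

var failed = false;
staged.forEach(function (file) {
  var ok = JSHINT(fs.readFileSync(file, 'utf8'), { undef: true, eqeqeq: true });
  if (!ok) {
    failed = true;
    JSHINT.errors.forEach(function (err) {
      if (err) console.log(file + ':' + err.line + ' ' + err.reason);
    });
  }
});

process.exit(failed ? 1 : 0); // a non-zero exit aborts the commit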

Sgt Donowitz in LEGO form. Photo by pasukaru76.

You can configure what rules you want to enforce/ignore by using the config file ".jshintrc" (see here for all the available options).

You can also set rules to ignore any file or directory (e.g. 3rd party libraries) by using ".jshintignore" (it works similarly to ".gitignore" files).

Nowadays you can integrate JS linting into most IDEs. While that helps, there is nothing stopping you from committing code that doesn't pass the linter. That's where Sgt Donowitz comes in: it ensures code goodness remains high at all times by blocking bad/rushed code from being committed.

In a nutshell, Sgt Donowitz will force you to keep your JS code quality high and consistent. To set up the pre-commit script, simply run the following command from your project's root:

curl -s | sh

The source code is published on GitHub; feel free to tweak it, create an issue or leave a comment... In the meantime: "The beatings will continue until morale improves".
