Frame by frame video effects using HTML5 canvas and video

27 Sep 2012 by David Corvoysier

As brilliantly illustrated in a famous craftymind article, the HTML5 video element can be used as a source to draw frames into a canvas element and perform live video post-processing.

This post gives another demonstration of what can be achieved with this powerful technique, applying live image filters and effects to a running video.

Design principles

The frames rendered by a video element are captured at regular intervals into a canvas element used as a framebuffer, and the resulting images are post-processed in JavaScript before being displayed on-screen in another canvas used as a viewport.
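In essence, each iteration boils down to three canvas calls: draw the current video frame into the framebuffer, grab the pixel data, and write the processed pixels back to the viewport. A minimal sketch of that pipeline (variable names are illustrative, not taken from the demo code below):

// Minimal sketch of the capture/process/display pipeline
// (bufferCtx and viewportCtx are the 2D contexts of the two canvases)
bufferCtx.drawImage(video, 0, 0, width, height);
var frame = bufferCtx.getImageData(0, 0, width, height);
// ... manipulate frame.data (an RGBA byte array) here ...
viewportCtx.putImageData(frame, 0, 0);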

I use the JSManipulate library by Joel Besada for frame post-processing.

It is an image filter and effects library written in JavaScript for client-side manipulation of images on a web page.
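Each filter is exposed as a named property of the global JSManipulate object, which is what the setEffect method below relies on. That also makes it easy to enumerate the available effects, for instance to populate a selection list (a sketch, assuming every enumerable property of JSManipulate is a filter; the select element and the fc instance refer to the initialization code further down):

// Populate a <select> with the available JSManipulate filters
// (assumes a <select id="effect"> element in the page)
var select = document.getElementById("effect");
for (var name in JSManipulate) {
    var option = document.createElement("option");
    option.value = name;
    option.text = name;
    select.appendChild(option);
}
// Switch the effect when the user picks one
select.addEventListener("change", function() {
    fc.setEffect(this.value);
}, false);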

Implementation details

The code for this demonstration looks like this:

function frameConverter(video, canvas) {

    // Set up our frame converter
    this.video = video;
    this.viewport = canvas.getContext("2d");
    this.width = canvas.width;
    this.height = canvas.height;
    // Create the frame-buffer canvas
    this.framebuffer = document.createElement("canvas");
    this.framebuffer.width = this.width;
    this.framebuffer.height = this.height;
    this.ctx = this.framebuffer.getContext("2d");
    // Default video effect is blur
    this.effect = JSManipulate.blur;
    // This variable is used to pass ourselves to event callbacks
    var self = this;
    // Start rendering when the video starts playing
    this.video.addEventListener("play", function() {
        self.render();
    }, false);

    // Change the image effect to be applied
    this.setEffect = function(effect) {
        if (effect in JSManipulate) {
            this.effect = JSManipulate[effect];
        }
    };

    // Rendering call-back
    this.render = function() {
        // Stop rendering when the video is paused or has ended
        if (this.video.paused || this.video.ended) {
            return;
        }
        this.renderFrame();
        // Render a new frame every 10 ms
        setTimeout(function () {
            self.render();
        }, 10);
    };

    // Compute and display the next frame
    this.renderFrame = function() {
        // Acquire a video frame from the video element, scaled to the canvas size
        this.ctx.drawImage(this.video, 0, 0, this.video.videoWidth,
                           this.video.videoHeight, 0, 0, this.width, this.height);
        var data = this.ctx.getImageData(0, 0, this.width, this.height);
        // Apply image effect
        this.effect.filter(data, this.effect.defaultValues);
        // Render to viewport
        this.viewport.putImageData(data, 0, 0);
    };
}

// Initialization code
var video = document.getElementById("video");
var canvas = document.getElementById("canvas");
var fc = new frameConverter(video, canvas);
...
// Change the image effect applied to the video
fc.setEffect('edge detection');
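One caveat: getImageData() throws a security error if the canvas has been tainted by cross-origin content, so the video must be served from the same origin as the page, or with CORS headers, in which case the crossOrigin attribute should be set on the video element (only needed for cross-origin videos):

// Only needed when the video comes from another origin that sends CORS headers
video.crossOrigin = "anonymous";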

As you can see, the prerequisites are to either reference or create a video element to be used as a source of video frames, and a canvas element to be used as a viewport to display the transformed frames.
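The corresponding markup can be as simple as the following (a sketch; the ids match the initialization code above, while the source URL and dimensions are placeholders):

<video id="video" src="video.webm" controls></video>
<canvas id="canvas" width="480" height="270"></canvas>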

The frame-by-frame conversion is performed here by a dedicated JavaScript object (frameConverter) that takes this source and viewport as parameters.

The frameConverter object dynamically allocates a new canvas element to be used as a framebuffer (note that this element doesn't need to be inserted into the DOM).

As soon as the frameConverter detects that the video is playing in the video element, it starts the rendering loop, which performs the following tasks every 10 ms (an alternative scheduling approach is sketched after the list):

  • transfer the current video frame from the video element to the framebuffer canvas,
  • grab the frame data from the framebuffer,
  • apply the selected image effect to the frame data,
  • put the transformed frame into the viewport canvas.
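The fixed 10 ms timeout is the simplest approach; on browsers that support it, requestAnimationFrame could be used instead to let the browser schedule rendering in sync with the display refresh. A minimal sketch of such an alternative render callback (not what the demo uses):

    // Alternative rendering call-back using requestAnimationFrame
    this.render = function() {
        if (this.video.paused || this.video.ended) {
            return;
        }
        this.renderFrame();
        var self = this;
        // Let the browser schedule the next frame
        requestAnimationFrame(function () {
            self.render();
        });
    };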
Click on the image to see the Live Demo