
Tutorials: Creating a Metronome

jussi-kalliokoski edited this page Sep 15, 2011 · 3 revisions

In this tutorial, we’re going to create a metronome from a sample and sync it with some visual effects.

Part 1 - Playing a Sample

Audiolib.js has a built-in Sampler class that can process samples from wav files. In JavaScript, we can load audio files faster by embedding them in the page as data URIs. To use this method, we first need to base64-encode our wav file into a string. There are free base64 encoders available on the web that can encode wav files to strings. Because the encoded string is very long, it is best kept in an external JavaScript file for readability.

In the code below, a base64-encoded hi-hat sample is loaded from an external JavaScript file that assigns it to the variable mySample. We then decode the string and create a Sampler instance to load and play the sample once.
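As a rough sketch of the encoding step (assuming Node.js; the function and variable names here are illustrative, not part of audiolib.js), the external file can be generated like this:

```javascript
// Turn raw wav bytes into the source text of an external JS file that
// assigns the base64 string to a global variable, ready for atob().
function wavToJsSource(bytes, varName) {
    // Buffer.toString('base64') performs the actual encoding
    return 'var ' + varName + ' = "' + Buffer.from(bytes).toString('base64') + '";\n';
}

// Usage, assuming a hihat.wav next to this script:
// const fs = require('fs');
// fs.writeFileSync('mySample.js', wavToJsSource(fs.readFileSync('hihat.wav'), 'mySample'));
```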

// Base64 encoded string loaded from an external js file as variable mySample
// Decode the sample
var mySample = atob(mySample);

var dev, sampler;

function audioCallback(buffer, channelCount){
    // Fill the buffer with the sampler output
    sampler.append(buffer, channelCount);
}

window.addEventListener('load', function(){
    // Create an instance of the AudioDevice class
    dev = audioLib.AudioDevice(audioCallback /* callback for the buffer fills */, 2 /* channelCount */);
    // Create an instance of the Sampler class
    sampler = audioLib.Sampler(dev.sampleRate);
    // Load the sample to the sampler
    sampler.loadWav(mySample, true);
    // Trigger the sample
    sampler.noteOn(440);
}, true);

Part 2 - Looping a Sample

To loop our sample we can use the .addPreProcessing() method, which runs a callback once for every output sample, to check whether it's time to play the sample again. The speed of the metronome is defined by the number of beats per minute (bpm). The code below repeats the sample at 120 bpm and accents the first beat of each measure by playing the sample at a higher frequency.
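The arithmetic behind the per-sample counter can be pulled out into standalone helpers (these function names are ours, not part of audiolib.js): each sample advances the counter by tempo / 60 / sampleRate, so it reaches 1 exactly once per tick.

```javascript
function tickIncrement(sampleRate, tempo) {
    // fraction of a tick that passes during one output sample
    return tempo / 60 / sampleRate;
}

function samplesPerTick(sampleRate, tempo) {
    // number of samples between consecutive noteOn() calls
    return sampleRate * 60 / tempo;
}

// At 120 bpm and 44100 Hz: 44100 * 60 / 120 = 22050 samples per tick,
// i.e. one tick every half second.
```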

// Base64 encoded string loaded from an external js file as variable mySample       
// Decode the sample
var mySample = atob(mySample);

var tempo = 120,
    notesPerBeat = 4,
    tickCounter = 1,
    tick = 0,
    dev, sampler;

function audioCallback(buffer, channelCount){
    // Fill the buffer with the sampler output
    sampler.append(buffer, channelCount);
}

window.addEventListener('load', function(){
    // Create an instance of the AudioDevice class
    dev = audioLib.AudioDevice(audioCallback /* callback for the buffer fills */, 2 /* channelCount */);
    // Create an instance of the Sampler class
    sampler = audioLib.Sampler(dev.sampleRate);
    // Load the sample to the sampler
    sampler.loadWav(mySample, true);

    // The callback passed to addPreProcessing() is called before .generate()
    sampler.addPreProcessing(function(){
        // Advance tickCounter by one tick's fraction per sample;
        // when it reaches 1, it's time to trigger the sample again
        tickCounter += tempo / 60 / dev.sampleRate;

        if (tickCounter >= 1){
            tickCounter = 0;
            // Accent the first note of the measure (E5, 660 Hz),
            // play the other notes at A4 (440 Hz)
            this.noteOn(tick ? 440 : 660);
            tick = (tick + 1) % notesPerBeat;
        }
    });

}, true);

Part 3 - Syncing with Visual Effects

We can animate visual effects using a timer loop and keep them in sync with the audio by calculating the current tick from the .getPlaybackTime() function, which returns the current write position in samples and therefore tells us how long the audio has been playing.
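The conversion the animation loop performs can be sketched as a standalone helper (the function name is ours): playback position in samples, divided by the sample rate, gives elapsed seconds, which maps to a tick index at the current tempo.

```javascript
// ~~ truncates toward zero, acting as Math.floor for non-negative values
function currentTickAt(playbackSamples, sampleRate, tempo) {
    return ~~(playbackSamples / sampleRate / 60 * tempo);
}
```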

For the animation loop, the code below uses Sink's .doInterval() function, an optimized timer comparable to setInterval(). The loop updates the page title with the current beat.

// Base64 encoded string loaded from an external js file as variable mySample       
// Decode the sample
var mySample = atob(mySample);

var tempo = 120,
    notesPerBeat = 4,
    tickCounter = 1,
    tick = 0,
    fps = 60,
    dev, sampler;

function audioCallback(buffer, channelCount){
    // Fill the buffer with the sampler output
    sampler.append(buffer, channelCount);
}

window.addEventListener('load', function(){
    // Create an instance of the AudioDevice class
    dev = audioLib.AudioDevice(audioCallback /* callback for the buffer fills */, 2 /* channelCount */);
    // Create an instance of the Sampler class
    sampler = audioLib.Sampler(dev.sampleRate);
    // Load the sample to the sampler
    sampler.loadWav(mySample, true);

    // The callback passed to addPreProcessing() is called before .generate()
    sampler.addPreProcessing(function(){
        // Advance tickCounter by one tick's fraction per sample;
        // when it reaches 1, it's time to trigger the sample again
        tickCounter += tempo / 60 / dev.sampleRate;

        if (tickCounter >= 1){
            tickCounter = 0;
            // Accent the first note of the measure (E5, 660 Hz),
            // play the other notes at A4 (440 Hz)
            this.noteOn(tick ? 440 : 660);
            tick = (tick + 1) % notesPerBeat;
        }
    });

}, true);

var currentTick;

Sink.doInterval(function(){
    // Derive the current tick from the playback position,
    // or use -1 until the output device has been initialized
    currentTick = dev ? ~~(dev.getPlaybackTime() / dev.sampleRate / 60 * tempo) : -1;
    document.title = (currentTick % notesPerBeat) + 1;
}, 1000/fps);