Timing audio "events" in a musical context
When writing audio for use in a musical context, you're likely to think using paradigms such as bars/measures, 16th notes, quavers, etc. All of these paradigms are linked to the concepts of meter, time signature and tempo.
However, translating a musical piece consisting of two bars in 4/4 time at 120 BPM into the digital domain is a job that requires accurate positioning. This can be done either in buffer samples or in seconds (delta time), depending on preference. Under the hood MWEngine will always define a range in samples (as the Sequencer can query such ranges more easily).
Calculate the number of samples necessary for storing the given length of time in milliseconds at the given sample rate (in Hz):
millisecondsToBuffer( int milliSeconds, int sampleRateInHertz )
or
secondsToBuffer( float seconds, int sampleRateInHertz )
The above implies you're translating musical events in (milli)seconds (for instance: a sixteenth note at 120 BPM lasts for 125 ms). Under the hood, the engine will translate this to a sample accurate range by use of the following math:
int samplesPerBeat = ( int ) (( sampleRateInHertz * 60 ) / tempoInBPM );
int samplesPerBar = samplesPerBeat * timeSigBeatAmount;
int samplesPerSixteenth = ( int ) ( samplesPerBeat / 4 ); // 1/4 of a beat being a 16th
Note the concept of time signature (with timeSigBeatUnit being the "4" and timeSigBeatAmount being the "3" in a time signature of 3/4).
The way to write timed audio "events" into the output buffer is by keeping track of the "playback position" in your buffer callback, i.e. each time you write a buffer, you increment the playback position by the length of that buffer. This is done by the Sequencer class. So, returning to a musical context: if you were looping a single bar of 4/4 time at 120 BPM, once the playback position exceeds ( sampleRateInHertz * 60 / 120 BPM ) * 4 beats = 88200 samples (at 44.1 kHz), you reset it to 0 to "loop" from the beginning.
So let's assume you have two "events" of audio that occur in a sequence of a single bar of 4/4 time at 120 BPM. One event is to play on the 1st beat of the bar and lasts for a quaver (1/8 of the bar), and the other is to play on the 3rd beat of the bar and lasts for another quaver. These two "events" would have the following properties. For the first event:
int start = 0; // buffer position 0 is at the 1st beat/start of the bar
int length = 11025; // 1/8 of the full bar size
int end = 11025; // start + length
and the second event:
int start = 44100; // 3rd beat (or half-way through the bar)
int length = 11025;
int end = 55125; // start + length
The actual render loop (in AudioEngine) keeps an eye on the Sequencer's "playback position" to determine which events are to be rendered in the current buffer rendering iteration.
If, for instance, the buffer size is defined as 512 samples per buffer, the second event in this example (which starts at offset 44100) will first be rendered on the 87th iteration of the loop. To be precise: the "playback position" on this iteration will start at 44032 samples (and end at 44544), meaning that for this event ( buffer size 512 - ( start offset 44100 - playback position 44032 )) = 444 samples will be written during this iteration.
A further ( event length 11025 - 444 ) = 10581 samples will be written until the 108th iteration of the render loop (when the Sequencer reaches "playback position" 55296, 171 samples after the last sample of the event has been written into the output buffer).
As you can see, we're dealing with large numbers (and this is just for a single measure!) and endless iterations of the render loop. Luckily, the convenience math mentioned above and the BufferUtility class ensure you don't need to consciously think about these figures when using the basic MWEngine classes to create an application.
For convenience, MWEngine comes with a BufferUtility class which provides methods that transform time-based concepts into sample-based values and vice versa. This utility can be used whenever you are introducing new custom code to the library.
In practice, it will likely suffice to use the basic methods provided by the AudioEvent class, which allow you to directly define the playback range in either samples or seconds.
By understanding the sample-based method, you can quickly create your own custom helper functions to time audio events in a format of your liking, for instance floating-point beats.