Igor Zinken edited this page Jan 3, 2024 · 9 revisions

Please read "Understanding MWEngine's core actors" first to understand how AudioEvents relate to the engine.

AudioEvents should be seen as value objects, describing an action of an instrument. For instance: the AudioEvents of a SynthInstrument are notes that should be synthesized individually, the AudioEvents of a SampledInstrument are pre-recorded samples, etc. Events can start playing at any given moment (using play()/stop()) or can be sequenced to play back at a specific time.

When sequenced: an AudioEvent defines a moment in time at which the instrument should do something for a given length of time. During playback, the Sequencer will request events within a specific range ("time slot") from the instrument the event belongs to. If the event's lifetime is within the given range, the instrument will return the event so the Sequencer can make it audible in the audio engine.

Note that of the following classes, all those prefixed with "Base" should be extended prior to use; they describe and implement universal methods that should be shared between events of their type, but when used directly in an application you'll notice they remain frighteningly quiet!

BaseAudioEvent

The core class that describes the methods and interface for an AudioEvent (when referring to an AudioEvent, we're referring to a BaseAudioEvent or any of its inheriting classes).

constructor / destructor

BaseAudioEvent()

Creates a new instance of BaseAudioEvent. No buffer is available after construction (BaseAudioEvent is a core class and should be extended to be of any use within the MWEngine).

BaseAudioEvent( BaseInstrument* instrument );

Creates a new instance of BaseAudioEvent and instantly associates the given instrument as the Instrument that will play back the audio event (after addition to the sequencer or when triggering live playback using play() and stop()). Also see setInstrument().

~BaseAudioEvent()

C++ destructor which will invoke dispose() (see below, especially relevant for Java/Kotlin-only usage).

public methods

void play()

Instant playback of the Event by adding it as a live event of its instrument (which means: scheduled for immediate playback!). For SampleEvents: when the Event's sample has been played in its entirety (and hasn't been flagged as loopeable), it is automatically removed as a live event from the Sequencer. You can use this for live auditioning of the SampleEvent's contents, or for creating Akai MPC style drum pad pummeling applications ;) For (Base)SynthEvents: the Event will keep playing until stop() has been explicitly called.

void stop()

Immediately halts playback of an Event that has been triggered by using aforementioned play()-method.

addToSequencer()

Adds the AudioEvent to its associated instrument's event queue, so the Sequencer can make the event eligible for playback.

removeFromSequencer()

Removes the AudioEvent from the Sequencer.

int getEventLength()

Retrieves the length of the AudioEvent. More specifically, this is the length in samples for which the event is audible.

setEventLength( int value )

Sets the length of the AudioEvent's audible duration to the given value. Invoking this method will also set the eventEnd relative to the existing eventStart value.

int getEventStart()

Retrieves the offset at which the AudioEvent starts playing / becomes audible. This is relative to the Sequencer's range. This value describes the position in samples.

setEventStart( int value )

Sets the offset at which the AudioEvent starts playing / becomes audible to the given value. This offset is in samples; for positioning the event using musical concepts, see positionEvent().

int getEventEnd()

Retrieves the offset at which the AudioEvent stops playing / becomes inaudible. This is relative to the Sequencer.

setEventEnd( int value )

Sets the offset of the last audible sample for the AudioEvent; beyond this value the event stops playing / becomes inaudible. If an event should play in its entirety (for instance: playing all the samples in its buffer when dealing with a SampleEvent), invoking this method is unnecessary ( setEventStart() and setEventLength() will suffice ). If however only a subset of the full buffer contents should be played, you can set the event's end to be lower than the full value ( eventStart + eventLength ). Also, for loopeable SampleEvents, the event end can exceed the value of ( eventStart + eventLength ).

Example: let's assume the Sequencer tempo is 120 BPM in 4/4 time. The output sample rate is 44.1 kHz, meaning a single measure holds 88200 samples. We have an AudioEvent whose audible duration should last for a quarter measure. Its eventLength as such should be ( 88200 / 4 == ) 22050 samples. The maximum read pointer index for this range is ( ( 88200 - 1 ) == ) 88199 (we subtract 1 because samples start playing at offset 0).

We'd like the event to become audible on the fourth beat of the measure, i.e. three quarters in. Its eventStart should then be at ( round( 88199 / 4 * 3 ) == ) 66149 samples.

We don't want the event's audio to be cut off before having played for the total eventLength, as such its eventEnd should be at ( eventStart 66149 + eventLength 22050 == ) 88199 samples.

void positionEvent( int startMeasure, int subdivisions, int offset )

Positions the AudioEvent within the Sequencer using musical timing concepts. This results in strict "on the grid" timing; if you wish to use higher precision to create early / late drifting notes / swing, see setEventStart() and setEventEnd(). startMeasure describes in which measure the event starts playing (the 1st measure starts at 0), subdivisions describes the amount of "steps" we distinguish in each measure (e.g. 16 for 16 steps within a single measure), and offset describes the offset at which to position the event as a subset of the amount of subdivisions. For example: positionEvent( 0, 16, 4 ) positions the AudioEvent one quarter note into the first measure, whereas positionEvent( 1, 32, 4 ) positions the AudioEvent an eighth note into the second measure.

void setStartPosition( float value )
void setEndPosition( float value )
void setDuration( float value )

These are the equivalents of setEventStart(), setEventEnd() and setEventLength(). Instead of defining a position in samples, you can use seconds here.

float getStartPosition()
float getEndPosition()
float getDuration()

These are the equivalents of getEventStart(), getEventEnd() and getEventLength(). Instead of returning a value in samples, a value in seconds is returned. NOTE: when adjusting the tempo in the Sequencer, all existing playback range values are updated by the same ratio.

float getVolume()

Retrieves the output volume of the event. An AudioEvent can have its own volume relative to the AudioChannel it is rendered in. By default, MWEngine will mix AudioEvents into the output at their specified volume (see mixBuffer()), but you can extend this logic in your inheriting AudioEvent classes to, for instance: switch samples (to treat volume as "velocity" in SampleEvents) or synthesize audio with alternate properties (for instance: adding a percussive "snap" to loud SynthEvents and omitting this for events at a lower volume), etc.

setVolume( float value )

Sets the output volume of the event to value.

bool isEnqueuedForRemoval()

Whether the AudioEvent instance can be removed on the next Sequencer collection step (also see removeEvent in BaseInstrument). This is used by the Sequencer and there is no real reason to query this in your custom engine implementation.

enqueueRemoval( bool value )

Enqueues the event for removal from the Sequencer. If value is true, the next time the Sequencer requests events from the AudioEvent instance's associated instrument, it will check whether the event is enqueued for removal and then remove it from the instrument's queue, so it can actually be disposed of safely. The reason for delegating this action instead of directly invoking delete on the instance is to prevent deletion of the instance while it is in use, which will crash the engine, violently (for instance because it tries to read from the event's buffer which has been deleted by the event's destructor). This should not be invoked directly, but it is in fact handled internally when invoking stop() on a live event.

bool isEnabled()

Whether the AudioEvent instance is enabled. When false, the event will not be queried (by the Sequencer) to see whether it is eligible for playback.

setEnabled( bool value )

Enable / disable the AudioEvent instance for sequenced playback.

lock()

Locks the AudioEvent. This will set the protected property locked to true, which should act as an instruction that the event's resources are now being mutated. This can be used to prevent deletion of buffers which are currently being read from by the AudioEngine. Note that locking is only beneficial when shared resources / pointers will be deleted / allocated anew.

unlock()

Unlocks the AudioEvent. If the protected property updateAfterUnlock is true, it implies that a request for a mutation of the event's resources has been made during a lock. You can add logic in your inheriting class to handle this situation (e.g. performing a delayed update of properties).

bool isLocked()

Whether the AudioEvent is locked. Queried by the engine prior to mixing in the event's buffer contents. See lock().

void mixBuffer( AudioBuffer* outputBuffer, int bufferPos,
                int minBufferPosition, int maxBufferPosition,
                bool loopStarted, int loopOffset, bool useChannelRange )

Invoked by the Sequencer for sequenced AudioEvents that are currently audible (i.e. eventStart and eventEnd are within range of the current sequencer position as implied by bufferPos and min- & maxBufferPosition).

Given outputBuffer is the AudioBuffer (belonging to the AudioChannel associated with this event's Instrument) that the event will mix its contents into. bufferPos describes the Sequencer's current playback position, and is queried against the AudioEvent's eventStart to verify at which offset the buffer mix operation should occur. minBufferPosition and maxBufferPosition describe the Sequencer's playback range, which are queried to determine whether the AudioEvent is audible because the Sequencer has started its loop and the event is at the opposite end of the loop range (see arguments loopStarted and loopOffset, which describe this situation and the amount of samples that should be rendered from the range's start after the loop commences). Additionally, useChannelRange describes whether the event is part of an AudioChannel that has its own loop range, separate from the Sequencer's range.

This function is what makes an event "play" when it is a sequenced event. Note that during execution the AudioEvent is locked.

void mixBuffer( AudioBuffer* outputBuffer )

Invoked by the Sequencer for non-sequenced AudioEvents that are currently audible (e.g. "live events" whose playback is triggered in "note on" / "note off" fashion). The AudioEvent's contents will be mixed into the provided outputBuffer (belonging to the AudioChannel associated with this event's Instrument).

BaseInstrument* getInstrument()

Returns the instrument that is responsible for playing back the event.

void setInstrument( BaseInstrument* instrument )

Sets the instrument that is responsible for playing back the event. The instrument has the responsibility of playing back the event using either the sequencer or instantaneously as a live event. You are free to assign an existing AudioEvent to a different instrument at any time. Note: certain subclasses of AudioEvent require the appropriate Instrument subclass (e.g. SynthInstrument for a (Base)SynthEvent, SampledInstrument for a SampleEvent, etc.).

AudioBuffer* getBuffer()

Returns a pointer to the AudioEvent instance's buffer.

void setBuffer( AudioBuffer* buffer, bool destroyable )

Sets the buffer content of the BaseAudioEvent to given buffer. destroyable indicates whether the AudioEvent can destroy the buffer contents when the event itself is destroyed (not recommended when sharing the same buffer pointer over multiple events).

bool hasBuffer()

Whether the AudioEvent contains an AudioBuffer. A BaseAudioEvent contains no buffer after construction until manually setting the buffer.

void dispose()

When using Java/Kotlin: to be invoked when the event will be removed from the application, meaning: it should stop playing and will not be reused (otherwise, consider using removeFromSequencer() and pooling the event). You can break references to this event so it will be garbage collected. This will also dispose of the event buffer if it was managed by the event, note that this does not dispose buffers for SampleEvents, see SampleManager.

protected methods

destroy()

Invoked by the destructor. Can be overridden to provide custom cleanup behaviour in inheriting classes. By default, it will invoke protected method destroyBuffer.

destroyBuffer()

Will delete the buffer contents when the AudioEvent has its destroyableBuffer-property set to true (see protected properties).

public properties

bool isSequenced

Whether the event is sequenced, or is a "live" event that is made audible instantly.

protected properties

int  _eventStart
int  _eventEnd
int  _eventLength
int  _readPointer
bool _enabled
bool _removeMe
bool _locked
float _volume

Hold the values triggered by the public getters and setters, described above. By default an AudioEvent is enabled and its output volume is +1.0 (MAX_PHASE).

bool _updateAfterUnlock

Whether to perform an update operation when public method unlock() is invoked. This property can be set in custom update operations (for instance: you have an inheriting class and part of its update operation is a mutation to the buffer, in order to prevent disposing of memory that is in use, you can set this flag and postpone the operation in your custom override of the unlock()-method).

AudioBuffer* _buffer
bool _destroyableBuffer

Describes whether this event may invoke a delete upon its buffer when the event is destructed (by default this is set to true). Consider an example where the event is part of a synthesizer and thus likely manages its own buffer contents. It is acceptable (and desired!) for this event to delete the buffer contents when it is destructed. If the event were part of a drum machine, and its buffer were a pointer to a shared resource (for instance a snare drum sound that is re-used in other events), the pointer to its buffer should not be deleted, as the other events that point to this resource would become unusable. In this situation, manage the buffer's deletion from outside this class.

BaseCacheableAudioEvent

A BaseCacheableAudioEvent is a convenient subclass of BaseAudioEvent that provides an interface allowing you to cache the contents of the event. If you're creating an extension of BaseCacheableAudioEvent, you're likely creating an event that synthesizes audio, as SampleEvents are cached by their very nature!

A practical use case would be a synthesized event that uses expensive mathematical operations to generate a soundwave. Instead of re-rendering this content each time the event should be played, we can cache the first render and read from the cached buffer for all repeated read cycles, until the contents of the cache should be invalidated (for instance: because a property affecting the rendered soundwave changes) and rebuilt.

constructor / destructor

BaseCacheableAudioEvent();
~BaseCacheableAudioEvent();

Nothing inside these. Keep in mind that the base class BaseAudioEvent-destructor is executed as well upon deletion.

public methods

bool isCached()

Whether this event contains cached content.

setBulkCacheable( bool value )

Whether this event can be cached as part of a larger queue using the BulkCacher utility.

setAutoCache( bool value )

Whether this event should automatically cache its contents. When true, the rendering method of the event can query protected property _autoCache to check whether it should immediately write the rendered contents into the cache buffer.

cache( bool doCallback )

Starts a caching invocation. This can render the content "offline" (i.e. without having to immediately return the rendered content for playback to the AudioEngine). As such, this method can be invoked to prepare a particularly expensive event's contents prior to playback (for instance: when the application is idle).

doCallback describes whether the event should inform the BulkCacher when it has completed its caching. This allows the BulkCacher to continue caching the next event in the queue.

resetCache()

Resets the _cacheWriteIndex (the progress of the cache routine) and sets protected property _cachingCompleted to false.

protected properties

bool _cancel

Can be queried during a render / cache routine to check whether caching should be cancelled (for instance: properties affecting the rendered content change during rendering of the cache).

bool _caching

Can be queried during a render routine to check whether the rendered samples should also be written into the cache buffer.

bool _cachingCompleted
bool _bulkCacheable

Properties used by the public getters described above.

int _cacheWriteIndex

A "write pointer" that tracks the caching progress. You can for instance cache when contents are requested (i.e. don't cache everything at once, but cache small snippets on each subsequent render invocation) and increment this index. When the index is equal to the buffer's length (in turn equal to _eventLength), the caching sequence has completed.

Subclasses

Out of the box MWEngine provides a few subclasses of AudioEvent for different contexts, these are:
