Configuration and initialization
The main MWEngine configuration is described in global.h. First up, there are a few compile-time constants:
PRECISION indicates the resolution of the samples (see the global constant SAMPLE_TYPE). If PRECISION is defined as 1, samples will be of the 32-bit floating point type float. When PRECISION is defined as 2, samples will be of the 64-bit double precision floating point type double. Generally speaking, most Android devices show a negligible difference in CPU consumption between the two, but note that while 64-bit floating point gives better audio quality, it also consumes twice the memory.
RECORD_DEVICE_INPUT defines whether the engine should be able to record audio from the device. If this feature is not required, comment out the definition. Note: recording input from the device requires both the android.permission.MODIFY_AUDIO_SETTINGS- and android.permission.RECORD_AUDIO-permissions to be defined within the AndroidManifest. Also note that since Android M, for applications compiled with targetSdk 23 or higher, the RECORD_AUDIO-permission is not automatically granted on installation and requires a Runtime Permission grant. Be sure to grant this permission prior to invoking engine.start(), as attempting to open the audio input without it will cause a violent crash.
RECORD_TO_DISK defines whether the engine should be able to write its output onto the device's file system. If this feature is not required, comment out the definition. Note: writing to disk requires the android.permission.READ_EXTERNAL_STORAGE- and android.permission.WRITE_EXTERNAL_STORAGE-permissions, which on Android M and up are not granted automatically. The read permission is required because a recording is written in snippets, which are temporarily stored on the file system (thus requiring the read permission for the final concatenation). When attempting to write without the appropriate permissions, the process will fail silently.
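Both RECORD_AUDIO and the storage permissions are "dangerous"-level permissions that need a runtime grant on Android M and up. A minimal sketch of such a request, using the standard AndroidX ActivityCompat/ContextCompat helpers (the request code is an arbitrary, application-defined value chosen here for illustration):

import android.Manifest;
import android.content.pm.PackageManager;
import androidx.core.app.ActivityCompat;
import androidx.core.content.ContextCompat;

// inside your Activity, prior to starting the engine
private static final int PERMISSIONS_REQUEST_CODE = 1; // arbitrary, application-defined

private void requestEnginePermissions() {
    String[] permissions = {
        Manifest.permission.RECORD_AUDIO,           // required when RECORD_DEVICE_INPUT is defined
        Manifest.permission.READ_EXTERNAL_STORAGE,  // required when RECORD_TO_DISK is defined
        Manifest.permission.WRITE_EXTERNAL_STORAGE
    };
    if ( ContextCompat.checkSelfPermission( this, Manifest.permission.RECORD_AUDIO )
        != PackageManager.PERMISSION_GRANTED ) {
        ActivityCompat.requestPermissions( this, permissions, PERMISSIONS_REQUEST_CODE );
    }
}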
PREVENT_CPU_FREQUENCY_SCALING can be defined to keep the CPU busy during "idle time" in the render loop. This prevents the OS from reducing the clock speed of the processor, which can cause audio glitches when subsequent render iterations require more resources. This is implemented in such a way that there is no visible overhead for your application, only more stability and consistency in audio performance.
USE_JNI defines whether the engine should be used with JNI (so it can be used from Java). If the engine will be solely used from C++ you can comment out this definition.
The other configuration values can be edited at runtime, but note that the render thread must be restarted after changing them. These configurations are:
namespace AudioEngineProps
{
unsigned int SAMPLE_RATE;
unsigned int BUFFER_SIZE;
unsigned int OUTPUT_CHANNELS;
unsigned int INPUT_CHANNELS;
std::vector<int> CPU_CORES;
}
INPUT_CHANNELS describes how many inputs can be used for recording audio; in most applications this is likely to be either 0 (if you don't require audio recording) or 1 (to use the internal microphone of the Android device). This property is unused when RECORD_DEVICE_INPUT is undefined, to minimize overhead.
OUTPUT_CHANNELS describes how many output channels the engine will render. This value must be 1 at a minimum (mono output) and can be as high as you desire, but for most applications 2 channels (stereo output) will suffice.
SAMPLE_RATE (in Hz) and BUFFER_SIZE (in samples) must be defined prior to engine start (see sequencercontroller.h's "prepare"-method) and should not be fixed inside the compiled application, as the optimal sample rate and buffer size differ per device (either query the Android AudioManager from the Java SDK or use the methods in MWEngine.java to fetch these).
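A minimal sketch of the AudioManager route, for when you are managing this from your own Java code (the fallback values used here are arbitrary; getRecommendedSampleRate() and getRecommendedBufferSize() in MWEngine.java, described below, provide the same convenience):

import android.content.Context;
import android.media.AudioManager;

// "context" is your application Context
// query the device's optimal output values (available from API level 17)
AudioManager audioManager = ( AudioManager ) context.getSystemService( Context.AUDIO_SERVICE );
String sr = audioManager.getProperty( AudioManager.PROPERTY_OUTPUT_SAMPLE_RATE );
String bs = audioManager.getProperty( AudioManager.PROPERTY_OUTPUT_FRAMES_PER_BUFFER );

int sampleRate = ( sr != null ) ? Integer.parseInt( sr ) : 48000; // fallback is arbitrary
int bufferSize = ( bs != null ) ? Integer.parseInt( bs ) : 256;   // fallback is arbitrary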
CPU_CORES describes a list of exclusive CPU cores that can be used to optimize thread performance (from Android N onwards). Use the android.os.Process class to retrieve these from the Java SDK, or use the optimizePerformance() method in MWEngine.java to set these automatically.
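For reference, retrieving the cores manually from the Java SDK looks as follows (a sketch; optimizePerformance(), described below, takes care of this for you):

import android.os.Build;
import android.os.Process;

// exclusive cores can only be retrieved from Android N (API level 24) onwards
int[] exclusiveCores = {};
if ( Build.VERSION.SDK_INT >= 24 ) {
    exclusiveCores = Process.getExclusiveCores();
}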
CHANNEL_CACHING describes whether the output of an AudioChannel and its ProcessingChain can be cached. This has the benefit that a static channel (i.e. one repeatedly playing the same contents through the same processing chain) doesn't need to re-render its contents when looping, but can read from a cache. This frees up CPU resources for rendering other channels, but comes at the expense of higher memory consumption. Note that ProcessingChains that contain non-cacheable effects (for instance: delays or oscillated effects) will use the cached buffer until hitting the first processor that isn't cacheable (after which the processor will process the signal as usual). Only use this if you know what you are doing.
The audio engine runs as a daemon, in that it is continuously processing audio in- and output, even when there is nothing to render (for instance, because the Sequencer is paused). As such, the engine should be started inside a thread. All other operations (creation of instruments, addition of events, application of effects) must occur outside this thread to prevent glitches in output.
Note that when using a dynamic configuration for the buffer size and sample rate (thus overriding the values inside AudioEngineProps defined in global.h) you must first set up the engine like so:
void AudioEngine::setup( int bufferSize, int sampleRate, int amountOfChannels );
All arguments will override the corresponding values inside global.h; you can keep referencing these values from AudioEngineProps throughout your application, as they will have been updated.
The engine is started via the AudioEngine namespace (audioengine.h), which has the following public methods:
void AudioEngine::start();
void AudioEngine::start( Drivers::types driver );
Which should be invoked from inside a thread, as it starts the render cycle. The cycle uses a ring buffer and callback mechanism: after the device has finished rendering the output of a single buffer, the callback is executed so the render loop can start its next iteration on the newly enqueued buffer.
You can specify the driver to use for output (e.g. OpenSL or AAudio), though you must be sure that AAudio is available (it requires Android 8 and up). When no driver is specified, the engine defaults to using OpenSL for the widest support.
void AudioEngine::stop();
Stops the render thread. This will immediately halt all audio processing and output. Note that if you merely wish to pause your Sequencer, use the SequencerController instead; this allows audio tails (such as echoes) to continue playing for their intended lifetime while no new events are queued. You should only stop the thread when suspending the application, closing the application or changing the environment setup (e.g. loading a "song", which implies removing instruments, processors and events to make room for new ones).
void AudioEngine::reset();
Resets the state of the engine (rewinds the sequencer and disables all recording states).
For subsequent control of the Sequencer, you must use an instance of SequencerController.
If you're intending to use the MWEngine for applications written in Java, the initialization is slightly different, as it is delegated to the nl.igorski.mwengine.MWEngine-class, which also maintains its own instance of a SequencerController.
Construct a new instance of MWEngine by referencing the app Context (e.g. main Activity) and passing an implementation of IObserver (see Monitoring state changes of the engine):
MWEngine( Context context, MWEngine.IObserver observer )
This will also invoke the public initJNI()-method which prepares communication with the native layer code. If the bridge has been established, the static handleBridgeConnected()-method of the MWEngine-class will be invoked.
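A minimal construction sketch (the anonymous IObserver implementation below is illustrative; the callback signatures are described under "Monitoring state changes of the engine"):

import nl.igorski.mwengine.MWEngine;

// inside your Activity
MWEngine engine = new MWEngine( getApplicationContext(), new MWEngine.IObserver() {
    public void handleNotification( int notificationId ) {
        // respond to engine state changes here
    }
    public void handleNotification( int notificationId, int notificationValue ) {
        // idem, for notifications carrying an additional value
    }
});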
It is recommended to optimize your application performance by calling:
static boolean optimizePerformance( Activity activity );
Which will automatically take measures to optimize the engine in the runtime environment (such as preventing screen suspension, activating sustained performance mode, and retrieving the CPU cores exclusive to the foreground application). Note that not all of these are available on all supported platforms (some are available only on Android N and up). Note: you might choose to perform these actions outside of MWEngine to tailor them to the needs of your application.
To ensure the audio callbacks are executed at the sample rate and buffer size appropriate for the device, you can determine these values by calling:
static int getRecommendedSampleRate( Context context );
static int getRecommendedBufferSize( Context context );
Now you can proceed to prepare the engine's audio output (this will set the runtime properties described in global.h):
void createOutput( int sampleRate, int bufferSize, int amountOfChannels, Drivers.types driverType )
where sampleRate is the desired sample rate in Hz, bufferSize the length (in samples) of a single buffer in the audio engine's ringbuffer queue and amountOfChannels specifies the amount of output channels to use (1 = mono, 2 = stereo). Remember: the smaller the buffer size, the lower the latency, but potentially the heavier the load on the CPU. This method also creates a new instance of the SequencerController that can be retrieved via getSequencerController(). driverType specifies which driver to use (e.g. OpenSL or AAudio); note that AAudio is only available from Android 8 onwards.
void start()
Creates a new thread, and starts the render cycle of the native layer AudioEngine. The application is now ready to output audio!
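Putting the above together, a minimal initialization sequence could look as follows (all values are illustrative; the Drivers.types enum is assumed to come from the nl.igorski.mwengine.core package, and "engine" is the instance constructed as shown earlier):

import nl.igorski.mwengine.MWEngine;
import nl.igorski.mwengine.core.Drivers;

// inside your Activity's onCreate(), after the required permissions have been granted
MWEngine.optimizePerformance( this );

int sampleRate = MWEngine.getRecommendedSampleRate( getApplicationContext() );
int bufferSize = MWEngine.getRecommendedBufferSize( getApplicationContext() );

engine.createOutput( sampleRate, bufferSize, 2, Drivers.types.OPENSL ); // 2 == stereo output
engine.start(); // the engine now renders audio in its own thread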
void pause()
Halts the audio rendering thread. This will immediately halt all audio processing and output. Note that if you merely wish to pause your Sequencer, use the SequencerController instead; this allows audio tails (such as echoes) to continue playing for their intended lifetime while no new events are queued. You should only halt the thread when suspending the application, closing the application or changing the environment setup (e.g. loading a "song", which implies removing instruments, processors and events to make room for new ones).
void setAudioDriver( Drivers.types audioDriver )
You can allow the users of your application to select their audio driver at runtime and even switch while the engine is running.
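For instance (a sketch, assuming the AAUDIO enum value and an Android 8+ device):

// switch to AAudio at runtime, e.g. from a user preference toggle
if ( android.os.Build.VERSION.SDK_INT >= 26 ) {
    engine.setAudioDriver( Drivers.types.AAUDIO );
}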