Shaderfilter Plus suffers from OBS internal audio buffer delays #12

Open · Stealthii opened this issue Dec 4, 2020 · 2 comments
Labels: bug (Something isn't working)

@Stealthii commented Dec 4, 2020

When a negative delay (sync offset) is applied to an audio source in OBS, OBS internally delays both the video and audio mix buffers by the maximum negative delay amount, so that it has room to bring these components back into sync with each other.
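
For reference, this offset is the "Sync Offset" field in Advanced Audio Properties; in the libobs API it corresponds to obs_source_set_sync_offset, which takes an offset in nanoseconds. A minimal sketch (the -300 ms value and the helper name are purely illustrative):

```c
#include <obs.h>

/* Apply a -300 ms sync offset to an audio source. OBS compensates by
 * delaying the video and audio mix buffers by the largest negative
 * offset across all sources (here 300 ms), so it can bring everything
 * back into sync before encoding. */
void apply_negative_offset(obs_source_t *mic)
{
    const int64_t offset_ns = -300000000LL; /* -300 ms in nanoseconds */
    obs_source_set_sync_offset(mic, offset_ns);
}
```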


The video and audio monitoring in OBS does not show this delay, as it is applied post-compositing, pre-encoding.

Shaderfilter Plus accesses the mix buffers after this delay has been applied, so shader effects using FFT data generated from those mix tracks lag behind by the maximum negative offset. As that shader effect is applied to a video source in real time, it is effectively delayed twice.

Here is an example video showing the result of the effect being delayed.

Ideally, Shaderfilter Plus would access these buffers before this delay has been applied, so that effects are rendered in real time and not delayed twice.

@Limeth (Owner) commented Dec 4, 2020

Thank you for the detailed explanation of the issue. To be frank, I had absolutely no idea you could even use negative delays, so it never occurred to me this would be a problem.

Limeth added the bug label on Dec 4, 2020

@Limeth (Owner) commented Jul 15, 2022

I looked into this, and it seems like a difficult problem to solve. I assume video encoding is done immediately, whereas audio is synced up after an appropriate delay, and then both are combined and recorded/streamed. Hence it would be impossible to influence the video rendering/composition using the data from the resulting audio mixes.

However, there might be a compromise to be made: the Waveform plugin for FFT visualization (https://github.com/phandasm/waveform) gathers its audio data from individual audio sources rather than from the resulting audio mixes. At that point, the audio data is not yet synchronized. If one were to use an audio source with no sync offset (no negative or positive delay), the audio data would be accessible to shaders immediately, which is what we need. The obvious disadvantage is that this solution provides no audio mixing capabilities -- you could only choose one audio source. It would be possible to implement mixing, but that seems like reinventing the wheel; Waveform does not provide mixing either.
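
A minimal sketch of that per-source approach against the libobs C API (the helper names and the FFT stub are hypothetical; error handling omitted):

```c
#include <obs.h>

/* Called on the audio thread for each chunk of audio the source
 * produces -- before the mix-level sync delay described above. */
static void on_source_audio(void *param, obs_source_t *source,
                            const struct audio_data *audio, bool muted)
{
    if (muted)
        return;
    /* audio->data[plane] holds the planar samples and audio->frames
     * the frame count; this is where the FFT input would be fed. */
    (void)param;
    (void)source;
    (void)audio;
}

/* Attach to the single audio source selected by the user. */
void attach_to_source(obs_source_t *audio_source, void *filter_data)
{
    obs_source_add_audio_capture_callback(audio_source, on_source_audio,
                                          filter_data);
}

/* Detach again when the filter is destroyed. */
void detach_from_source(obs_source_t *audio_source, void *filter_data)
{
    obs_source_remove_audio_capture_callback(audio_source, on_source_audio,
                                             filter_data);
}
```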

From an implementation standpoint, obs_source_add_audio_capture_callback is what Waveform uses to capture audio from an individual source: https://github.com/phandasm/waveform/blob/master/src/source.cpp#L518-L548
This is as opposed to audio_output_connect, the mix-based hookup the Rust bindings currently wrap: https://github.com/Limeth/rust-obs-plugins/blob/master/src/audio.rs#L490-L507
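
For contrast, a rough sketch of that mix-based hookup at the C level (the track index and helper names are illustrative, not the plugin's actual code):

```c
#include <obs.h>

/* Called with audio from the given mix track, i.e. after OBS has
 * already applied the internal delay that compensates for negative
 * sync offsets -- which is why FFT data derived from it lags. */
static void on_mix_audio(void *param, size_t mix_idx,
                         struct audio_data *data)
{
    (void)param;
    (void)mix_idx;
    (void)data;
}

bool attach_to_mix(void *filter_data)
{
    audio_t *audio = obs_get_audio();
    const size_t mix_idx = 0; /* track 1; purely illustrative */

    /* NULL conversion: receive audio in the output's native format. */
    return audio_output_connect(audio, mix_idx, NULL, on_mix_audio,
                                filter_data);
}
```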
