Data Race fix when using Seed callback with multiple threads #8300
Description
While working on a threading example using wolfSSL, I found that a data race occurs when using seedCb. Since setting seedCb just assigns a function pointer to a global, a simple mutex is not enough.
For example:
Thread A and Thread B each want to seed the RNG in a different way (for whatever reason), with SeedA and SeedB respectively.
If Thread A sets the seedA callback first and Thread B sets the seedB callback afterward, Thread A can end up using seedB for its RNG whenever it has not completed all of its RNG calls before Thread B sets seedB. This leads to a sort of data race where each thread is racing to use its desired function before it is overwritten.
This PR attempts to alleviate this issue by adding an override macro, `WC_MULTI_THREADED_CALLBACKS`. As of this posting, the fix is only set up for POSIX threading.
Sources on the POSIX thread functions used:
https://pubs.opengroup.org/onlinepubs/009695399/functions/pthread_key_create.html
https://pubs.opengroup.org/onlinepubs/7908799/xsh/pthread_setspecific.html
There is potential to make this more portable, but we would need to implement a thread-ID/key pair mechanism of our own, along with all of the cleanup associated with that sort of mechanism.