On our system we're seeing that any call to `num_tokens_from_messages` takes around 300-500ms consistently, regardless of the size of the message, which is remarkably slow for what it's doing. Checking a flamegraph, we saw the model was being loaded on every call, so this attempts to fix that by using singleton model instances and seeing whether that improves performance. I'm a bit sceptical of this, as the mutex locking might just cause its own issues, but we'll see in our testing and maybe come up with a better solution going forwards...
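For reference, a minimal sketch of the singleton idea (assuming tiktoken-rs's `cl100k_base` constructor and `CoreBPE` type; the `Mutex` mirrors the locking concern above, and the function name is illustrative):

```rust
use std::sync::{Arc, Mutex, OnceLock};

use tiktoken_rs::{cl100k_base, CoreBPE};

// Built once on first use, instead of being reloaded on every call
// to num_tokens_from_messages.
static CL100K_BASE: OnceLock<Arc<Mutex<CoreBPE>>> = OnceLock::new();

pub fn cl100k_base_singleton() -> Arc<Mutex<CoreBPE>> {
    CL100K_BASE
        .get_or_init(|| {
            Arc::new(Mutex::new(
                cl100k_base().expect("failed to load cl100k_base"),
            ))
        })
        .clone()
}
```

Callers then lock the shared instance rather than paying the model-load cost per call; whether the lock contention ends up cheaper than the reload is exactly what the testing should tell us.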
As an aside, given that the vocab is known up front for a tokeniser, you should be able to just codegen a hashmap if you wanted: hashmap inserts completely dominate the `num_tokens_from_messages` flamegraph. See the sketch below.
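A purely illustrative sketch of that idea, assuming the `phf`/`phf_codegen` crates; the two-entry vocab is a hypothetical stand-in for the real BPE ranks file:

```rust
// build.rs — codegen a compile-time perfect-hash vocab table,
// so no hashmap inserts happen at runtime at all.
use std::env;
use std::fs::File;
use std::io::{BufWriter, Write};
use std::path::Path;

fn main() {
    let path = Path::new(&env::var("OUT_DIR").unwrap()).join("vocab.rs");
    let mut out = BufWriter::new(File::create(&path).unwrap());

    // Hypothetical stand-in: the real thing would iterate the
    // tokeniser's ranks file here.
    let vocab = [("hello", 31373u32), ("world", 995u32)];

    let mut map = phf_codegen::Map::new();
    for (token, rank) in vocab {
        map.entry(token, &rank.to_string());
    }
    writeln!(
        &mut out,
        "static VOCAB: phf::Map<&'static str, u32> = {};",
        map.build()
    )
    .unwrap();
}
```

The library side would then pull the table in with `include!(concat!(env!("OUT_DIR"), "/vocab.rs"));` and look ranks up with zero construction cost.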
Experiment in service of #81