docs: Update frame buffer size for VisionCamera example #4105
Just a tiny detail, it doesn't really matter much, but I wanted to clarify this a bit.
There are two pixel formats in video processing: YUV and RGB.
RGB is always BGRA (4 bytes per pixel), and YUV is a bi-planar format with a Y plane of 1 byte per pixel and a UV plane at half the size of the image.
For 4k buffers, let's calculate the size of one frame:
While VisionCamera implements optimizations to trim the buffers and uses YUV or even compressed YUV whenever possible, almost 90% of the time people need to use RGB because the ML models just work in RGB.
So the exact number would be 33,177,600 bytes for a Frame, which is 1,990,656,000 bytes per second (or ~2 GB per second) of data flowing through the Frame Processor.
Thanks to JSI, it does not matter how big the data is, because we only pass references without making any copies or serialization. This is the part that should be highlighted here.