Segment layering/blending #4550
Your proposed pipeline is more or less how I'd've expected WLED to be implemented, before I got into the actual code. I'd thought of it more as: a "segment" renders to a texture buffer; textures are then rendered to the frame buffer (with grouping, spacing, and any other stretching/scaling transformations happening here); and finally the frame buffer is mapped to pixels on the output bus(es). Memory costs aside: we've observed in prototype implementations that having the FX write directly to local texture buffers (…)

Lastly, I think the pipelined approach will also make the code simpler and easier to follow. The key is breaking "Segment" down into its component parts -- the FX renderer, the texture->frame renderer, and the physical mapper can all be pulled into individual stages and components instead of packing everything into one giant class.
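A rough sketch of that staged decomposition (illustrative types only, not existing WLED code):

```cpp
#include <cstdint>
#include <vector>

// Illustrative only -- these are not existing WLED types. Each stage of the
// proposed pipeline becomes its own small component instead of one giant class.
struct TextureBuffer { std::vector<uint32_t> px; };  // per-segment render target
struct FrameBuffer   { std::vector<uint32_t> px; };  // global canvas

struct FxRenderer {        // stage 1: effect writes only to its local texture
  virtual void render(TextureBuffer& tex, uint32_t now) = 0;
  virtual ~FxRenderer() = default;
};

struct TextureCompositor { // stage 2: texture -> frame (grouping/spacing/blending)
  virtual void composite(const TextureBuffer& tex, FrameBuffer& frame) = 0;
  virtual ~TextureCompositor() = default;
};

struct PhysicalMapper {    // stage 3: frame -> physical pixels on the bus(es)
  virtual void map(const FrameBuffer& frame) = 0;
  virtual ~PhysicalMapper() = default;
};
```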
Thanks. This is an interesting approach which indeed simplifies single-segment processing (and reduces memory requirements), but it will inevitably make blending into the "frame buffer" more complex (imagine two partially overlapping segments with spacing and grouping set). I would leave frame-buffer-to-physical mapping to the bus-level logic, but that would make anything but a Cartesian coordinate system difficult to implement (mapping arbitrary 3D locations of pixels into the frame rendering logic, especially for sparse set-ups). But we can leave this mapping out of the scope of this discussion.
In general I think this is a good approach!
Just an idea: use a "mask" checkmark in segment settings to treat an FX as a transparency mask instead of using its colors; the mask could be R+G+B or even something more elaborate.
By "frame buffer", I mean what you are calling a "canvas buffer": a single global buffer of all LEDs in the virtual space to which all segments are rendered, top to bottom, for each output frame (eg. show() call). I would expect that segment spacing and grouping would be best implemented as part of the segment->canvas render process -- if output canvas coordinates overlap from one segment to the next, you blend; if not, they'll end up interleaved as expected. Mostly I was trying to highlight that I had expected an implementation with the same sequence of concepts, but I'd've used different names -- I don't think there's any significant difference between what I described and what you proposed. |
RGBA at the FX level would be very practical if we're serious about considering blending, I think... If we really want to get radical, I'd also float the idea of struct-of-arrays instead of array-of-structs, e.g.:
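For example, an RGBA canvas in array-of-structs versus struct-of-arrays form (a sketch, not existing code):

```cpp
#include <cstdint>
#include <vector>

// Array-of-structs: one packed pixel per element.
struct PixelAoS { uint8_t r, g, b, a; };
using CanvasAoS = std::vector<PixelAoS>;

// Struct-of-arrays: one contiguous plane per channel. Per-channel passes
// (e.g. compositing all alphas, or brightness-scaling all of R/G/B) then
// touch contiguous memory and vectorize more easily.
struct CanvasSoA {
  std::vector<uint8_t> r, g, b, a;
  explicit CanvasSoA(size_t n) : r(n), g(n), b(n), a(n) {}
};
```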
This is the true crux of this approach. If we move to this architecture, the solution might be to explicitly develop a sparse canvas buffer abstraction. Sparse matrices have a lot of prior art in numeric-processing code; I'm sure there are insights there that could be helpful.
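One possible shape for such an abstraction, using a hash map keyed by canvas index (purely a sketch):

```cpp
#include <cstdint>
#include <unordered_map>

// Sketch of a sparse canvas: only pixels that some segment actually touched
// are stored; everything else is implicitly background/transparent.
class SparseCanvas {
 public:
  void set(uint32_t index, uint32_t color) { px_[index] = color; }
  uint32_t get(uint32_t index) const {
    auto it = px_.find(index);
    return it == px_.end() ? 0u : it->second;  // 0 = untouched/black
  }
 private:
  std::unordered_map<uint32_t, uint32_t> px_;
};
```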
If we introduce an alpha channel (instead of W in effect functions), I would assume it only applies to that segment with regard to blending it with the segment below. Right? I do admit that it will open new possibilities when writing new effects, but none of the current effects are written to utilise it, so there will be no immediate gain. So, if I summarize what we've defined so far:
Caveats:
Yes -- I'd expect segment-level opacity to apply "on top" of the FX-computed alpha channel, much the same way segment-level brightness applies "on top" of the FX-computed colors. IIRC the computation is something like scaling one by the other; maybe along the lines of the sketch below.
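A minimal sketch of that composition, assuming segment opacity simply scales the FX-computed alpha (names are illustrative):

```cpp
#include <cstdint>

// Sketch only: segment-level opacity is applied "on top" of the FX-computed
// alpha, mirroring how brightness scales the FX colors.
static inline uint8_t effectiveAlpha(uint8_t fxAlpha, uint8_t segmentOpacity) {
  return (uint8_t)(((uint16_t)fxAlpha * segmentOpacity) / 255);
}

// Standard alpha blend of one channel: upper segment over the canvas below it.
static inline uint8_t blendChannel(uint8_t upper, uint8_t lower, uint8_t alpha) {
  return (uint8_t)(((uint16_t)upper * alpha + (uint16_t)lower * (255 - alpha)) / 255);
}
```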
Bikeshedding a bit, but I'd probably put the canvas-to-bus functionality in its own free function(s) to start off with. Something like the sketch below, perhaps.
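For instance, the canvas-to-bus step as a free function over a small output interface (all names here are hypothetical, not WLED's actual bus API):

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical interface -- an assumption for illustration, not WLED's bus API.
struct IBusOutput {
  virtual void setPixel(size_t i, uint32_t color) = 0;
  virtual void show() = 0;
  virtual ~IBusOutput() = default;
};

// Free function: push the finished canvas out to the physical outputs.
void canvasToBus(const uint32_t* canvas, size_t n, IBusOutput& bus) {
  for (size_t i = 0; i < n; i++) bus.setPixel(i, canvas[i]);
  bus.show();
}
```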
I think it's reasonable to flush everything (segments, transitions, etc.) and start clean if the bus configuration changes. If we can't re-allocate everything from a clean slate, the config is in trouble anyways...
One neat feature of this architecture is that transitions can be implemented as part of canvas composition -- each segment need only contain enough information for the compositor to know which pixels to draw. So I'd suggest trying a broad approach of "don't re-use segments or buffers at all" as the place to start.
From there we can explore progressively more sophisticated fallbacks to handle cases where memory is tight. Some ideas:
The segment object itself shouldn't be big enough to matter (compared to the pixel buffers), so we can just allocate new ones whenever it's convenient.
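Picking up the transition point above: a sketch of a transition done entirely in the compositor, crossfading the outgoing and incoming segment buffers (hypothetical names):

```cpp
#include <cstddef>
#include <cstdint>

// Sketch: a transition as a compositor concern. The compositor crossfades the
// outgoing and incoming segment buffers by transition progress (0..255);
// neither effect needs to know a transition is happening at all.
void crossfade(const uint8_t* oldBuf, const uint8_t* newBuf,
               uint8_t* out, size_t bytes, uint8_t progress) {
  for (size_t i = 0; i < bytes; i++) {
    out[i] = (uint8_t)((oldBuf[i] * (255 - progress) + newBuf[i] * progress) / 255);
  }
}
```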
This is on my todo list already -- the render loop really needs to be mutexed against any source of segment state changes. Every platform we support has multiple tasks, even ESP8266! I believe this may be responsible for some crash cases in certain configurations of the current code. I haven't had time to look at it yet -- either we want a new "render state lock", or we can expand the scope of the current JSON buffer lock to cover any case of reading/writing the core state.
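A minimal sketch of the lock idea, using std::mutex purely for illustration (on ESP32 this would more likely be a FreeRTOS mutex or critical section; names are hypothetical):

```cpp
#include <mutex>

// Hypothetical "render state lock": the render loop and any task that mutates
// segment state (JSON API, UI, presets) must hold it while touching core state.
std::mutex renderStateLock;

void renderFrame() {
  std::lock_guard<std::mutex> guard(renderStateLock);
  // ... run effects, composite segments, push to buses ...
}

void applyStateChange(/* parsed request */) {
  std::lock_guard<std::mutex> guard(renderStateLock);
  // ... mutate segments/transitions safely ...
}
```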
With 0.14 we got proper blending of colors and palettes across all segments; 0.15 improved on that and allowed effects to be blended as well. Recent blending styles (#4158) introduced effect transitions with different styles.
What is missing, IMO, is correct segment blending/layering with the different blending modes known to Photoshop users, i.e. lighten, darken, multiply, add, subtract, difference, etc.
I've recently come across a snippet that I find useful and relatively simple to implement in WLED. Unfortunately, it would require re-implementation of the pixel drawing functions.
Let's first see the code and then discuss the nuances:
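For orientation, a sketch of the kind of per-channel blend functions such modes reduce to (illustrative definitions, not the referenced snippet):

```cpp
#include <algorithm>
#include <cstdint>

// Illustrative per-channel blend modes (not the referenced snippet).
// 'a' is the lower (existing) layer, 'b' the upper segment being blended in.
static inline uint8_t blendAdd(uint8_t a, uint8_t b)        { return (uint8_t)std::min(255, a + b); }
static inline uint8_t blendSubtract(uint8_t a, uint8_t b)   { return (uint8_t)std::max(0, a - b); }
static inline uint8_t blendMultiply(uint8_t a, uint8_t b)   { return (uint8_t)((a * b) / 255); }
static inline uint8_t blendDifference(uint8_t a, uint8_t b) { return (uint8_t)(a > b ? a - b : b - a); }
static inline uint8_t blendLighten(uint8_t a, uint8_t b)    { return std::max(a, b); }
static inline uint8_t blendDarken(uint8_t a, uint8_t b)     { return std::min(a, b); }
```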
This assumes that `Segment::getPixelColor()` returns the unmodified value set by `Segment::setPixelColor()` during effect execution. To achieve that, each segment must maintain its own pixel drawing buffer (also known in the past as `setUpLeds()`). Next, it also assumes the `WS2812FX` instance will maintain its entire canvas buffer (called `pixels[]`; similar to the global buffer).

The process with which segments/layers would be blended is:
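In outline (my sketch with hypothetical names; segments composited bottom-to-top):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical per-frame flow: each segment's effect renders into its own
// buffer, segments are composited bottom-to-top into the global canvas using
// the selected blend mode, and the canvas is finally pushed to the bus(es).
struct Seg {
  std::vector<uint32_t> buf;                          // segment-local pixel buffer
  size_t start;                                       // offset into the canvas
  uint32_t (*blend)(uint32_t lower, uint32_t upper);  // selected blend mode
};

void renderFrame(std::vector<Seg>& segments, std::vector<uint32_t>& canvas) {
  std::fill(canvas.begin(), canvas.end(), 0u);        // clear canvas
  for (Seg& s : segments) {                           // bottom-to-top
    // (the effect function has already filled s.buf via setPixelColor())
    for (size_t i = 0; i < s.buf.size(); i++) {
      size_t c = s.start + i;
      if (c < canvas.size()) canvas[c] = s.blend(canvas[c], s.buf[i]);
    }
  }
  // ... then map the canvas to physical pixels and show() ...
}
```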
This process does not handle color/palette blending (which does not change from the current behaviour) or effect transitions (from one effect to another); it only allows users to stack segments one atop another, which would allow mixing the content of two segments even if the effect function does not support layering.
The price is, of course, memory and (possibly) speed degradation, as there will be more operations per pixel. However, the segment's `setPixelColor()`/`getPixelColor()` could be simplified, and there would be no need for `WS2812FX::setPixelColor()`.

I would like to get some feedback and thoughts about layering and the implementation, and whether it is worth the effort even if speed would be impaired.