Feasibility of custom low-level widgets rendering to wgpu directly? #199
Replies: 5 comments 4 replies
-
The Kludgine design does indeed do CPU-based tessellation, but it tries to be smart when it can. For #134, I don't plan on pursuing shaders, only using Kludgine, and I'm very confident performance will be great. Kludgine allows caching the tessellation -- at the point that you have a […].

For Cushy, embedding custom rendering integrations is going to be tricky because the prepare -> render flow requires a prepare() phase that has exclusive/mutable access to the data, but the render function binds all rendered data to the lifetime of the render pass. Instead, Cushy uses Kludgine's higher-level drawing abstraction. But this adds a new challenge: how does a widget in Cushy add commands to the render pass?

To answer the final questions:
The graphics context provides access to the […].
wgpu has support for a scissor rect, which clips the drawing operations being performed. Kludgine uses this to implement clipping, and Cushy uses Kludgine's clipping support.
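For intuition: nested widget clips compose by rectangle intersection, and the resulting rectangle is what would be handed to wgpu's `RenderPass::set_scissor_rect(x, y, width, height)`. The helper below is a hypothetical sketch, not Kludgine's actual code:

```rust
// Hypothetical sketch: compose nested widget clip rectangles by
// intersection. The final rectangle is what would be passed to wgpu's
// RenderPass::set_scissor_rect(x, y, width, height).
#[derive(Debug, Clone, Copy, PartialEq)]
struct Rect {
    x: u32,
    y: u32,
    width: u32,
    height: u32,
}

fn intersect(a: Rect, b: Rect) -> Rect {
    let x = a.x.max(b.x);
    let y = a.y.max(b.y);
    let right = (a.x + a.width).min(b.x + b.width);
    let bottom = (a.y + a.height).min(b.y + b.height);
    // saturating_sub yields an empty rect when the inputs don't overlap.
    Rect {
        x,
        y,
        width: right.saturating_sub(x),
        height: bottom.saturating_sub(y),
    }
}

fn main() {
    let window = Rect { x: 0, y: 0, width: 800, height: 600 };
    let widget = Rect { x: 100, y: 100, width: 1000, height: 50 };
    // A child widget can never draw outside its parent's clip.
    assert_eq!(
        intersect(window, widget),
        Rect { x: 100, y: 100, width: 700, height: 50 }
    );
}
```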
-
Thanks for sharing your thoughts!
It is probably just a question of how many things one throws at it 😉 Of course, the machine learning / data science community has a number of visualization/plotting frameworks that work well with small to medium sized data. However, I'm running more and more frequently into situations where I need to graph things that go beyond what they can handle. Often I start plotting something using good-old matplotlib, and sooner or later I end up attempting to pan/zoom a plot where re-rendering takes ~10 seconds, ouch. So my quest is to come up with something that scales to a few orders of magnitude more, and first experiments indicate that going for optimized shaders may be needed to get there.
That sounds promising. So that would mean that e.g. scatter plots would not have to re-buffer, because their markers (shapes) only need to be tessellated once, and can all be rendered in a single draw call? But what about line segments? For plots, zooming is typically done separately in the x/y dimensions, so the "shape" of a line segment basically changes all the time and would have to be re-tessellated, right?
Interesting, you mean the challenge is that the […]? The aspects of providing the information and clipping actually sound promising.
-
Too true 😆
I would love for you to explore this. I think it would be a huge benefit to have something like this for Cushy. I just wanted to also reassure you that a lot of use cases will probably perform quite well when someone who knows how Cushy/Kludgine's rendering works takes some time to take advantage of its features.
Correct! But even better, all sequential non-textured or same-textured shape drawing calls are grouped together into a single drawing operation. And with Cushy's integer-based coordinate system, vertices are automatically merged between drawing operations.
Line segments are tessellated as polygons as well. Everything is a triangle before it gets sent to the shader.
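As an illustration of that (a hedged sketch, not Kludgine's actual tessellator): a straight segment of width `w` expands into a two-triangle quad by displacing both endpoints along the segment's unit normal:

```rust
// Hypothetical sketch: expand a 2D line segment into two triangles.
// This mirrors the "everything is a triangle" idea; it is not Kludgine's
// actual tessellation code.
fn tessellate_segment(a: (f32, f32), b: (f32, f32), width: f32) -> [(f32, f32); 6] {
    let (dx, dy) = (b.0 - a.0, b.1 - a.1);
    let len = (dx * dx + dy * dy).sqrt();
    // Unit normal of the segment, scaled to half the line width.
    let (nx, ny) = (-dy / len * width / 2.0, dx / len * width / 2.0);
    let (a0, a1) = ((a.0 + nx, a.1 + ny), (a.0 - nx, a.1 - ny));
    let (b0, b1) = ((b.0 + nx, b.1 + ny), (b.0 - nx, b.1 - ny));
    // Two triangles covering the quad.
    [a0, b0, b1, a0, b1, a1]
}

fn main() {
    // A horizontal segment of width 2 becomes a quad spanning y in [-1, 1].
    let tris = tessellate_segment((0.0, 0.0), (10.0, 0.0), 2.0);
    assert_eq!(tris[0], (0.0, 1.0));
    assert_eq!(tris[5], (0.0, -1.0));
    println!("{:?}", tris);
}
```

Note that the width is baked into the vertex data here, which is exactly why zooming forces re-tessellation under vector-graphics semantics.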
Currently, yes. But the shader that takes a […].
It's a couple of challenges -- first, widgets are stored in an Arc/Mutex abstraction, which means that to actually invoke functionality on one, the widget must be locked. This allows the […].

The latest wgpu allows you to use an unsafe function to tell wgpu to forget the lifetime, I believe, but Cushy and Kludgine didn't benefit from adopting this, and it would make it possible to accidentally write unsound code in any custom wgpu widget.

This conversation has started giving me some ideas on how we might be able to box up a trait that provides the prepare/render functions along with the information that Kludgine collects for the rendering and transforms. This should allow virtually any wgpu workflow to be included in a widget. Even when a separate render pass is needed, a widget can always render into a texture and just draw that texture.
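For a rough idea of what such a boxed trait might look like, here is a minimal sketch. All names are hypothetical, and `Graphics`/`RenderPass` are stand-ins for the real Kludgine/wgpu types:

```rust
// Hypothetical sketch of a boxed prepare/render operation, loosely following
// wgpu's "Encapsulating Graphics Work" pattern. `Graphics` and `RenderPass`
// are placeholders for the real Kludgine/wgpu types.
struct Graphics; // would expose the wgpu device/queue during prepare
struct RenderPass; // would be wgpu::RenderPass in a real integration

trait RenderOperation {
    /// Called with exclusive access, before the render pass begins.
    fn prepare(&mut self, graphics: &mut Graphics);
    /// Called with shared access while the render pass is recording;
    /// everything bound here must outlive the pass.
    fn render(&self, pass: &mut RenderPass);
}

// A widget could then own `Box<dyn RenderOperation>` values that the
// framework prepares and renders at the right points in the frame.
struct ScatterPlot {
    prepared: bool,
}

impl RenderOperation for ScatterPlot {
    fn prepare(&mut self, _graphics: &mut Graphics) {
        self.prepared = true; // e.g. upload vertex/uniform buffers
    }
    fn render(&self, _pass: &mut RenderPass) {
        assert!(self.prepared); // e.g. set pipeline, issue draw calls
    }
}

fn main() {
    let mut op: Box<dyn RenderOperation> = Box::new(ScatterPlot { prepared: false });
    op.prepare(&mut Graphics);
    op.render(&mut RenderPass);
}
```

The split mirrors the lifetime constraint described above: `prepare` gets mutable access before the pass exists, while `render` only borrows the data for the duration of the pass.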
-
I just want to mention briefly that I've been exploring replacing lyon, and I may be trying to move curve rendering to the shader in the process. The easiest path is to just convert the curves to line segments before tessellation, but I like the idea of keeping the geometries simpler. If I were able to move the curve drawing to the GPU, would that mostly alleviate your concerns? I still want custom shader support regardless, just wanted to mention this because if so I'm more tempted to explore it.
-
It depends on how it would behave. The question is: would zooming in/out affect the width of the line, or does the line render at a fixed width in pixel/clip space? Both behaviors have their use cases.

Plotting in general wants things like markers/lines to be transform agnostic, i.e., markers/lines should have the same size/width of X pixels irrespective of the transform (constant size under zooming). This is unlike typical vector graphics semantics, where zooming in is expected to enlarge everything. So if the tessellation already applies the width in "domain space" and the shader simply multiplies each vertex position by the transform, one ends up with traditional vector graphics semantics, not well suited for plots. To counter the effect of e.g. a zoom-in, one would have to re-tessellate with a smaller width in domain space. However, if one tessellates the lines with zero width in domain space, the vertex shader could first transform each vertex from domain space to pixel space and then apply a half-width displacement there to achieve fixed-width lines.
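The zero-width idea can be sketched on the CPU side (hypothetical names; in practice this math would live in a vertex shader):

```rust
// Hypothetical sketch of fixed-pixel-width lines: tessellate with zero
// width in domain space, then displace in pixel space inside the "vertex
// shader". `normal` is the segment's unit normal in pixel space (under
// non-uniform zoom it would need recomputing after the transform; skipped
// here for brevity).
struct ViewTransform {
    scale: (f32, f32),  // domain units -> pixels (zoom, per axis)
    offset: (f32, f32), // pan, in pixels
}

fn vertex(
    domain_pos: (f32, f32),
    normal: (f32, f32),
    side: f32, // +1.0 or -1.0: which side of the zero-width centerline
    width_px: f32,
    view: &ViewTransform,
) -> (f32, f32) {
    // 1. Domain space -> pixel space: this is where pan/zoom apply.
    let px = (
        domain_pos.0 * view.scale.0 + view.offset.0,
        domain_pos.1 * view.scale.1 + view.offset.1,
    );
    // 2. Half-width displacement in pixel space: constant width under zoom.
    (
        px.0 + normal.0 * side * width_px / 2.0,
        px.1 + normal.1 * side * width_px / 2.0,
    )
}

fn main() {
    // Vertex above the point (1, 0) on a horizontal line, 4 px wide.
    let view = ViewTransform { scale: (10.0, 10.0), offset: (0.0, 0.0) };
    assert_eq!(vertex((1.0, 0.0), (0.0, 1.0), 1.0, 4.0, &view), (10.0, 2.0));
    // Zooming 10x moves the vertex but leaves the 2 px displacement intact.
    let zoomed = ViewTransform { scale: (100.0, 100.0), offset: (0.0, 0.0) };
    assert_eq!(vertex((1.0, 0.0), (0.0, 1.0), 1.0, 4.0, &zoomed), (100.0, 2.0));
}
```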
-
The Kludgine documentation mentions that it is inspired by wgpu's Encapsulating Graphics Work. This made me wonder: would it be generally feasible to carry this architecture forward into Cushy itself, enabling custom widgets that render to wgpu directly?
Motivation
I'm currently exploring the possibilities of using Cushy for interactive data visualization purposes (the goal is basically to have something better than matplotlib in terms of rendering performance and UX). The plotters integration goes in the direction of the functionality I need. However, I assume it has some fundamental technical limitations, because it is not really capable of re-rendering dynamically, e.g. when panning and zooming. I guess under the hood plotters will re-rasterize the entire image for such "view range" changes.

In principle this can be solved much more efficiently with direct access to the shaders, because panning/zooming basically comes down to just modifying a uniform -- no need to re-buffer the entire vertex data if done correctly. There are also some nice tricks that can be done in the shaders with specific optimizations for plotting (for instance, plot markers like circles/squares/triangles can use techniques as described in Antialiased 2D Grid, Marker, and Arrow Shaders). I had implemented a prototype of these ideas in Godot (+ Rust), and would be curious if I could migrate it to Cushy.
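The uniform-based pan/zoom can be sketched on the CPU side (names here are assumptions, not Cushy/Kludgine API):

```rust
// Hypothetical sketch: the vertex buffer stores the data points once, in
// domain space. Pan/zoom only rewrite a small uniform struct; the vertex
// buffer itself is never touched.
struct Uniforms {
    scale: (f32, f32),  // zoom, per axis
    offset: (f32, f32), // pan
}

// What the vertex shader would compute per vertex.
fn apply_view(u: &Uniforms, p: (f32, f32)) -> (f32, f32) {
    (p.0 * u.scale.0 + u.offset.0, p.1 * u.scale.1 + u.offset.1)
}

fn main() {
    let points = vec![(0.0, 0.0), (1.0, 1.0), (2.0, 4.0)]; // uploaded once
    let mut u = Uniforms { scale: (1.0, 1.0), offset: (0.0, 0.0) };
    // Zooming 2x in x updates a handful of uniform bytes, not the buffer.
    u.scale.0 = 2.0;
    assert_eq!(apply_view(&u, points[2]), (4.0, 4.0));
}
```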
Note that leveraging Kludgine probably won't work in this case: for instance, as far as I can see, Kludgine does line tessellation on the CPU, i.e., zooming/panning would probably result in re-running the tessellation, which is what one wants to avoid for performance reasons (I was using a shader-based tessellation technique similar to what's described here).
Is it doable?
Integrating raw wgpu rendering into Cushy probably raises some design questions. I'm just brainstorming here a little bit, and I'm not quite sure these ideas make much sense, because I haven't really understood Cushy's/Kludgine's rendering system yet.
I noticed that the interface described in Encapsulating Graphics Work generally restricts everything to a single render pass. This is probably reasonable, but it also means that the client code has to live with the provided render pass (i.e., things like multisampling are controlled "externally"). It probably also means that custom widgets need to stick to certain rules/conventions imposed by Cushy. Things that aren't really clear to me yet:
Would be curious to hear your thoughts on this!