I'm finding myself wanting to scale/blur/whatever images into an ndarray Array3. But imageproc functions all allocate their own output, which is suboptimal in many cases, for example when:
- the user wants to get outputs into some multidimensional array
- the same operation is performed repeatedly and output buffers could be reused instead of de- and reallocating in a loop
- the user needs the output in some special buffer handed to them by the kernel or a device like a GPU
I'm not sure that adding do_thing_into functions for every function in the library is the best approach, and in/out parameters are cumbersome compared to the current default. Some more generic way to make operations (somewhat) agnostic to where they write their output might be more appropriate.
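To make the first option concrete, here is a minimal sketch of what a _into-style function could look like: the caller supplies the output buffer, so it can live in a reusable Vec, an ndarray's backing storage, or device-mapped memory. The name invert_into and its signature are hypothetical, not part of imageproc.

```rust
/// Invert an 8-bit grayscale image, writing the result into a
/// caller-provided buffer instead of allocating a fresh one.
/// (Hypothetical sketch; `invert_into` does not exist in imageproc.)
fn invert_into(src: &[u8], out: &mut [u8]) {
    assert_eq!(src.len(), out.len(), "output buffer has the wrong size");
    for (d, &s) in out.iter_mut().zip(src) {
        *d = 255 - s;
    }
}

fn main() {
    let src = vec![0u8, 100, 255];
    // Allocate the output once and reuse it across iterations,
    // avoiding the de-/reallocation the current API forces in a loop.
    let mut out = vec![0u8; src.len()];
    for _ in 0..3 {
        invert_into(&src, &mut out);
    }
    assert_eq!(out, vec![255, 155, 0]);
}
```

The obvious downside, as noted above, is that every function in the crate would need such a twin, which is why a more generic mechanism may be worth exploring instead.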
I'm willing to put some time into this, but given that it would break literally every API in the crate, I wanted to ask for some opinions before investigating how it could be done.
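For the "more generic" direction, one possible shape is a small output-sink trait that operations are written against, so the same code can fill a Vec, an ndarray view, or a mapped device buffer. This is only a sketch under assumed names: PixelSink and threshold_into do not exist in imageproc.

```rust
/// Anything that can hand out a mutable byte slice of a required size.
/// (Hypothetical trait; implementations for ndarray views or GPU-mapped
/// slices could be added without touching the operations themselves.)
trait PixelSink {
    fn pixels_mut(&mut self, len: usize) -> &mut [u8];
}

impl PixelSink for Vec<u8> {
    fn pixels_mut(&mut self, len: usize) -> &mut [u8] {
        // Grow (or keep) the existing allocation as needed.
        self.resize(len, 0);
        &mut self[..len]
    }
}

/// Threshold a grayscale image into any `PixelSink` (hypothetical).
fn threshold_into<S: PixelSink>(src: &[u8], t: u8, sink: &mut S) {
    let out = sink.pixels_mut(src.len());
    for (d, &s) in out.iter_mut().zip(src) {
        *d = if s > t { 255 } else { 0 };
    }
}

fn main() {
    let src = [10u8, 200, 120];
    let mut buf: Vec<u8> = Vec::new(); // reused across calls
    threshold_into(&src, 128, &mut buf);
    assert_eq!(buf, vec![0, 255, 0]);
}
```

A design like this keeps the per-function surface small (one generic parameter instead of a _into twin per function), at the cost of the API break already mentioned.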