
Conversation

@marc-fouquet
Contributor

This is a preview of my proposed changes to the tone equalizer module. It still contains a lot of debug code and has known problems; see below.

This message covers only the code aspects. I will write a detailed post from a user perspective on pixls.us.

Changes

  • Introduced post_scale, post_shift and post_auto_align settings, which allow adjusting mask exposure and contrast AFTER the guided filter calculation.
    • The histogram calculation is now split into two steps. First, a very detailed histogram is calculated over a large dynamic range. The GUI histogram and internal parameters used for automatically computing post_scale and post_shift are derived from this histogram.
    • The new parameters are not actually applied to the luminance mask, but to the look-up table that is used to modify the image.
    • With post_auto_align=custom, post_scale=0, post_shift=0, the results are the same as with the old tone equalizer (I was not able to get a byte-identical export, but in GIMP the difference between my version and 5.0 was fully black in my tests).
  • Changed upstream pipe change detection from dt_dev_pixelpipe_piece_hash to dt_dev_hash_plus after I noticed that the module constantly re-computed the guided filter, even though this was not necessary.
  • Added experimental coloring to the curve in the GUI; it now turns orange or red when the user does something that is probably not a good idea:
    • Raising shadows/lowering highlights with the guided filter turned off.
    • Lowering shadows/raising highlights with the guided filter turned on. The user probably expects a gain in contrast here, but the guided filter will work against this.
    • Setting the downward slope of the curve to be too steep.
  • UI changes:
    • Sliders (previously on the "simple" page) are now located in a collapsible section beneath the histogram.
    • Made the histogram/curve graph resizable (see issues!).
  • In my efforts to understand the code, I renamed things that I found confusingly named (e.g. compute_lut_correction to compute_gui_curve) and reordered the functions. As a consequence, I have touched almost every line of code, so diffs will not be helpful in tracking my changes.
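The upstream change detection mentioned above boils down to a compare-and-cache pattern: remember the hash of the upstream pipe output and only redo the expensive guided-filter work when it actually changed. A minimal sketch with hypothetical names (a plain uint64_t stands in for dt_hash_t; this is not the actual darktable API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t hash_t; // stands in for dt_hash_t

// Hypothetical cache state: the last upstream hash we computed the
// luminance mask for, and whether that mask is still usable.
typedef struct cache_t
{
  hash_t saved_upstream_hash;
  bool luminance_valid;
} cache_t;

// Returns true if the guided filter must be recomputed because the
// upstream pipe output changed since the last run.
static bool needs_recompute(cache_t *c, hash_t current_upstream_hash)
{
  if(c->luminance_valid && c->saved_upstream_hash == current_upstream_hash)
    return false; // cached luminance mask is still valid
  c->saved_upstream_hash = current_upstream_hash;
  c->luminance_valid = true;
  return true;
}
```

The fix described in the bullet amounts to choosing a hash that only changes when upstream output really changes, so this check stops returning true on every pipe run.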

Known issues

To be honest, I had more problems with the anatomy of a darktable module (threads, params, data, gui-data, all the GTK stuff) than with the actual task at hand.

Known problems are:

  • Pulling the histogram/curve graph small causes DT to crash. I am clearly still missing something here. There is also no horizontal bar on mouseover to indicate that the graph can be resized.
  • When post_auto_align is used to set the mask exposure, the values for post_scale and post_shift are calculated in PREVIEW and used in FULL. However, other pixel pipes (especially export) calculate the mask exposure on their own and may get a different result that leads to a different output.

Things I noticed about the module (a.k.a issues already present in 5.0)

  • Resetting the module multiple times makes the histogram disappear until the user moves a slider.

Related Discussion

Issue #17287

@marc-fouquet
Contributor Author

The detailed explanation is here now: https://discuss.pixls.us/t/tone-equalizer-proposal/49314

@MStraeten
Collaborator

macOS build fails

~/src/darktable/src/iop/toneequal.c:1343:94: fatal error: format specifies type 'long' but the argument has type 'dt_hash_t' (aka 'unsigned long long') [-Wformat]
 1343 |     printf("toneeq_process PIXELPIPE_PREVIEW: hash=%ld saved_hash=%ld luminance_valid=%d\n", current_upstream_hash, saved_upstream_hash,
      |                                                    ~~~                                       ^~~~~~~~~~~~~~~~~~~~~
      |                                                    %llu
1 error generated.
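The warning above comes from printing a 64-bit hash with %ld. Assuming dt_hash_t is an unsigned 64-bit type (which is what the error message says), the portable fix is the PRIu64 macro from <inttypes.h>, which expands to the correct conversion specifier on every platform. A minimal sketch:

```c
#include <assert.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef uint64_t dt_hash_t; // per the error message: an unsigned 64-bit type

// Format the debug line portably. PRIu64 picks the right conversion
// specifier for uint64_t everywhere, fixing the macOS -Wformat error
// ('long' vs 'unsigned long long').
static int format_debug_line(char *buf, size_t len,
                             dt_hash_t cur, dt_hash_t saved, int valid)
{
  return snprintf(buf, len,
                  "toneeq_process PIXELPIPE_PREVIEW: hash=%" PRIu64
                  " saved_hash=%" PRIu64 " luminance_valid=%d",
                  cur, saved, valid);
}
```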

@wpferguson
Member

wpferguson commented Apr 6, 2025

Does this preserve existing edits?

I just imported some existing tone equalizer presets. Applying them has no effect, so I believe this version isn't backward compatible and will break existing edits.

@@ -126,35 +128,60 @@

DT_MODULE_INTROSPECTION(2, dt_iop_toneequalizer_params_t)
Collaborator

You need to update the version here to support old edits - legacy_params() only gets called if the stored version is less than the version given here (and new edits would get confused with old ones without the version bump).

@ralfbrown
Collaborator

Does this preserve existing edits?

Looks like a missing version bump - the code is present to convert old edits, but darktable doesn't call it since it thinks they're still the current version.

Member

@TurboGit TurboGit left a comment


First very quick review (important as otherwise this will break old edits).

Not tested yet.

n->quantization = 0.0f;
n->smoothing = sqrtf(2.0f);

// V3 params
Member

Please keep this as new_version = 2. legacy_params() will be called multiple times until we reach the last version. The rule here is that we never have to touch old migration code; we just add a new chunk that goes one step toward the final version.


const dt_iop_toneequalizer_params_v2_t *o = old_params;
dt_iop_toneequalizer_params_v3_t *n = malloc(sizeof(dt_iop_toneequalizer_params_v3_t));

Member

Since all your new fields are at the end of the struct, just do:

memcpy(n, o, sizeof(dt_iop_toneequalizer_params_v2_t));

n->quantization = o->quantization;
n->smoothing = o->smoothing;

// V3 params
Member

And keep only this section below:
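Putting the review suggestions together, a one-step v2 → v3 migration chunk might look like the following sketch. The struct fields and function shape here are hypothetical, not the actual darktable introspection API; the point is the memcpy of the unchanged v2 prefix plus neutral defaults for the v3 fields appended at the end, so old edits reproduce the old behaviour:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

// Hypothetical layout: v3 is v2 plus new fields appended at the end.
typedef struct { float noise; float quantization; float smoothing; } params_v2_t;
typedef struct
{
  float noise; float quantization; float smoothing;        // unchanged v2 fields
  float post_scale; float post_shift; int post_auto_align; // new in v3
} params_v3_t;

// One migration step, v2 -> v3: copy the old struct verbatim (the new
// fields sit at the end) and set defaults that match the old tone
// equalizer, so converted edits look unchanged.
static params_v3_t *migrate_v2_to_v3(const params_v2_t *o)
{
  params_v3_t *n = malloc(sizeof(params_v3_t));
  memcpy(n, o, sizeof(params_v2_t));
  n->post_scale = 0.0f;   // neutral: identical to the old behaviour
  n->post_shift = 0.0f;
  n->post_auto_align = 0; // "custom"
  return n;
}
```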

@marc-fouquet
Contributor Author

Thanks for your feedback. I will provide a new version in a few days which includes the changes that were suggested here.

Some advice on my two major roadblocks would be helpful:

  1. The auto-alignment (post_scale/post_shift) is calculated in PIXELPIPE_PREVIEW and I would like to use the results in all pipes. It is easy to get this data to the main window (PIXELPIPE_FULL) by storing it in g. However I also need a way to apply the same values to exports (which don't have g). Since the guided filter is not scale-invariant, the results would differ somewhat if I re-calculated the alignment with pipe-local data during export.
  • Does it make sense to store this data in p (from process running PIXELPIPE_PREVIEW) using fields that are not associated with GUI elements? These fields could be floats that are initialized to NaN and replaced with real values when they are available, commit_params would copy them to d.
  • Are there cases in which an image has never seen a GUI at all, but still needs to process in a pipe correctly?
  2. The problem with resizing the graph. I have only done two things:
  • In gui_init
  g->area = GTK_DRAWING_AREA(dt_ui_resize_wrap(NULL,
    0,
    "plugins/darkroom/toneequal/graphheight"));
  • and in _init_drawing
  g->allocation.height -= DT_RESIZE_HANDLE_SIZE;
  • After that, resizing the graph works, but no handle is displayed and it is possible to drag the graph too small, which results in a crash.

@paperdigits
Contributor

@TurboGit please note there has been a lot of discussion in the thread on pixls. Please take that into consideration.

@AxelG-DE

I am late to the party and I will also keep silent again for private reasons IRL.

From my perspective, I do not have many issues with ToneEQ.

There is only one thing that has bothered me since day one of this module:

  • mask exposure/contrast compensation sliders are on masking tab
  • histogram is on the advanced tab
  • For precise attenuation one needs to jump back and forth between those two tabs.

I pretty much disliked this and had long arguments with the author, Aurélien Pierre, at the time. He kept trying to convince me that it was not doable differently for this or that reason (as he usually does). The bar indicator on the masking tab did not guide me as precisely as he always claimed it would.

Nowadays I have mapped those two sliders to two rotaries on my midi-board (X-touch mini) and I can attenuate while looking at the advanced tab.

The luminance estimator / preserve details is another thing slightly above my head but from time to time I use it.

All the rest, I barely touch

After setting the mask, I hover the mouse over my image and scroll the wheel (for this I sometimes need to toggle the module off and on, as the rotaries seem to mess with the module focus).

For me the “simple” tab can just be hidden totally. Besides, please do not clutter the advanced tab.

I hope my workflow (above) will not be destroyed and nor the old edits.

In other words: Never change a running system
Thank you!

@TurboGit
Member

@paperdigits : Yes, I've seen and followed the discussion a bit, but it became so heated that it has almost no value to me at this point, so I no longer follow it on pixls. We'll see if part of it is moved here in a PR.

@wpferguson
Member

Existing user defined presets no longer work.

As for the included presets, whenever I apply one, the histogram contracts to half of the horizontal space and moves to the left edge. If I move the histogram back where I want it and apply a preset again, the same problem occurs.

@wpferguson
Member

Showing the mask with preserve details shows no difference between no, EIGF, and averaged EIGF.

Masks are vastly different between tone equalizer on current master and this PR.

New masks...

(five mask screenshots)

versus current master

(five mask screenshots)

@jenshannoschwalm
Collaborator

  • Does it make sense to store this data in p (from process running PIXELPIPE_PREVIEW) using fields that are not associated with GUI elements? These fields could be floats that are initialized to NaN and replaced with real values when they are available, commit_params would copy them to d.

  • Are there cases in which an image has never seen a GUI at all, but still needs to process in a pipe correctly?

Ad 1) Nope, you must not use the parameters for keeping data while processing the module. We have the global module data; you might use that to keep data shared by all instances of a module, but you'd have to implement locking and "per instance" handling on your own. Unfortunately, we currently don't have "global_data_per_instance" available.

Ad 2) Yes, exports and the cli interface.

@marc-fouquet
Contributor Author

@wpferguson Are you sure that the settings were the same?

The version in this PR had the bug with the version number introspection, so it did not convert existing settings correctly. It is probably better to wait for a new version to test this.

@marc-fouquet
Contributor Author

  • Are there cases in which an image has never seen a GUI at all, but still needs to process in a pipe correctly?

Ad 2) Yes, exports and the cli interface.

So it probably does not make sense at all to depend on values that come from the GUI during export.

The core problem is the scale-dependence of the guided filter. An alternative approach for exporting would be to create a downscaled copy of the image (sized like the GUI preview), apply the GF, get the values I need and then apply them to the full image - essentially simulating the preview calculation. Not the most efficient approach, but it would only be needed during export when auto_align is used.

@rgr59

rgr59 commented Apr 12, 2025

Not sure I understood the last post correctly, but if I did, in my opinion there would be a problem.

Firstly, for the CLI case, where there is no GUI, how can the GF computation be done on a downscaled image sized like the GUI preview? But also if there is a GUI, I think the export result must not depend on the arbitrary size of the darktable window (and thus the preview size) at the time the export is done. (Later exports of the image with unchanged history stack and same export settings must always yield identical results.)

@jenshannoschwalm
Collaborator

So it probably does not make sense at all to depend on values that come from the GUI during export.

Definitely not, it won't generally work.

The core problem is the scale-dependence of the guided filter.

I didn't check/review your code in its current state, but are you sure you set it up correctly? There are some issues, but we use the feathering guide all over with masks, and the results are pretty stable for me.

About keeping data/results per module instance: this has been bothering me too on another project. I will propose a solution pretty soon that might help you ...

@marc-fouquet
Contributor Author

Firstly, for the CLI case, where there is no GUI, how can the GF computation be done on a downscaled image sized like the GUI preview? But also if there is a GUI, I think the export result must not depend on the arbitrary size of the darktable window (and thus the preview size) at the time the export is done. (Later exports of the image with unchanged history stack and same export settings must always yield identical results.)

As far as I understand it, the preview thread calculates the navigation image shown in the top left of the GUI. However the actual image that the thread sees is much bigger (something like 1000px wide), so the navigation image must be a downscaled version. I hope (but have not yet checked) that the size of this internal image is constant.

@marc-fouquet
Contributor Author

I didn't check /review your code in it's current state

The code in the PR is outdated and has known problems, not much use looking at it now. I will update it as soon as I have a somewhat consistent state.

but are you sure you did setup correctly? There are some issues but we use feathering guide all over using masks and results are pretty stable to me.

Of course it is possible that I might have broken something, but as far as I am aware, I did not change anything about the guided filter calculation but only modified what happens with the results.

About keeping data/results per module instance: this has been bothering me too on another project. I will propose a solution pretty soon that might help you ...

This sounds nice, but my next attempt will be trying to avoid this.

@ralfbrown
Collaborator

Note that your code can figure out how much the image it has been given has been downscaled in order to determine scaled radii and the like to simulate appropriate results. A bunch of modules with scale-dependent algorithms do this. It isn't perfect, but does yield pretty stable and predictable results.

Look for piece->iscale and roi_in->scale in e.g. sharpen.c and diffuse.c. Most modules access these in process(), but it looks like tone equalizer actually accesses and caches this info in modify_roi_in().
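The iscale/roi_in->scale approach amounts to scaling user-facing radii by the pipe's downscale factor before applying the algorithm. A minimal illustrative helper (the names piece->iscale and roi_in->scale are darktable's; this helper and its clamping policy are an assumption, not what sharpen.c or diffuse.c literally do):

```c
#include <assert.h>

// Convert a radius the user chose at full resolution into a radius
// appropriate for this pipe's downscaled buffer.
//   roi_scale: how much this pipe's buffer is scaled relative to full
//              resolution (e.g. 0.5 for a half-size preview);
//   iscale:    extra input scaling applied by earlier stages.
static inline float scaled_radius(float user_radius, float roi_scale, float iscale)
{
  const float r = user_radius * roi_scale / iscale;
  return r < 1.0f ? 1.0f : r; // never go below one pixel
}
```

With a pattern like this, a blur or guided-filter radius behaves consistently across PREVIEW, FULL, and export, even though each pipe sees a differently sized buffer.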

@marc-fouquet
Contributor Author

I have been playing around with the tone equalizer some more and ran into a bit of a roadblock.

To recap, what I do is:

Image => Grayscale + Guided filter => Mask => Mask Histogram => Percentile values => Auto Scale/Shift values
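The histogram-to-percentile step in this chain can be sketched as follows. The bin layout, EV range, and the shift/scale derivation are assumptions for illustration, not the module's actual code:

```c
#include <assert.h>

#define NBINS 256
#define EV_MIN (-16.0f)
#define EV_MAX 8.0f

// Return the EV value at a given percentile (0..1) of a luminance-mask
// histogram whose bins span [EV_MIN, EV_MAX] uniformly.
static float histogram_percentile(const unsigned hist[NBINS], float pct)
{
  unsigned long total = 0;
  for(int i = 0; i < NBINS; i++) total += hist[i];
  const unsigned long target = (unsigned long)(pct * (float)total);
  unsigned long cum = 0;
  for(int i = 0; i < NBINS; i++)
  {
    cum += hist[i];
    if(cum >= target)
      return EV_MIN + (EV_MAX - EV_MIN) * ((float)i + 0.5f) / NBINS;
  }
  return EV_MAX;
}

// Map the observed [p_lo, p_hi] EV range onto a target range, giving a
// shift (mask exposure) and scale (mask contrast).
static void auto_align(float p_lo, float p_hi,
                       float target_lo, float target_hi,
                       float *shift, float *scale)
{
  *scale = (target_hi - target_lo) / (p_hi - p_lo);
  *shift = target_lo - *scale * p_lo;
}
```

The scale-dependence problem described below follows directly from this: if the mask differs between pipes, so do its percentiles, and so do the derived shift/scale values.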

The buffers in the different pipelines (PREVIEW with GUI and the export) have different sizes and the guided filter is scale-dependent, so when I compute statistics over the mask, a systematic error is expected. The final calculated values deviate so much that there is a visible difference between the image shown in DT and the export.

My idea to overcome this was to downscale the image during export to the same size as the preview and use the downscaled version to calculate the statistics (essentially simulating the preview pipe during export). However, the results were still different, even though the calculation was done with images of the same size and with the exact same parameters.

Then I added debugging code to write the internal buffers into files and I discovered the reason:

(comparison screenshot)

  • The image on the left is a crop from the input of the PREVIEW pipeline.
  • The image on the right is from the export pipeline. The input buffer was downscaled to the same size as PREVIEW using dt_interpolation_resample. I tested the different interpolation types; this example uses DT_INTERPOLATION_BILINEAR, which should, in my understanding, be the least sharp option.

However the left image (PREVIEW) is still a lot blurrier than my downscaled version. I would have expected them to be mostly the same.

Using a blurry version of the image is not ideal for my purposes. However, the bigger problem is that I need to know what happened to the image in the preview pipe if I want to replicate the same steps in the export pipe.

I tried to find the relevant parts in the DT code, but have had no success so far. I am also interested in how the size of the PREVIEW pipe image is calculated; the height seems to fluctuate the least and is between 880 and 900 pixels.

@ralfbrown
Collaborator

The demosaic module downscales if the output dimensions requested by the following module are small enough. FULL is almost certainly getting more data out of demosaic than PREVIEW, which is reflected in the sharper image after downscaling. Run with -d perf to see the image sizes being passed along the pixelpipe.

@marc-fouquet force-pushed the 2025-04_toneequal_preview branch from 7c707c2 to 64fc0de May 1, 2025 08:32
@marc-fouquet
Contributor Author

I finally have a version that I consider good enough to show publicly. It would be nice if someone would take the time to look at my code; e.g. there are a few "TODO MF" markers with open questions.

The module should be (if there are no bugs) compatible with old edits. I have checked that with parameters "align: custom, scale: 0, shift: 0" the results are the same as in 5.0.1.

Most of my changes were not that much effort. Scaling the curve after the guided filter, coloring the curve, changing the UI (even though it still has a few issues) was not that difficult. The one thing that was hard and got me stuck for weeks is the auto alignment feature:

  • If requested by the user, the module should determine the mask exposure and contrast automatically.
  • These values should automatically adapt to upstream image changes.
  • The result should be the same during GUI operations and during export.

The data that is available to the pixelpipes during GUI operations is different from the export. The FULL pipe may not see the whole image, so it is not suitable for calculating mask exposure/contrast.

The PREVIEW pipe sees the image completely, but it is scaled down and pretty blurry. The guided filter is sensitive to both of these effects, so statistics on the PREVIEW luminance mask deviate significantly from statistics over the whole image.

The (unfortunately not so nice) solution to this problem is to request the full image in PIXELPIPE_FULL when necessary. Of course this has an impact on performance. However, in practice I found it acceptable, and it only occurs when the user explicitly requests auto-alignment of the mask - so users who use the module as before should not experience performance degradation (unless I accidentally broke something, e.g. OpenMP).

Known issues:

  • UI graph resizing is still broken. It is possible to drag the graph too small and crash the program.
  • In the auto-align case, the UI needs both PIXELPIPE_PREVIEW and PIXELPIPE_FULL to be completed to draw the histogram. If one is missing (which can easily happen), no histogram is drawn. (Even in 5.0.1 there are similar situations, e.g. no histogram is drawn after resetting the module twice.)
  • My version was forked from master a while ago. I have looked through the changes on master and applied most of them. The folders for the default presets are still missing, but these are not a problem. The upstream change detection (hash) has been modified on master; in my version I also noticed that the original code did not work correctly, but my fix is different.

Other notes:

I have seen #18722 and, to be honest, I am not sure what it does exactly, but I wondered if it would be possible to give modules like tone equalizer access to an "early stable" version of the image - a version from after demosaic, but before any modules that users typically use to change the image. If this were possible, I would switch the luminance mask calculation to this input when the user requests auto-alignment, so re-calculating the mask would not be necessary as often.

Providing modules with access to an early version of the image would also have other use-cases, like stable parametric masks that don't depend on the input of their respective module.

@TurboGit
Member

TurboGit commented May 3, 2025

However in practice I found it acceptable and it only occurs when the user explicitly requests auto-alignment of the mask

And only once when the auto-align is changed, right?

  • UI graph resizing is still broken. It is possible to drag the graph too small and crash the program.

Indeed, you can even resize the graph to a negative size :)

  • The folders for the default presets are still missing, but these are not a problem.

Indeed, we need back the hierarchical presets naming. Should be easy.

I have seen #18722 and, to be honest, I am not sure what it does exactly, but I wondered if it would be possible to give modules like tone equalizer access to an "early stable" version of the image - a version from after demosaic, but before any modules that users typically use to change the image. If this were possible, I would switch the luminance mask calculation to this input when the user requests auto-alignment, so re-calculating the mask would not be necessary as often.

A question for @jenshannoschwalm I suppose.

During my testing I found the auto-align a very big improvement indeed. I would probably have used the fully-fit as default or maybe the "auto-align at mid-tone". BTW, since the combo label is "auto align mask exposure" I would use simplified naming for the entries:

  • custom
  • at shadows
  • at mid-tones
  • at highlights
  • fully fit

Member

@TurboGit TurboGit left a comment


Some changes proposed while doing a first review.

To me the new ToneEQ UI is nice and better than what we have in current master.

I would suggest addressing the remaining minor issues (crash when resizing the graph) plus the changes proposed here, and doing a clean-up of the code to remove dead code and/or commented-out code, probably also removing the debug code. Then I'll do a second review.

From there we should also have some other devs testing it, as such drastic changes may raise issues for others. Again, to me this is a change in the right direction.

n->quantization = 0.0f;
n->smoothing = sqrtf(2.0f);

*new_params = n;
*new_params_size = sizeof(dt_iop_toneequalizer_params_v2_t);
*new_version = 2;
*new_params_size = sizeof(dt_iop_toneequalizer_params_v3_t);
Member

Looks like this is wrong; we do a one-step update from v1 to v2. As said previously, we don't want to change the older migration path.

Member

As said previously, this change should be reverted. We want to keep the step migration from v1 to v2.

@TurboGit TurboGit added this to the 5.2 milestone May 3, 2025
@TurboGit TurboGit added the priority: medium label (core features are degraded in a way that is still mostly usable, software stutters) May 3, 2025
@jenshannoschwalm
Collaborator

@AxelG-DE @marc-fouquet

I couldn't really follow the internal changes and discussions, but the last example with the dark ring around the sun might point to a principal problem of your approach: when and how do you apply the guided filter?

@marc-fouquet
Contributor Author

@AxelG-DE @jenshannoschwalm This is the inverted brightness effect from my very first post on the forum (Pixel A should be brighter than B, but the downward slope of the curve is so steep that B is brighter than A). This is also what the red curve warning is for.

I have played around with a similar image, it is possible to run into the same effect on 5.0.1.

TE has always produced various artefacts when pushing things far, and these kinds of sunset pictures do that. I will keep an eye on this, but at the moment I don't think this indicates a bug in the module.
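For reference, the inversion described above happens exactly when the curve falls faster than -1 EV per EV of input, since the output exposure is roughly in + curve(in): at that point d(out)/d(in) goes negative and brightness ordering flips. A small illustrative check over a sampled curve (not the module's actual warning code):

```c
#include <assert.h>
#include <stdbool.h>

// The tone-equalizer output exposure is roughly in_ev + curve(in_ev).
// If the curve falls faster than -1 EV per EV of input, output ordering
// inverts: a pixel that was brighter going in comes out darker.
// curve_ev holds the correction sampled at uniform input steps step_ev.
static bool curve_inverts_brightness(const float *curve_ev, int n, float step_ev)
{
  for(int i = 1; i < n; i++)
  {
    const float slope = (curve_ev[i] - curve_ev[i - 1]) / step_ev;
    if(slope < -1.0f) return true; // d(out)/d(in) = 1 + slope < 0
  }
  return false;
}
```

A check like this is the natural trigger for the red "too steep" coloring of the curve.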

@s7habo

s7habo commented Jun 14, 2025

@marc-fouquet The TE now works as expected. Thanks for the quick fix!

TE has always produced various artefacts when pushing things far, and these kinds of sunset pictures do that. I will keep an eye on this, but at the moment I don't think this indicates a bug in the module.

Yes, I can confirm that. That was one of the reasons why I don't like to have too much variation in the curve.
A less steep and/or linear slope gives the best results.

(screenshot)

I have a suggestion in this regard. I just need to think carefully about how best to formulate it.

@AxelG-DE

I also think this is not a bug. From my (user's) point of view, it just comes from the not-squeezed histogram :)

If I start with fully-fit first, then switch to legacy and do not change the compression, I am rather sure I will get the same results.

(Actually, I found another funny thing: when lifting the shadows in fully-fit and then moving to the sun, the mouse cursor showed the striped overexposure indicator, but there was no triangle on the histogram yet.)

@marc-fouquet
Contributor Author

(Actually, I found another funny thing: when lifting the shadows in fully-fit and then moving to the sun, the mouse cursor showed the striped overexposure indicator, but there was no triangle on the histogram yet.)

Thanks, I have added this to the list of things to look at.

I also just found one more issue. The red warnings do not react correctly to legacy contrast boost.

@s7habo

s7habo commented Jun 14, 2025

What works much better in this version of TE than in legacy is the mask contrast / scale histogram function.

@marc-fouquet You have offered different modes for these purposes: full fit, at shadows, at highlights, at mid-tones and custom.

Wouldn't it be possible to have only one mode instead of these different “focus” modes for mask contrast, where you have an additional slider next to mask brightness and mask contrast to set the fulcrum for mask contrast dynamically?

Let me illustrate this with an example. Here I would like to darken the highlights:

(screenshot)

I now turn on the TE; it centers the histogram, and we have an additional slider with which we can control the fulcrum of the mask contrast. This is displayed, for example, with a vertical line in the histogram (here both in red):

(screenshot)

Now I darken the highlights and move the mouse over the area I want to protect from darkening (which should not be changed):

(screenshot)

Now I put the fulcrum right there and increase mask contrast, which gives me even better darkening of highlights:

(screenshot)

In this way, we only need one mode and three sliders: for mask brightness, mask contrast and mask fulcrum.

@marc-fouquet
Contributor Author

I like the idea. I will have to think about the details.

@AxelG-DE

That was one of the reasons why I don't like to have too much variation in the curve.
A less steep and/or linear slope gives the best results.

BTW: I did those only to test and clarify. It was clear that high dynamic range is the most challenging. Usually I would have raised the exposure first, by at least half of what is needed, and then used filmic to compress the highlights 😄

@marc-fouquet
Contributor Author

Just a quick note that I am still working on this. May take a few more weeks.

@marc-fouquet
Contributor Author

My progress has been somewhat slow because of real life. I am now in a more productive phase, but there is more real life on the horizon. Also, whenever I work with the code, I get more ideas about what I would like to change.

But right now the solution to the mask/histogram alignment problem works the way I envisioned it, so I am uploading this version, even though I consider the overall work to be only 1/3 done.

Disclaimers

  • Don't use this on your production library. While it may convert images from current DT correctly, further changes to the data format are very likely and you might find yourself in a position where resetting your edits is the only way out.
  • This is NOT compatible with my earlier version! You will have to reset edits that were made with my old version.
  • Don't look at the code, it is a mess right now.
  • I did not merge upstream changes. My code is very different from master, I will have to look at merge conflicts near the end of development.
  • There are thousands of opportunities for bugs, many of which can make the output worse in subtle ways (which has happened here before). To make it clear, the goals of a bug-free version are:
    • Old edits look the same as before (see the discussion about differences below), as should edits that exclusively use the old controls and leave new controls in neutral positions.
    • Edits that use exclusively the new controls should look similar to the old ones. However, they will deviate somewhat.

How the new version works / features to test now

  • I have removed all the magic features that did stuff automatically without user interaction. My old code contradicted the DT philosophy too much and was confusing for users. Unfortunately this means there seems to be no perfect solution for the mask exposure/contrast/histogram alignment problem that works completely without user interaction. This is due to the way the pipelines work in darktable.
  • Instead there is now the align button, the first UI element on the first UI page, which should align your histogram with a single click.
    • Alignment settings are saved in the module parameters. This is why they are also shown in the UI.
    • Ctrl-click resets alignment to 0.
    • Your own mask exposure/contrast modifications are applied on top of the alignment. If an upstream module changed the image, you can just click "align" again, your custom modifications are preserved.
    • Custom controls include the "pivot" that Boris suggested.
    • The intended casual use of the module is: Click "align", then change the curve. Click "align" again when the upstream image has changed.
  • New control: global exposure
    • This works like an extra exposure module after tone equalizer. It was added because it is common to change the exposure while working with TE and the default exposure module (which is before TE) will mess up TE's mask alignment with each change. It is also very cheap computationally, cheaper than an extra exposure instance.
  • New control: scale curve vertically
    • I often find myself in a situation where I am satisfied with the shape of the curve; I just want the changes to be a bit stronger or weaker. The scale control addresses just that.
    • While it is possible to do edits of up to +/- 4EV with scaling, this is not recommended as strong changes with TE tend to introduce various artifacts.
    • This control makes the 3 variants of each default preset (soft/medium/strong) obsolete. My intention is to deliver only one version of each preset in the final version and ship a wider selection of differently shaped curves instead.

Known issues

  • Curve scaling makes the curve go out of bounds of the drawing area.
  • The orange/yellow color coding about curve steepness is sometimes wrong.
  • New options on the masking page don't work yet, because their functionality has not been implemented.

Image differences to current master

If you compare identically processed images from this version and current master, compute the difference in GIMP and amplify the result with curves, you will see that something like 5% of the pixels differ in one RGB component and a few even in two components. The reason for this:

  • TE does not apply the curve to each pixel immediately. Instead it calculates a LUT from the curve and then applies this LUT to the pixels for better performance.
  • This LUT used to be 80,000 entries interpolated by "nearest neighbor".
  • I have changed it to 8k entries interpolated linearly.
  • In my tests the new LUT was both faster and represented the (relatively flat) curve more accurately. But it is a change, and therefore the images are not identical.
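The difference between the two lookup schemes can be sketched like this (a toy example with a 5-entry LUT; bin layout and clamping in the real module differ):

```c
#include <assert.h>
#include <math.h>

/* Toy LUT lookup sketch -- not the darktable implementation.
 * x is expected in [0,1]; the real module maps EV values to bins. */

/* Nearest-neighbor: round to the closest bin (the old 80,000-entry scheme). */
static float lut_lookup_nearest(const float *lut, int size, float x) {
    int i = (int)(x * (size - 1) + 0.5f);
    if (i < 0) i = 0;
    if (i >= size) i = size - 1;
    return lut[i];
}

/* Linear interpolation between neighboring bins (the new 8k-entry scheme). */
static float lut_lookup_linear(const float *lut, int size, float x) {
    const float pos = x * (size - 1);
    if (pos <= 0.0f) return lut[0];
    if (pos >= (float)(size - 1)) return lut[size - 1];
    const int i = (int)pos;
    const float frac = pos - (float)i;
    return lut[i] + frac * (lut[i + 1] - lut[i]);
}

/* A tiny demo LUT sampling f(x) = (4x)^2 at 5 points. */
static const float demo_lut[5] = {0.0f, 1.0f, 4.0f, 9.0f, 16.0f};
```

With linear interpolation, values between bins are blended instead of snapped to the closest entry, so far fewer entries are needed to represent a smooth curve accurately.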

Future work

All items on the future work list are highly experimental. I want to implement them, check if they work and throw them out again if they are not useful.

  • Luminance Estimator changes:

    • The luminance estimator is responsible for turning the image into a greyscale mask, judging the brightness of each pixel.
    • This has always been an artistic choice disguised as a technical one. The current estimators produce different results and there is no right or wrong choice, but they don't mean much to people who want to edit photos and do not care about math.
    • I would like to add a "custom" mode with RGB sliders equivalent to the "Grey" tab in the Color Calibration module, so photographers can control in an intuitive way what is perceived as "bright" by tone equalizer.
  • Targeted mask contrast:

    • In current TE it is possible to influence mask contrast and exposure as a pre-processing step before the guided filter.
    • I want to add an option to add contrast around a specific luminance.
    • The hope is that this makes it easier for the guided filter to pick up specific difficult edges.
    • This may prevent some halos, but it may also introduce halos on its own. Must be implemented to see if it is useful.
  • Alternative Curve:

    • The most common complaint about TE is the wobbliness of the curve. Usually this has no visible effect on the image, and it is really hard to find a different curve that matches all the requirements of the module.
    • I want to try something easy and pragmatic, like a Catmull-Rom spline or linear segments plus corner cutting. This will be less smooth and therefore probably produce artifacts faster (so it is unlikely to become the new default), but may enable gentle edits without wobble for people who care about this.
  • Also I want to add some special code for HQ mode.
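For reference, a single Catmull-Rom segment is cheap to evaluate and passes exactly through its inner control points, which is what makes it a pragmatic candidate. A sketch (my code, not the module's):

```c
#include <assert.h>
#include <math.h>

/* One Catmull-Rom segment between p1 and p2, with t in [0,1].
 * p0 and p3 are the neighboring control points that shape the tangents.
 * Sketch of the "pragmatic" curve idea; not the darktable code. */
static float catmull_rom(float p0, float p1, float p2, float p3, float t) {
    const float a = 2.0f * p1;
    const float b = p2 - p0;
    const float c = 2.0f * p0 - 5.0f * p1 + 4.0f * p2 - p3;
    const float d = -p0 + 3.0f * p1 - 3.0f * p2 + p3;
    return 0.5f * (a + b * t + c * t * t + d * t * t * t);
}
```

At t = 0 the segment returns exactly p1 and at t = 1 exactly p2, so the curve always passes through the user's control points, unlike a smoothing spline.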

@marc-fouquet
Contributor Author

Pushing this version because I won't have much time to work on it in the next few weeks.

Above disclaimers still apply, but in this version all UI elements do something and the main functionality seems to be working. The added custom luminance estimator and the alternative curve are quite OK.

I'm not satisfied with the "targeted mask contrast" feature. It needs different math or should be removed entirely.

@TurboGit
Member

@marc-fouquet : Thanks, just a note that I'm still looking at this. I wanted to test, but we have conflicts that need to be resolved. Can you do that? I would really like to see this in the next release; do you think that something working could be ready before the end of October? This will leave some time for testing.

@marc-fouquet
Contributor Author

Right now I don't have access to my dev machine. I know that my branch is far behind master, I will fix this when I have the time. For now, can you just test my branch as it is without merging anything?

This version still needs a lot of polish and testing.

There is also this targeted mask contrast thing, which is a side activity and not the core of my contribution; right now it is implemented but does not work correctly, and I am not sure what to do with it.

Merging this for Christmas would be... ambitious. I would rather postpone it by one more release, to make sure that everything is correct and also that the community is OK with my version.

@TurboGit TurboGit modified the milestones: 5.4, 5.6 Nov 6, 2025
@marc-fouquet
Contributor Author

Just pushed my current state. Functionally close to where I want to be, but the code is still a mess.

@TurboGit
Member

TurboGit commented Dec 2, 2025

Nice to see this moving forward. There are some conflicts to be resolved; can you rebase this PR on top of current master? TIA.

My plan, if you agree, is to test this and merge it early 2026 to ensure we have some field testing before the 5.6 release.

@marc-fouquet
Contributor Author

  • My plan is to provide a version that is in an OK state and merges with master within two weeks.
  • After that I want to take 4-6 weeks to explore another tangent (I have one more feature idea).
  • After that it is just testing, cleanup and stabilizing.

@tpinfold

tpinfold commented Dec 2, 2025

I wish you the best of luck with your attempts to improve the tone equalizer module. For my part, the way the sliders for the histogram adjustment have been moved into the Advanced tab in DT 5.3 has been a great improvement. However, one frustration that remains for me is that the auto pickers generally fail to pick the optimal position for the sliders. I hope that your work will also improve that function. I also hope to see your version soon being tested in DT 5.5. Thanks for your efforts.

@marc-fouquet
Contributor Author

State of the module:

  • Everything that I have implemented so far and that I don't consider a failed experiment is included in this commit.
  • Some cleanup has been done.
  • It hopefully merges with current master (it was a bit of a fight with git today).

I don't consider it "done", since I have one idea for a bigger change that I want to try when I have time during the holidays. But if people are curious, this is the release to try.

Caveats:

  • I have been struggling heavily with GTK, I will need some help with the GUI code before this can actually be merged. Besides not being confident with this in general, there is a concrete issue: I have added buttons that hover above the histogram drawing area and those don't behave right (show wrong tooltips, miss clicks).
  • We have to discuss in more detail how suitable it is to let a module modify p without user interaction on upstream changes.
    • Normally the module does not do this any more.
    • However there are undocumented hidden auto-align features that can be activated by right-clicking either one of the align buttons.
    • I think this is useful, especially for users who want to edit many images quickly with styles.
    • However it comes with potential problems that users need to be aware of (e.g. automatic adjustment needs darkroom mode to work; it will not trigger when copy-pasting settings in lighttable view and then immediately exporting the image).
  • Arrangement of elements on the masking page is not final.
  • This has to be tested carefully, bugs are very likely.

@paperdigits
Contributor

The last commit seems to touch so much stuff without reason. Why is that?

@marc-fouquet
Contributor Author

Hmmm.

Thing is, I have been working on this for months without pulling changes from master. Yesterday I had a hard time with git merging it. Maybe it would be easier if I started a new PR branch?

@TurboGit
Member

TurboGit commented Dec 8, 2025

@marc-fouquet : Not needed, it seems to me that all the changes pushed here are your work. There are multiple commits; if you want you can squash them, or I'll do it when merging anyway. I'll try to test later today.

@marc-fouquet
Contributor Author

A quick overview of what was changed compared to the old tone equalizer:

  • Alignment Page

    • Buttons "fully align" and "align exposure/shift" do a basic alignment of the histogram. These are the replacement for the magic wands (the old controls were moved back to the masking page). For manual work "fully align" is recommended; for styles/presets, "align exposure/shift". This base alignment is stored in p; the concrete values are accessible in the tooltip.
    • Manual alignment shift/scale sliders are now applied on top of the base alignment. This has the huge advantage that the manual alignment is not lost when the user has to re-align on upstream changes.
    • Histogram scale pivot: The pivot point for scaling, based on an idea by Boris. This point is marked in the drawing area with a little ^ symbol.
  • Exposure page:

    • Global exposure does the same thing as an exposure module AFTER tone equalizer (so it does not mess up the mask).
    • Scale curve vertically: Make the change stronger or weaker, up to a factor of 2.
    • Curve type: I added Catmull-Rom splines as a pragmatic, less wobbly option.
    • Curve smoothing for Catmull-Rom: 0.5 is basically the optimal value. Smoothing is mirrored at 0, so -0.5 is the same as 0.5, BUT positive values get flatter at the borders of the graph (-8 and 0), while negative ones continue the slope.
  • Masking page:

    • Luminance estimator:
      • This has always been an artistic control, just hidden behind mathematical terms.
      • I added a custom setting that allows mixing the grayscale mask like in color calibration. This makes it possible, for example, to darken bright areas where brightness is defined by looking at the red channel only.
      • I also added Rec. 709 weights, because I was surprised that there was no option that just weights green more strongly than red and blue.
  • Drawing area/graph:

    • If the image was aligned with one of the align buttons and there are upstream changes, an orange circle-arrow will appear. Clicking it re-triggers the last used align button. I expect that people will spend most of their time on the exposure page, so this way they can re-align without changing to the alignment page.
    • 2/4 buttons: Make a bigger vertical EV range visible ("Scale curve vertically" can make the curve 4EV high).
    • Histogram style:
      • Toggles between linear, logarithmic and "linear ignore border bins".
      • "linear ignore border bins" is the most useful. The linear histogram can become very small when a lot of pixels are outside the EV range. I don't like logarithmic because it makes everything look the same and is generally only useful for people who are comfortable with the concept of logarithms.
    • When High Quality Processing is activated (darkroom bottom bar), the drawing area gets an LQ/HQ button. This toggles between the usual approximated histogram (PREVIEW pipe) and the real histogram. It is only a UI change, the HQ histogram is never used internally to calculate anything.
  • Presets:

    • Presets no longer have soft, medium, strong variants, since those can be emulated using "scale curve vertically".
    • There are new "Compress Shadows/Highlights v3" presets.
  • Hidden Auto alignment:

    • Either one of the align buttons can be right-clicked. This will make it automatically update the alignment on upstream changes and store the new values in p. There is a message each time this happens.
    • This will only trigger in darkroom mode, not when copy-pasting a history stack in lighttable mode.
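The custom luminance estimator described above boils down to a weighted sum of the RGB channels; Rec. 709 is just one fixed set of weights. A minimal sketch, with hypothetical names (not the actual darktable code):

```c
#include <assert.h>
#include <math.h>

/* Weighted-channel luminance estimator (sketch only).
 * A "custom" mode would expose the three weights as sliders,
 * analogous to the "Grey" tab in color calibration. */
static float estimate_luminance(const float rgb[3], const float w[3]) {
    return rgb[0] * w[0] + rgb[1] * w[1] + rgb[2] * w[2];
}

/* Rec. 709 luma weighting: green counts far more than red and blue. */
static const float REC709_W[3] = {0.2126f, 0.7152f, 0.0722f};

/* Example custom weighting: judge brightness by the red channel only. */
static const float RED_ONLY_W[3] = {1.0f, 0.0f, 0.0f};
```

With RED_ONLY_W, a saturated red area is treated as "bright" and can be darkened by the curve even if its Rec. 709 luma is low, which is the kind of intuitive control the custom mode is meant to give.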

One suggestion mentioned in the past was auto-aligning once on activation of the module. I have decided against this, since I fear that it might mess things up for people who want to process multiple images exactly the same way (e.g. for focus stacking, panoramas, ...). Auto-aligning affects which pixels in the image are changed, based on statistics over the image content. Normally this is desired, but in those edge cases it might cause errors in later stages (e.g. fusing images does not work right).
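In EV space, the alignment model from the alignment page reduces to a shift plus a scale around the pivot, with the manual adjustment composed on top of the stored base alignment; this composition is why re-aligning does not lose manual modifications. A sketch with hypothetical parameter names (the real module stores these in p):

```c
#include <assert.h>
#include <math.h>

/* Map a mask EV value through base alignment plus manual adjustment.
 * All names are mine; this is a sketch, not the darktable code. */
static float align_ev(float ev, float base_shift, float base_scale,
                      float manual_shift, float manual_scale, float pivot) {
    /* base alignment first (set by the align buttons)... */
    ev = pivot + base_scale * (ev - pivot) + base_shift;
    /* ...then the user's manual correction on top, around the same pivot */
    return pivot + manual_scale * (ev - pivot) + manual_shift;
}
```

When the align button recomputes base_shift and base_scale after an upstream change, manual_shift and manual_scale are untouched, so the user's own adjustment survives.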

@wpferguson
Member

Presets no longer have soft, medium, strong variants

Presets get used in automations, so removing them just forces the user to recreate them so that their automations continue to work.


Labels

  • documentation-pending (a documentation work is required)
  • feature: redesign (current features to rewrite)
  • priority: medium (core features are degraded in a way that is still mostly usable, software stutters)
  • release notes: pending
  • scope: image processing (correcting pixels)
  • scope: UI (user interface and interactions)
