
Conversation

@kwokcb
Contributor

@kwokcb kwokcb commented Nov 13, 2025

Changes

  • Add a JSON output option to allow users to format data as they see fit.
  • Add a Markdown output option to allow preview in Markdown-supported locations such as GitHub repos.
  • Cleaned up HTML output to wrap and format properly; it is now responsive (fills the available width if w<=0 is specified).
  • Fixed a bad memory read (exception) when the same language is specified for 3 diffs.
  • Add a "reduced" image option. Output is roughly 1/10 the size when using the original 512 width (122 MB down to 10 MB), and down to about 3 MB with a 256 width after conversion to PDF.
    • Uses base64 encoding of JPG data embedded into the JSON structure, which can be stored directly in Markdown or HTML (see the sketch after this list). This also means fewer disk reads / writes for diff image generation.
    • Uses OpenCV + numpy, if available, for fast resizing. PIL and OpenImage are much slower for this encoding case.
    • JSON / Markdown storage grows to about 4 MB from 160 KB when embedding encoded images, but is now self-contained.
  • Also replaced the diff implementation with OpenCV + numpy instead of PIL to avoid extra data copies.
  • This allows adding a "difference method" option, which defaults to RMS but can now be set to "COLORMOMENT" to use the OpenCV color moment hash / compare.
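
For reference, here is a minimal sketch of the reduced-image embedding described above, using OpenCV and base64. The function name, record fields, and JPEG quality are illustrative assumptions, not the script's actual API.

import base64
import json
import cv2

def encode_reduced_image(image_path, target_width=256, jpeg_quality=85):
    # Read the image and resize to the reduced width, preserving aspect ratio.
    image = cv2.imread(image_path, cv2.IMREAD_COLOR)
    scale = target_width / image.shape[1]
    resized = cv2.resize(image, (target_width, int(image.shape[0] * scale)),
                         interpolation=cv2.INTER_AREA)
    # Encode to JPG in memory and wrap as base64 so the result can be embedded
    # directly in JSON, Markdown, or HTML with no extra files written to disk.
    ok, buffer = cv2.imencode('.jpg', resized, [cv2.IMWRITE_JPEG_QUALITY, jpeg_quality])
    return base64.b64encode(buffer.tobytes()).decode('ascii') if ok else None

# Hypothetical record layout; the real field names may differ.
record = {'image': 'brass_tiled.png', 'data': encode_reduced_image('brass_tiled.png')}
print(json.dumps(record)[:80])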

Output

Color Moment Hash option output

image

JSON, Markdown, HTML output

image

HTML results responsive (fills) vs packed.

  • Both scale properly on tablet and mobile now.
  • Packed is the default, since the default width (w) is 256.
image

Embedding Example

Markdown embedding. This can be stored in any GitHub repo without the original files.

image

PDF file with reduced images. Straight HTML->PDF save.
Test Results.pdf

- Add in option to filter out rows based on minimum RMS value.
- Add in JSON and Markdown output options.
- Cleaned up HTML output.
@jstone-lucasfilm
Member

This looks really compelling, @kwokcb, and I'm looking forward to trying this out in the render test suite!

@jstone-lucasfilm
Member

I like the added flexibility in the new HTML generation, though when I run the render tests through our usual PDF-generation process, the rendered images seem to be smaller on the page, with an extra white border around them:

MaterialXRenderTests_11_22_2025_GitHub.pdf

Do you think it would be possible to adjust the default HTML generation to restore the original size of the images on the page, without the additional white border?

Preserve the image size, border and spacing from original version.
…0 size if use original 512. (122 to 10 MB). and with 256 width to 3MB.

This makes it easier to store comparison images in a PDF.
The idea is to encode images as base64 JPG data and embed it into the HTML. This also means fewer disk reads / writes for diff image generation.
Uses OpenCV + numpy for fast resize. PIL and OpenImage are much slower for this encoding case.
JSON storage grows to about 4 MB from 160 KB when embedding encoded images.
Replaced diff with OpenCV + numpy instead of PIL to avoid extra data copies.
@kwokcb
Contributor Author

kwokcb commented Nov 25, 2025

@jstone-lucasfilm , @ld-kerley : This is about as far as I'm going to take this change for now.
The data structure built can hold whatever is desired, but by adding base64 encoding it can be self-contained and also produces much smaller output. I have switched to OpenCV since it's the fastest package I could find for this (without external dependencies).

(Jonathan, the HTML formatting is as close to the original static tables as possible.)

@kwokcb
Contributor Author

kwokcb commented Nov 25, 2025

Example run with a small change to add hash compares.
hash_results.pdf

@ld-kerley
Contributor

I've been playing around with this locally and the -e flag is hugely valuable for homing in on the issues - it's all too easy to miss a failure in a sea of success.

Also really liking the markdown output!

I did have to install the opencv Python module to get the diff to work - the old PIL code doesn't seem to act as any sort of fallback for generating the images - but honestly it feels a LOT faster to me using OpenCV.

@jstone-lucasfilm
Member

It sounds like this work has evolved quite a bit since my last test run, just before the holiday, and I'm looking forward to giving the latest version a try!

I have no objection to fully switching from PIL to OpenCV, though I'm curious as to whether we ought to remove the remaining PIL import code from the script, so that we're not depending on two image processing libraries in a single script.

Ultimately, our goal is to use OpenImageIO for all of this logic, but we can consider that a future improvement, perhaps timed with the proposed support for Color Moment Hashing in OIIO.

@kwokcb
Contributor Author

kwokcb commented Dec 2, 2025

I can remove the PIL code. It is the slowest, is less accurate due to type conversion, and I think it won't work with EXR files.

Definitely the aim is to consolidate library usage. I was thinking the next step is to decouple the analysis tools from here, so we have the ability to pass in a verification tool, like what is done for the generateShader.py script. The default could be set to OpenImageIO.

Analysis data can be computed without comparison this way. E.g. hashes can be stored per run, and the similarity test done as needed.
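
As a rough illustration of that per-run idea (file names and record layout here are hypothetical, not part of this PR):

import json
import numpy as np
import cv2

hasher = cv2.img_hash.ColorMomentHash_create()

# Per-run analysis: compute and store a hash per image, with no comparison yet.
image = cv2.imread('glsl/brass_tiled.png')
record = {'image': 'brass_tiled.png', 'hash': hasher.compute(image).flatten().tolist()}
with open('run_glsl.json', 'w') as f:
    json.dump([record], f)

# Later, as needed: reload the stored hash and run the similarity test against
# a hash computed from another run.
stored = np.array(record['hash'], dtype=np.float64).reshape(1, -1)
other = hasher.compute(cv2.imread('osl/brass_tiled.png'))
distance = hasher.compare(stored, other)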

The other query beside color moment support is base64 encoding support in OIIO to avoid data copies conversion via numpy. It may be best to keep these as separate tool options as I can see this as being a client side (web) action as well. TBD.

@lgritz
Contributor

lgritz commented Dec 2, 2025

The other query beside color moment support is base64 encoding support in OIIO to avoid data copies conversion via numpy.

What does this mean?

@kwokcb
Contributor Author

kwokcb commented Dec 10, 2025

The other query beside color moment support is base64 encoding support in OIIO to avoid data copies conversion via numpy.

What does this mean?

Sorry for the poor phrasing. Just noting that the logic to create base64-encoded inlined image data would need to be revisited if using OIIO.

@kwokcb
Contributor Author

kwokcb commented Dec 10, 2025

PIL support removed. Only uses OpenCV and numpy.

- Make an OpenCV image utilities class and clean up code.
- Add in "difference method" as RMS or COLORMOMENT.
- When OpenImageIO has color moment support the utility class may be externalized and passed as an option.
@kwokcb
Contributor Author

kwokcb commented Dec 10, 2025

I don't have much more time to spend on this, so I've yanked out an "image utils" class which currently uses OpenCV. Since it's available, I've also added a "difference method" option to allow color moment hash diffs. RMS is still the default diff method.
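
For context, the default RMS diff amounts to something like the following (a minimal sketch with OpenCV + numpy, not the script's exact code):

import cv2
import numpy as np

def rms_difference(image1_path, image2_path):
    # Load both images as float32 so the subtraction does not wrap around.
    image1 = cv2.imread(image1_path).astype(np.float32)
    image2 = cv2.imread(image2_path).astype(np.float32)
    diff = image1 - image2
    # Root mean square error over all pixels and channels.
    return float(np.sqrt(np.mean(diff * diff)))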

print ("Failed to create image diff between: " + image1Path + ", " + image2Path)
print("- OpenCV and NumPy need to be installed for image diff and base64 for reduced image features to be supported.")

class OpenCVImageUtils:
Contributor Author


If / when it's appropriate an OpenImageIOImageUtils class can be added.

def set_difference_method(self, method: str):
    self.difference_method = method
    if method == 'COLORMOMENT':
        self.hash_function = cv2.img_hash.ColorMomentHash_create()
Contributor Author


We could add more hash comparisons here, including ones which look for geometric details, like MarrHildrethHash.
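
For illustration, an additional hash method could be selected and compared along these lines (a sketch only; the class structure in the PR may differ):

import cv2

# Map difference-method names to OpenCV hash factories (requires opencv-contrib).
HASH_FACTORIES = {
    'COLORMOMENT': cv2.img_hash.ColorMomentHash_create,
    'MARRHILDRETH': cv2.img_hash.MarrHildrethHash_create,
}

def hash_difference(image1_path, image2_path, method='COLORMOMENT'):
    hasher = HASH_FACTORIES[method]()
    hash1 = hasher.compute(cv2.imread(image1_path))
    hash2 = hasher.compute(cv2.imread(image2_path))
    # compare() returns a distance; smaller means more similar.
    return hasher.compare(hash1, hash2)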

@jstone-lucasfilm
Member

Thanks for all of these additional improvements, @kwokcb, and on my side I'd like to run our full GLSL/OSL render suite at reference quality, to see how the latest PDF compares with earlier versions.

@jstone-lucasfilm
Member

@kwokcb Ok, I've had a chance to run reference quality render tests with the new code, and here's what I'm seeing:

Old Layout:
MaterialXRenderTests_Old

New Layout:
MaterialXRenderTests_New

Some issues that I notice with the new layout are:

  • Images are smaller than before on the page (i.e. their display resolution is reduced), with the white space at the top and bottom of the page becoming larger.
  • The text under each image is now left-justified instead of centered, which to my eyes seems less visually effective.
  • We've lost the "GLSL vs. OSL" text under the difference image.

What are your thoughts?

@ld-kerley
Contributor

My thoughts here would be that the benefits gained in this PR far outweigh these minor formatting changes. I would propose we merge this PR, and if someone else feels these formatting changes need to be addressed, that could be proposed in a separate PR.

The addition of the Markdown output and the error thresholding for the diffs are monumentally useful - I am using this every day.

@jstone-lucasfilm
Member

@ld-kerley Alas, we have many teams that depend upon the current output of our render test suite, so it's not quite so simple as merging these incomplete changes and fixing them after the fact.

I'd definitely support the idea, though, of splitting this change into individual components that can be reviewed and merged on their own.

Support for error thresholding, for example, is indeed useful, and would not break existing behavior for any users. Perhaps it's already available in this script, though, based on the earlier work in #2482?

@ld-kerley
Contributor

If these many teams need the current output to stay the same - perhaps we could just host two different scripts.

I would really love to see this merged asap, so I stop having to pollute all my working branches with the cherry-pick across from Bernard's work.

@jstone-lucasfilm
Member

@ld-kerley From your perspective, what are the most important value adds from this PR, that are not yet available in the canonical script? Is the error thresholding in Bernard's version better than the original error thresholding contributed by Tomas?

Once we have a good sense of the important value adds, we can then split them out into separate PRs that make the same improvements without reformatting the standard layout that teams rely upon.

- Restore "vs" string
- Remove unwanted x-indent
- Center text
@kwokcb
Contributor Author

kwokcb commented Dec 18, 2025

I fixed the following:

  • removed the extra x padding
  • restored the "vs" language text (my mistake)
  • centered the text

The size is responsive, so there will be some minor spacing differences.

For "vs" color moment looks like this:

image

Member

@jstone-lucasfilm jstone-lucasfilm left a comment


Hello @kwokcb!

There are some good ideas in this proposal, but I wonder if there are perhaps too many intertwined ideas for a single PR. If the focus of this PR is adding support for Markdown and JSON as alternative outputs, then why is the standard HTML output being reformatted at the same time? And is there any connection between these formatting ideas and the switch from PIL to OpenCV as the underlying image library?

My recommendation would be to simplify this PR as far as possible, e.g. focusing on the addition of Markdown and JSON output, and leaving the other changes to future PRs that can be reviewed independently.

If you need to wrap up your work quickly and move on to more important things, I completely understand, and I'd be happy to help to bring these ideas forward in collaboration with the community, so that you're not on the hook for additional work.

@kwokcb
Contributor Author

kwokcb commented Dec 18, 2025

Not to get hung up in the "weeds", I put up an overall issue with goals (#2731). This was meant to be step 1, though I put in the hash compare since it was trivial and exposes the error measure option. Btw, the HTML was a 1 hr hack from years ago, so I'm not sure why anyone would be stuck on the formatting, as it's mostly unreadable on non-desktop, especially with multiple compares.

I’ll leave it up to you what you want to do with this.

@jstone-lucasfilm
Member

@kwokcb Ah, now I think I'm understanding the source of confusion around this PR.

You're completely correct that this script was written quickly, way back in 2019, and there's plenty of room to improve it.

But in the six years it has been part of the MaterialX project, this script has become the source of our exemplar renders that we share with rendering teams and the MaterialX community, demonstrating the current state of our gold standard render output for each example material in each language.

Most commonly, these exemplar renders are shared as a standardized PDF that presents each of our example materials in a standard order, and because the formatting of these PDFs is so consistent, we're able to compare renders from years ago with modern renders, allowing us to track our visual progress or regressions.

This approach was used, for example, to visualize and measure the improvements when the OSL team added support for energy compensation in their specular BSDFs, with comparisons between old PDFs and new PDFs being a critical part of the validation process.

For these reasons, there's now a very high cost to changing the formatting of our exemplar PDFs, and the benefits of the corresponding changes would need to be extremely high to offset this cost.

For your current PR, since its main benefits are orthogonal to the formatting of the HTML output, I would propose that the best approach is simply to remove the HTML changes entirely, preserving all of the other functionality that will provide value to the community, e.g. Markdown output, JSON output, image library updates, and so forth.

This will provide the best of both worlds, where our exemplar renders from previous years will remain valid for comparison with new renders, and users will have access to all of the new features that you've proposed here.
