
Conversation

@clang-clang-clang
Contributor

This fixes #795.

The changes in b951bf5 added TimeTransform to DisplayDefaults, which accesses GetGpuSupportsHighPrecisionFragmentShaders() to configure the default TimeTransform.

CommandBuffer::GetGpuSupportsHighPrecisionFragmentShaders()

AFAIK, the GL initialization flows differ between the Apple platforms and Android (I'm less familiar with the other platforms). On Apple, the setup may occur at

[Rtt_EAGLContext setCurrentContext:self.context];

In contrast, Android's GL initialization completes on the Java side, before the C/C++ Runtime is initialized. On the Apple platforms, the context may not have been set up yet at that point, so the check may read an incorrect value.
This explains why Android is unaffected while iOS misjudges highp support, so this PR addresses the issue by deferring the TimeTransform initialization in DisplayDefaults.

I'm not entirely familiar with the recent changes to TimeTransform, but I believe lazy loading could be one solution. Alternatively, perhaps Display functionality shouldn't be called from within DisplayDefaults at all, since DisplayDefaults is constructed as part of Display. In other words, would it be possible to set a default value without relying on a highp check?
Hi @ggcrunchy, could you please investigate whether there's a more suitable approach? Thank you.

@ggcrunchy
Contributor

Argh, didn't occur to me that would cause issues.

Honestly, if you're using the default behavior, the mediump results are probably perfectly fine: 50 seconds or whatever it seems to be. I think I just took a guess and thought "these are probably okay" for the numbers. 😄 (Just "not choppy" in the worst case and not immediately repetitive otherwise.)

@clang-clang-clang
Contributor Author

clang-clang-clang commented Nov 14, 2025

I agree with you, but the early check returns the wrong support boolean, causing the shader to just not work.

The primary concern is that the highp check is a one-time operation whose result gets cached. If it's performed too early, it may cache the wrong value. Would it be better, then, to bypass the highp check and provide a default fallback value (applicable across all precisions, perhaps 1 as in earlier versions)?

@ggcrunchy
Contributor

Oh, sorry if it wasn't clear. That's what I meant. 😄 Just use a "safe for mediump" value.

One second would be a pretty low range, too noticeable I think, but mediump in [2^5, 2^6) should still guarantee 1/1024 * (2^6 - 2^5) = 1/1024 * 32 = 1/32 second accuracy at the end of the range, so about 30 FPS granularity, not too choppy. Could push it up to 64 - 1/32 seconds, I think, or just something "nice" like 63.875, to get the longest possible period.

@clang-clang-clang
Contributor Author

😄 Just use a "safe for mediump" value.

Got it. Thank you for your patient explanation.
So I kept the previous value of 50 and simply removed the highp check, and the Graphics/GeneratorViewer sample didn't show any problems.

I need more time to digest precision, frame rate, and jitter avoidance. @~@

@clang-clang-clang clang-clang-clang changed the title Core: defer default TimeTransform init Core: fix iOS highp misjudge by set a default value without relying on a highp check Nov 18, 2025


Successfully merging this pull request may close these issues.

iOS - object.fill.effect = "filter.hue" doesn't work anymore
