
RFC: const ergonomics for NonZero<T> #3786


Open · wants to merge 12 commits into master

Conversation

@sandersaares commented Mar 7, 2025

The std::num::NonZero<T> type allows non-zero integer semantics to be expressed clearly. Yet the
type is only seamless to use if every API with non-zero semantics adopts it, because explicit
to/from conversions are required at any API boundary that differs in its use of NonZero.

The burden of these conversions is especially heavy in tests and examples. This RFC proposes new
coercions to facilitate implicit conversion to NonZero from integral constants, simplifying usage
in tests and examples, where succinctness and readability are paramount.

Rendered
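To make the friction concrete: today every call site must spell out the conversion, while under the proposal a literal would coerce. A minimal sketch (the `batches_needed` API is hypothetical, purely for illustration):

```rust
use std::num::NonZero;

// Hypothetical API with non-zero semantics: a zero batch size is
// statically impossible, so no runtime check is needed inside.
fn batches_needed(total: u32, batch_size: NonZero<u32>) -> u32 {
    total.div_ceil(batch_size.get())
}

fn main() {
    // Today: the conversion is spelled out at every call site.
    assert_eq!(batches_needed(10, NonZero::new(3).unwrap()), 4);

    // Under this RFC, the literal would coerce implicitly:
    // assert_eq!(batches_needed(10, 3), 4);
}
```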


When writing test or example code and using hardcoded constants, you can omit the conversion into
`NonZero<T>` - it is done implicitly at compile time. This only works with constant values (either
`const` variables or literals like `1234`).

Member

I think only literals (or negated literals) should automatically give you a NonZero<T>, not const items. Otherwise you end up with fragile code due to a lack of type safety if the const value could be changed to zero later (e.g. it's measuring the size of a file at compile time and you decide that the file should be empty for whatever valid reason).

Author

Good point. Thinking out loud, the problem is exemplified by:

// constants.rs
const MIN_REDDIT_POSTS_USED_FOR_TRAINING: u64 = 100_000;

// loader.rs
fn load_posts(min_count: NonZero<u64>) { /* ... */ }

load_posts(crate::constants::MIN_REDDIT_POSTS_USED_FOR_TRAINING);

This coercion relationship would be iffy here because the person authoring/touching the constant could reasonably want to consider 0 a valid value. Yet the person authoring load_posts reasonably might design an API that cannot be called with zero. If one day this constant becomes 0, the breakage would be surprising.

If the author of MIN_REDDIT_POSTS_USED_FOR_TRAINING wants zero to be considered invalid, they could simply make it:

// constants.rs
const MIN_REDDIT_POSTS_USED_FOR_TRAINING: NonZero<u64> = 100_000;

Thinking about the compatibility/adoption angle here - there may be constants that already exist today that are not NonZero in the type system (due to issues such as the ones that motivate this RFC) but are logically non-zero, in which case the type of the constant would need to be updated to benefit. Which seems right and proper?

I will update the RFC to only apply to literals.
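For reference, the NonZero-typed constant suggested above is already expressible on stable Rust (a sketch, with the constant name shortened; `Option::unwrap` in const context is stable as of Rust 1.83):

```rust
use std::num::NonZero;

// With the invariant in the type, changing the value to 0 becomes a
// compile-time error rather than a surprise at the call site.
const MIN_POSTS: NonZero<u64> = NonZero::new(100_000).unwrap();

fn main() {
    assert_eq!(MIN_POSTS.get(), 100_000);
}
```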

Member

Agreed on literals only. Once you have a value, that value has a type, and it shouldn't change to another type (with a minor asterisk for subtyping). This is about how a type variable becomes a concrete type; it doesn't change what happens once something already has a type.

Contributor

Negative literals are not actually literals and my PR making them so was rejected. rust-lang/rust#136860 (comment)

Member

negative literals are very much a thing semantically: https://play.rust-lang.org/?version=stable&mode=debug&edition=2024&gist=12c08cf39ffe1d13e47f9e4726a725e6
-{128i8} will panic in debug mode, whereas -128i8 won't.
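The value-level consequence can be checked without overflowing literals; this sketch shows why folding the sign into the literal matters:

```rust
fn main() {
    // `-128` is treated as a single negative literal, so it is i8::MIN.
    let min: i8 = -128;
    assert_eq!(min, i8::MIN);

    // Negating i8::MIN as a runtime *value* overflows, which is what
    // `-{ 128i8 }` runs into: the braces force negation of an
    // expression rather than a negative literal.
    assert_eq!(i8::MIN.checked_neg(), None);
}
```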

@AaronKutch commented Mar 19, 2025

That reminds me of how the decision was made to inherit the ridiculous possibility of overflowing literals from other languages. Representations invalid for a given bitwidth should always have been an unconditional error that no feature flag can allow, an existential impossibility after literal-to-value conversion time. (In fact, I'm sure there is some weird edge casing here that would have to exist if we implemented this RFC properly; we should fix this and the parsing issue over an edition, and only allow the RFC trait to work in that new edition.) The way negatives are currently handled will necessarily add a confounding layer on top of that and this RFC.
Is there actually a reason why negatives have to be parsed separately, and could we fix it across an edition? Crudely speaking (I don't know the precise terminology and semantics): if a - appears before a string leading with 0-9, we associate that sign with that expression as a unified literal (perhaps also requiring no spaces between them), except when the - also appears after another plain expression, in which case it is always interpreted as subtraction. Stated differently, a block with {expr-128i8} is always interpreted as subtraction of a literal from an expression unless expr is nil, in which case the - is always part of the literal; {--128i8} just results in an invalid - left-hand-side expression. Alternatively, whether there is a space after - could be made significant. Could this happen across an edition, or would it mess too much with the procedural macro cross-edition compatibility situation?

Member

I went looking in the compiler for where it handles negative literals; afaict it happens when translating the unary negation operator from HIR to THIR: it looks at the translated-to-THIR argument and, if it's a literal, merges the negation into the THIR literal. That looks through parentheses (since they disappear in THIR), but shouldn't look through any other operations, hence -{128i8} errors: braces do more than just evaluate their inner expression, they also copy/move it.

https://github.com/rust-lang/rust/blob/70237a8cb905fc48ed6148d2f2dba756535a5940/compiler/rustc_mir_build/src/thir/cx/expr.rs#L568-L569

so, there aren't negative literals in all compiler stages before THIR (they're just a negation operator and a positive literal), but THIR and later stages there are explicitly negative literals.

Contributor

Except for patterns, which have negative literals from the HIR on and hopefully soon from the AST on

assert!(!item_fits_exactly_in_packaging(3));
assert!(item_fits_exactly_in_packaging(25));
assert!(!item_fits_exactly_in_packaging(999));
assert!(item_fits_exactly_in_packaging(1000));

Member

will this feature work if the user has written this?

assert!(item_fits_exactly_in_packaging(1000u32));

i guess no, because the u32 suffix forces the literal to have type u32 rather than any possible integer type.


and what about literals with const-folding

item_fits_exactly_in_packaging(123 | 456);
// should produce `NonZeroU32::new(123).unwrap() | NonZeroU32::new(456).unwrap()`

Author

Excellent questions! Applying such a coercion in these cases does seem to raise more problems than the benefit it may bring (if any). I will amend the RFC to clarify that the proposed behavior here is quite conservative (unless further discussion in here brings up reasons to make some bolder moves).

For the 1000u32 case, the proposed behavior would be that the coercion is applied only if a single integer type can be identified for the coercion:

  • If the literal is untyped (e.g. 123), the T must be unambiguously resolved from the target type.
  • If the literal is typed (e.g. 123u32), the T in the target type must either match or be inferred to the literal's type.

For the const-folding case, I believe we can have the behavior be conservative by having the coercion apply only to literals - if the source type is from an expression like 123 | 456 then we simply do not consider NonZero coercion as a valid candidate.

Member

if the source type is from an expression like 123 | 456 then we simply do not consider NonZero coercion as a valid candidate.

As a current-editions behaviour I totally agree, but one Future Possibility that might be interesting to consider would be allowing Rust to combine 123 | 456 into a 507 literal, and then apply the coercion to that.

That way you could write out whatever with literals and the compiler would just evaluate it like python does (using infinite precision if needed) then only converting back to a "real" rust type once it's given a specific type.

Member

That way you could write out whatever with literals and the compiler would just evaluate it like python does (using infinite precision if needed) then only converting back to a "real" rust type once it's given a specific type.

If we ever have literals convert to user-defined types, I would argue that Rust should not do operations to combine literals before converting to the user type, since that user type may not use standard arithmetic, e.g. if it is for carry-less arithmetic (or equivalently operations on $GF(2)[X]$), where 0b101 + 0b111 == 0b010 and 0b101 * 0b111 == 0b11011. So, if Rust tried to do arithmetic on literals before conversion, it would produce the wrong result.
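The figures above check out; a small sketch verifies them (`clmul` is an illustrative helper, not a std function):

```rust
// Carry-less multiplication over GF(2)[X]: shift-and-XOR instead of
// shift-and-add, so there are no carries between bit positions.
fn clmul(a: u32, b: u32) -> u32 {
    let mut acc = 0;
    for i in 0..32 {
        if (b >> i) & 1 == 1 {
            acc ^= a << i;
        }
    }
    acc
}

fn main() {
    // Carry-less "+" is XOR.
    assert_eq!(0b101 ^ 0b111, 0b010);
    // Carry-less "*" matches the figures quoted in the comment.
    assert_eq!(clmul(0b101, 0b111), 0b11011);
}
```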

Member

If we do support

let x: Gf2x = 0b101 + 0b111;

I'd say it would be surprising to some portion of users whether x == Gf2x(0b10) or x == Gf2x(0b1100).

The former is essentially how Rust currently infers the integer literal type, as evidenced in:

let x: i8 = 1000 - 999;
// error: literal out of range for `i8`, `#[deny(overflowing_literals)]` on by default
// linted on both "1000" and "999"
// suggesting we are treating this as `1000_i8 - 999_i8`.

Since we can't guarantee an operator being homomorphic for a custom type, indeed the conservative stance is best, in which we only allow conversion on $:literal (with an arbitrary number of parentheses).

The question is whether -$:literal should be allowed, because by the same argument the impl Neg for G may be non-standard, such that G(-x) != -G(x).

Member

The question is whether -$:literal should be allowed, because using the same argument the impl Neg for G may be non-standard such that G(-x) != -G(x)

Part of why the API I proposed has negation separated out is that Rust does treat negation specially:

pub const A: i8 = -0x80i8; // no error
pub const B: i8 = -(0x80i8); // no error, arguably a bug in rustc
pub const C: i8 = -{ 0x80i8 }; // error

so, if -MyType(x) isn't the same as MyType(-x) the conversion from literal to MyType can see that the literal is negated (because that's passed into the literal conversion rather than calling Neg) and do the appropriate value transformation.

# Prior art
[prior-art]: #prior-art

Exploration of other languages suggests that while refinement types like `NonZero` are common, they

Member

in terms of user-defined literals there is quite a lot of prior art not mentioned here, namely C++11 (of course C++ also has implicit constructors and exceptions, so they could have non_zero<uint32_t> v = 123; even without UDL.)

back to Rust, we could for instance introduce a const trait

pub trait /*const*/ FromIntLiteral: Sized {
    const fn from_int_literal(input: u128) -> Self;
}

impl<T: FromIntLiteral + ZeroablePrimitive> FromIntLiteral for NonZero<T> {
    const fn from_int_literal(input: u128) -> Self {
        Self::new(T::from_int_literal(input)).expect("input must be non-zero")
    }
}

and then using a non-suffixed integer literal would be equivalent to using FromIntLiteral:

assert!(item_fits_exactly_in_packaging(1000));
// equivalent to
assert!(item_fits_exactly_in_packaging(const { FromIntLiteral::from_int_literal(1000_u128) }));

const NEGATIVE_ONE: NonZero<i32> = -1;
// equivalent to (assuming `impl const Neg for NonZero<i32>`)
const NEGATIVE_ONE: NonZero<i32> = -const { FromIntLiteral::from_int_literal(1_u128) };
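A runnable (non-const) approximation of this sketch on today's stable Rust, with a concrete impl for u32 standing in for the unstable ZeroablePrimitive bound; the real proposal would make the trait and impls `const`:

```rust
use std::num::NonZero;

// Non-const stand-in for the proposed FromIntLiteral trait.
trait FromIntLiteral: Sized {
    fn from_int_literal(input: u128) -> Self;
}

impl FromIntLiteral for u32 {
    fn from_int_literal(input: u128) -> Self {
        u32::try_from(input).expect("literal out of range for u32")
    }
}

impl FromIntLiteral for NonZero<u32> {
    fn from_int_literal(input: u128) -> Self {
        NonZero::new(u32::from_int_literal(input)).expect("input must be non-zero")
    }
}

fn main() {
    // The compiler would insert this call for a non-suffixed literal.
    let n: NonZero<u32> = FromIntLiteral::from_int_literal(1000);
    assert_eq!(n.get(), 1000);
}
```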


That trait could potentially be made more general by passing the constant as a &[u8] in some specified endianness (probably little?) so it could represent numbers greater than 128 bits. It could also pass in whether the literal was negative as a separate argument.
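A sketch of that more general shape (all names hypothetical): the magnitude arrives as little-endian bytes and the sign as a separate flag, so values wider than 128 bits remain representable:

```rust
// Hypothetical generalized trait: magnitude as little-endian bytes,
// sign as a separate flag.
trait FromIntLiteralBytes: Sized {
    fn from_int_literal_bytes(negative: bool, le_bytes: &[u8]) -> Self;
}

impl FromIntLiteralBytes for i64 {
    fn from_int_literal_bytes(negative: bool, le_bytes: &[u8]) -> Self {
        // Sketch only: assumes the magnitude fits in 8 bytes.
        let mut buf = [0u8; 8];
        buf[..le_bytes.len()].copy_from_slice(le_bytes);
        let magnitude = i64::from_le_bytes(buf);
        if negative { -magnitude } else { magnitude }
    }
}

fn main() {
    // 300 = 0x012C, little-endian bytes [0x2C, 0x01].
    assert_eq!(i64::from_int_literal_bytes(false, &[0x2C, 0x01]), 300);
    assert_eq!(i64::from_int_literal_bytes(true, &[0x2C, 0x01]), -300);
}
```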

@ehuss added the T-lang label (Relevant to the language team, which will review and decide on the RFC) on Mar 7, 2025
@scottmcm
Member

scottmcm commented Mar 7, 2025

TBH what I really want for this is a const Trait-based approach that's extensible for everyone. I want someone writing a u256 type to be able to have an automatic-for-the-users conversion from literals too. And for let x: EvenU32 = 3; to trigger a panic! in const, resulting in a nice compile-time error message.

@pthariensflame
Contributor

TBH what I really want for this is a const Trait-based approach that's extensible for everyone. I want someone writing a u256 type to be able to have an automatic-for-the-users conversion from literals too. And for let x: EvenU32 = 3; to trigger a panic! in const, resulting in a nice compile-time error message.

Languages with prior art for this (albeit none restricting it to compile time) include Swift and Haskell (and some of its derivatives, like Agda). Julia also has a precedent but by different means (user-defined conversion and promotion semantics).

@programmerjake
Member

TBH what I really want for this is a const Trait-based approach that's extensible for everyone.

I think that would need the source text including the sign to be passed into the const Trait, so you can properly handle things like
let v: NonZero<i8> = -0x80; (only valid when you know it's negative) or
let v: u256 = 0x1_00000000_00000000_00000000_00000000; (too big for u128). it would be nice to allow it to handle custom suffixes too, e.g. 0x123_hdl_u12 which I'm using in fayalite to create a Expr<UInt<12>> literal.

maybe have an API like:

pub ~const trait FromIntLiteral<const SUFFIX: str>: Sized {
    ~const fn parse<E, F: ~const FnOnce(fmt::Arguments<'_>) -> E>(negative: bool, text: &str, parse_error: F) -> Result<Self, E>;
}
pub ~const trait FromFloatLiteral<const SUFFIX: str>: Sized {
    ~const fn parse<E, F: ~const FnOnce(fmt::Arguments<'_>) -> E>(negative: bool, text: &str, parse_error: F) -> Result<Self, E>;
}
// maybe have `FromStringLiteral` too?

that way you can try to parse things at runtime without panicking on parse failure, and you can just pass in a function that panics at compile time.

example:

pub struct UInt<const N: usize>(...);
pub struct SInt<const N: usize>(...);
pub struct Expr<T>(...);

impl ~const FromIntLiteral<*"hdl_u12"> for Expr<UInt<12>> {
    ...
}

impl ~const FromIntLiteral<*"hdl_i5"> for Expr<SInt<5>> {
    ...
}

pub fn f() {
    let a = 123_hdl_u12; // has type Expr<UInt<12>>
    let b = -0x10_hdl_i5; // has type Expr<SInt<5>>
}

@clarfonthey

clarfonthey commented Mar 8, 2025

Definitely like the idea of a fully generic solution, although it feels like we're still a ways off from being able to accomplish that.

That said, I think that there's a clear issue with literal suffixes here, and that's that a lot of people (myself included) use them to remove type ambiguity, but this change would effectively break that. Now, for example, 1u32 could be either u32 or NonZero<u32> and it's not possible to explicitly clarify which one you want, even though we know that 0u32 is unambiguously u32.

I'm not 100% sure if this is a particularly big issue (especially since it would probably default to the specified suffix type if it's ambiguous, rather than choosing NonZero) but it feels like this is a main blocker to any proposal of this kind being merged. Specifically, it should decide whether NonZero is always coercible from literals of their respective types, or if there should be dedicated nz* suffixes (like nzu32) to explicitly clarify that. I don't like the idea of the nz* suffixes, but it feels like that should be figured out as part of this RFC.

Also: it's worth saying that since NonZero::new and Option::unwrap are now both const-stable, you can always do const { NonZero::new(x).unwrap() } to get nonzero constants, even though it's substantially more verbose than what's proposed. So, it's not about being able to have these constants as much as it's making it easier to make them. And I will admit to doing stuff like const ONE: NonZero<u32> = NonZero::new(1).unwrap(); because it's obnoxious to type.
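Concretely, the stable workaround reads as follows (const blocks are stable since Rust 1.79 and `Option::unwrap` is const-stable since 1.83; the check runs during const evaluation, so a zero literal fails the build):

```rust
use std::num::NonZero;

fn main() {
    // `const { NonZero::new(0).unwrap() }` would be a build error,
    // not a runtime panic.
    let n: NonZero<u32> = const { NonZero::new(1234).unwrap() };
    assert_eq!(n.get(), 1234);
}
```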

@neeko-cat

I love this proposal! I was surprised when I was learning Rust and discovered this didn't exist.

This means that tests and examples are much more noisy than real-world usage for an API that uses NonZero, giving a false impression of API complexity and discouraging API authors from using NonZero despite its advantages.

This has been my exact thought process multiple times, I usually code my own nonzero! macro for my programs, so if this RFC is approved (even if you go down the macro path) it will improve my "user experience" greatly.

@scottmcm
Member

scottmcm commented Mar 10, 2025

On suffixes: I think we shouldn't add any more of those, but should instead find a way to make an ascription syntax that people can be happy with.

While it's true that we had to get rid of x:i32 because of the typo-proneness, that problem doesn't come up the same way with literals since 3::foo isn't ever a valid path. So, spitballing, we could say that 5:MyOddU24 is legal (giving that : extremely high precedence) and then we never need to add more suffixes to tokens ever again.

(AKA the compiler only calls the hypothetical FromIntegerLiteral trait for things without suffixes. The suffixes that exist remain for back-compat and always give the builtin types. So 3_u32 would in effect -- if perhaps not in implementation -- get translated to 3:u32.)

@sandersaares
Author

sandersaares commented Mar 10, 2025

Based on initial comments received, I have adjusted the RFC text to remove coercion from constant variables (limiting proposed behavior to only literals), and to clarify some open topics the initial draft failed to address, defaulting to relatively conservative viewpoints to start from until/unless the discussion suggests more ambitious moves are necessary:

  • For suffixed literals like 0u32 the coercion would only apply if the T in NonZero<T> is inferred or explicitly the same as the type in the literal suffix.
  • The coercion only applies to literals, so expressions like 123 | 456 would not be coerced to NonZero (and neither would the literals within). In other words, behavior in case of expressions (even if const-folded) is unchanged and trying to use 123 | 456 as a value for a NonZero would be an error, as it is today.

This still seems to satisfy the main mission of making tests and examples more readable, while avoiding complexity in corner cases and avoiding new syntax/suffixes.

@sandersaares
Author

The "only literals, not expressions" logic has some interaction with rust-lang/compiler-team#835 in that if literals do not have signs, this coercion would be limited to positive values only, which would be imperfect (though still valuable).

@programmerjake
Member

programmerjake commented Mar 11, 2025

The "only literals, not expressions" logic has some interaction with rust-lang/compiler-team#835 in that if literals do not have signs, this coercion would be limited to positive values only, which would be imperfect (though still valuable).

the API I proposed in a thread above basically treats literals as a signed literal where conversion from a literal to a user-defined type has separate inputs for the sign and for the literal's text. basically, if negation does something weird, the conversion function knows the input was negated (even if it's -0), so can properly apply whatever weird thing you want negation to do.

@coolreader18

I agree with @clarfonthey wrt literal suffixes and explicitness - in the "alternatives" section about suffixes, I feel like the theoretical allowedness is a bit the wrong way round, and I'd rather see:

takes_nz(1); // ok
takes_nz(1_u32); // error: u32 always means u32
takes_nz(1_u32_nz); // ok

Having a new int literal coercion that doesn't have an explicit suffix associated seems like it could lead to some annoying inconsistencies, e.g. a macro that takes a $x:literal that it expects to have a suffix to determine type inference, though that might be pretty niche.

@matthieu-m

On suffixes: I think we shouldn't add any more of those, but should instead find a way to make an ascription syntax that people can be happy with.

While it's true that we had to get rid of x:i32 because of the typo-proneness, that problem doesn't come up the same way with literals since 3::foo isn't ever a valid path. So, spitballing, we could say that 5:MyOddU24 is legal (giving that : extremely high precedence) and then we never need to add more suffixes to tokens ever again.

I think the type ascription approach is really powerful, though I would argue that implicit coercion should perhaps still be on the table, regardless, as type ascription for every argument will still be quite verbose.

That is, if we imagine a FromIntegerLiteral trait, implemented for NonZero<T>, compare:

explicit(1:NonZero, 2:NonZero, 4:NonZero, 5:NonZero);

implicit(1, 2, 4, 5);

The literals are drowned out in the first case! Hence, much like type suffixes are typically rare in code thanks to type inference, I think there's a strong argument for coercion here so long as type inference has figured out the argument types.

Note: and yes, syntax-wise, I do find type ascription preferable to suffixes too; using suffixed literals in C++ is painful as the conversion operators need to be imported separately, and sometimes clash when you try to import a bundle of them... it's also not always clear which operator creates which type.


I would guess we'd need the full family, then: From[Integer|Float|Str|Bytes]Literal.

And in terms of signature, I think it could be interesting to support "raw" literals in them. That is, even for the Integer|Float case, passing the token string, rather than the pre-parsed integral:

  • It's easy enough for the implementer to call .parse() and deal with the output.
  • It's impossible for the implementer of the trait for a BigNum, or a Decimal type, to use the built-in types. The former suffers from even i128 and u128 being too short, the latter suffers from f64 having already lost the precision.
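The 128-bit ceiling is easy to demonstrate; the token below is 2^128, one more than u128::MAX:

```rust
fn main() {
    // A BigNum impl handed a pre-parsed u128 could never receive this
    // value; an impl handed the token string could parse it itself.
    let token = "340282366920938463463374607431768211456"; // 2^128
    assert!(token.parse::<u128>().is_err());
}
```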

@JarredAllen

+1 on the eventual FromIntegerLiteral const trait that lets you make arbitrary types from integer literals by parsing a token string. But (as an outsider who isn't in the loop on a lot of things) it looks to me like that would be a long way off, while this is something that could be done concretely now to improve things, and it would be backwards-compatible with the future implementation (assuming you allow implicit type inference when the exact type can be inferred; except maybe not allowing 5_u32 to be inferred as NonZero<u32>, which feels weird to me). So I'd love to see this happen as a quick-and-easy improvement (at least, I hope it would be quick and easy) while the necessary const trait infrastructure and such takes time to mature.

@leonardo-m

Some way of using number literals to safely create such types (like NonZero, BigInts, and so on) could be good to have, as long as it's simple to use, simple to understand, and safe. I even opened an ER that asks for a little macro that avoids the unwrap for compile-time NonZero values. But some people are working on introducing pattern types in Rust, like "u8 is 1 ..", which probably will never really deprecate the NonZero std lib types; still, I think the design of everything in this issue should be seen in the wider context of adding pattern type literals too.

@AaronKutch

I was skeptical of the special casing in the RFC as written, even though I run into this everywhere (not just in tests, which the RFC repeatedly emphasizes; if it were just tests I would just use macros and short functions like my awint::bw function). However, after reading the literals-only const trait suggestions in the comments, I am 110% on board with this. It would be a huge deal for my const-capable bigint crate https://crates.io/crates/awint and some of my other crates and strategies. If it is general enough, I could write literals like -246_i100 directly and have them converted to my InlAwi type instead of always needing to write inlawi!(-246_i100). I really want such a trait to handle my arbitrary fixed-point syntaxes like -0x1234.5678_p-3_i32_f16 (https://docs.rs/awint/0.18.0/awint/struct.ExtAwi.html#impl-FromStr-for-ExtAwi). Preferably, before this happens we would fix the issue where hex float literals can't be parsed at all, and of course we need those const traits. I would be fine with a forwards-compatible feature that only works with NonZero<T>, but we must make sure there aren't any weird edge cases if it were later rewritten to use the future trait.

@ahicks92

I like the RFC and would find benefit from it.

I don't like most of the taking it further discussions. Consider:

// Doesn't matter what this does, just that `+` isn't normal addition.
const WEIRD: MyWeirdInt = 123 + 456;

Versus:

const INTERMEDIATE: u32 = 123 + 456;
const WEIRD: MyWeirdInt = INTERMEDIATE;

Now to be clear I recognize that everyone is only discussing literals right now. But the refactor is something one would expect to work. I would argue against Rust having custom implicit conversions. But if it did have custom implicit conversions they should really work everywhere and the obvious refactors shouldn't change the meaning of things.

I don't believe this problem comes up often enough, but if someone convinced me it did, I would instead propose raising or re-raising custom string suffixes, e.g. "123"mytype or somesuch (which isn't an argument against this RFC; it's fine to have the occasional special thing in stdlib imo). Such an approach at least makes it clear that there is magic, and it is easy to explain that it desugars to e.g. mytype::new("value") or some callout to a macro.

I see other problems with a generalized approach, but they're off topic.

@programmerjake
Member

Versus:

const INTERMEDIATE: u32 = 123 + 456;
const WEIRD: MyWeirdInt = INTERMEDIATE;

Now to be clear I recognize that everyone is only discussing literals right now. But the refactor is something one would expect to work.

well, imo it's something one would expect not to work; if you have type MyWeirdInt = u64 it doesn't work in Rust now.

I would argue against Rust having custom implicit conversions.

I agree custom implicit conversions are generally bad. 1

The way I see it, literals can be one of many types, and the actual type they have is deduced from how they're used. 123 + 456 could very well be a u32 or an i8, but only one of those: if they're deduced to be i8, they can't also be a u32 at the same time. So each literal (after macro expansion and type deduction) has exactly one type, and it behaves like that type, no other. All we're doing is expanding the set of possible types literals can end up being. The FromIntegerLiteral trait is used only so Rust can figure out what to do for a literal of whatever type the literal deduces to -- so FromIntegerLiteral is a conversion, but only in the sense that writing 1234u32 is a conversion by the compiler from the string "1234" to the type u32.

Footnotes

  1. though there could be exceptions -- e.g. custom &T -> &DynTraitButForCPlusPlus.

@kennytm
Member

kennytm commented Mar 17, 2025

I don't like most of the taking it further discussions. Consider:

// Doesn't matter what this does, just that `+` isn't normal addition.
const WEIRD: MyWeirdInt = 123 + 456;

Versus:

const INTERMEDIATE: u32 = 123 + 456;
const WEIRD: MyWeirdInt = INTERMEDIATE;

I think the consensus is that neither is going to work. In the first case you need an explicit conversion to make it clear whether you want u32::add or MyWeirdInt::add.

// assuming `MyWeirdInt: const Add`.
const WEIRD: MyWeirdInt = (123 as MyWeirdInt) + (456 as MyWeirdInt);

@ahicks92

@programmerjake
They are obviously different at a close look if you know the semantics. I'm not disputing that.

But I am not a fan of it anyway. Different code should be clearly different. It should be easy to spot NFC.

Literals are sort of a weird any-type case (I don't actually know how Rust formalizes this) but + always means the same thing today even across the boundary of const vs non-const. In "weird" cases the chain at least always obviously starts with something "weird".

The reason I like this RFC anyway is all the times I've avoided NonZero just because it makes it annoying to fill out your nice fancy config struct or the like. But I think that any future direction to generalize the mechanism, either toward evaluating expressions or toward supporting user-defined types, is dead on arrival unless someone is very, very clever. So from my perspective the open question is whether this special case always being special is worth it. I genuinely don't know how I'd answer that question. The RFC itself acknowledges easy alternatives and makes me feel silly for not just writing const fn nz(x: u32) -> NonZero<u32> { NonZero::new(x).unwrap() } somewhere in my code, to be honest. I'm a good enough coder that I should have found that on my own.

@sandersaares
Author

Updated RFC text to address a few points raised in recent weeks:

  • Clarified that the coercion does apply for unary negation expressions (e.g. -123 is a valid candidate for coercion).
  • Adjusted suffix handling to only permit the coercion if there is no suffix (e.g. 123u32 already has a type and does not get coerced to NonZero<u32>).

@oli-obk
Contributor

oli-obk commented Mar 31, 2025

Is your intention that the following would be legal?

let x = 5;
let y: NonZeroU8 = x; 

Because that's an order of magnitude more impl work than all the other examples shown in this RFC.

@sandersaares
Author

sandersaares commented Mar 31, 2025

If "you" refers to me, then no: the RFC is intended to be scoped only to (optionally negated) integer literals, so this case would need to be written as let y: NonZeroU8 = 5 for the coercion to apply.

Is there something that suggests otherwise? Perhaps I can clarify the text accordingly.

@oli-obk
Contributor

oli-obk commented Mar 31, 2025

Cool. Yeah, maybe add it to the future possibilities section, and add an example to the reference-level section stating that it's explicitly not supported.

@programmerjake
Member

programmerjake commented Mar 31, 2025

Is your intention that the following would be legal?

let x = 5;
let y: NonZeroU8 = x; 

I'd expect that to be legal...both x and y should have type NonZeroU8.

Because that's an order of magnitude more impl work than all the other examples shown in this RFC.

why would that be more work? wouldn't the type deduction semantics be similar to:

let x = lit.into();
let y: NonZeroU8 = x;

@oli-obk
Contributor

oli-obk commented Mar 31, 2025

wouldn't the type deduction semantics be similar to:

We'd need to either go the route where we allow this for all types that implement specific traits, or add more magic to inference. Just using the type hint within a single expression is 15 LOC in a single location, with no new traits or type system logic.

Same reason that

    let x = 62 as char;

compiles, but

    let x = 62;
    let y = x as char;

does not.

@programmerjake
Member

programmerjake commented Mar 31, 2025

wouldn't the type deduction semantics be similar to:

We'd need to either go the route where we allow this for all types that implement specific traits or add more magic to Inference. Just using the type hint within a single expression is 15 LOC in a single location with no new traits or type system logic

well, I think most of the benefit is when using APIs that have arguments or fields that are NonZero, where you don't want to need a separate let with a type annotation just so the literal can be given the new type...so I've been expecting we'd want to change it to have an unstable trait Literal that has impls for all integer types, with this RFC just adding impl Literal for NonZero, or something 100% equivalent to that.

if you want to restrict literals so that type deduction for NonZero is less powerful than for primitive integer types, such that it can't work through two lets, then I think we should not accept that, as it makes the language too inconsistent and special-cased.

so, since this works:

    let a = 45;
    let b: u8 = a;

I'd expect this to work too, since the literal's type is essentially a type variable:

    let a = 45;
    let b: NonZeroU8 = a;

@programmerjake
Member

programmerjake commented Mar 31, 2025

let x = 62 as char;

tbh i'm surprised that compiles...

@oli-obk
Contributor

oli-obk commented Mar 31, 2025

Just want to make sure we know what is being proposed, because what @programmerjake is asking for needs a lot more reference-level explanation and some guide-level explanation.

@oli-obk
Contributor

oli-obk commented Mar 31, 2025

since this works:

    let a = 45;
    let b: u8 = a;

That works because integer literals not matching their type's range is just a lint.

@programmerjake
Member

since this works:

    let a = 45;
    let b: u8 = a;

That works because integer literals not matching their type's range is just a lint.

Why would lints matter here? All of a, b, and the literal 45 have (deduced) type u8 here... this makes me think there's a misunderstanding about something...

@oli-obk
Contributor

oli-obk commented Mar 31, 2025

let a = 45; gives a and the 45 an inference type, not type u8. After the fact, the let b: u8 = a; turns that inference type into a concrete u8, but any processing of the literal has already happened. This is obviously just an implementation detail, but literals are so common that changing this is both:

  • likely to cause inference changes in other situations and,
  • likely to cause compile-time performance regressions

@MetaflameDragon

I'm coming in as an outsider in terms of Rust RFCs (though I've explored Rust in a fair amount of depth), but I still wanted to share some loose thoughts I had on this topic, also regarding pattern types.

NonZero<T> is a type for one specific invariant (not zero), but I saw some parallels to more general invariant constraints expressed via traits like Send, Sync, Freeze/NoCell, and so on. The idea being, constant/compile-time-known numeric values could implement particular traits as part of their type, such as NonZero (a new trait in this case), PositiveValue/NegativeValue, etc. This set of (auto-)traits should be small and each should be trivial to compute (the set of T's for Divisible<const T> isn't reasonable to infer automatically). For numbers, it still feels like a general enough use case to warrant some auto traits with deeper integration.

#3786 (comment)

That trait could potentially be made more general by passing the constant as a &[u8] in some specified endianness [...]. It could also pass in whether it was negative with a separate argument.

A comment higher up also seems to refer to more general compile-time invariants like that.


Unfortunately, this could lead to every number becoming a separate type, and risks overcomplicating compile-time invariants and safety, making working with numbers "too perfect". In any case, NonZero<T> is useful enough, but I feel like the rest of this is a task better suited to pattern types. Those provide more natural syntax, have greater flexibility, and would also work better for subtyping/variance. (Pattern types were also brought up in another comment higher up, as well as in prior art in the RFC.)

I think that pattern types could also solve the nz suffix issue better. Requiring nz for non-zero numbers feels very impractical, since every non-zero literal is, well... non-zero, proven by the method of "just look at the number". Type inference for generics would require this clarification, since NonZero<T> is not a subtype of its T, whereas ..=-1 | 1.. is a subtype of ... With pattern types, a constant 1 would have a pattern type of 1, making it naturally compatible with ..=-1 | 1.., 1.., and ... However... I think that would also require specialization at that point, but since specialization would require impls to be non-overlapping (and thus unambiguous), it would make the nz suffix unnecessary for type inference.

More discussion on this matter would probably be off-topic for NonZero<T> at this point, but I still think it's worth highlighting pattern types more, since they could sidestep many issues that a separate invariant type would cause.

@ahicks92

ahicks92 commented Apr 6, 2025

@MetaflameDragon
The concrete difference between NonZero and every other trait you mention here is that all other invariants are either not in the const system (e.g. Send, Sync) and/or derived (Copy, Clone, maybe Freeze but not sure what the status is). They also don't change type layouts and mostly don't ever exist in memory/at runtime (Clone is a special case and not really in the bucket of type properties, but in my experience everyone thinks of it that way). Those properties "belong" to the type.

NonZero is the only one that is a struct, and broadly speaking it is a normal struct with methods that prevent you from ending up with a 0 inside it. This is about type layout. For example, you can put NonZero in repr(C) structs, use it as a function parameter, etc., and enums like Option will "steal" the bits. You can, for example, combine this with C-layout structs (but probably shouldn't without wrapping it in Option). That is, NonZero represents two things: (1) the property of not being zero, and (2) the position of nonzero bits in aggregates and the ABI.

Also, as an aside, the only special thing about NonZero today that you can't get yourself is that the compiler understands it can steal those bits; you can perfectly well write a restricted-integer crate otherwise, you just don't get that one specific advantage. I don't think that a major type system extension really solves anything unless it's done specifically for better niche optimizations. With the stabilization of const traits you can just:

trait Limiter {
    fn min() -> u32;
    fn max() -> u32;
}

Or similar, and with a bit of macro magic just spit them out. The real thing would be more complicated and involve various generics, but you'd get errors of the quality you'd expect and reach the limits of what a built-in solution could do pretty easily. Honestly, I can even think of a way to make number: Limited<u64, 1, 5> work, I think, though I'm not going to try.

@MetaflameDragon

@ahicks92 Good point about the influence on the ABI/layout, I didn't think of that when comparing it to the other (auto-)traits.

I guess I'd hope that niche optimizations would be possible (even to a greater extent) for pattern types, since they define things like "gaps" for Option (the "bits to steal") as part of their type (item count or covered range vs. memory representation size).

@ahicks92

@MetaflameDragon
Niches in pattern types are something Rust could probably implement. But there's a lot there that has to be considered, and then of course teaching it. For instance, what is the type of the product of two pattern types? When are they able to combine under a mathematical operator? How do you get a non-pattern value into a pattern type, since that now means adding some sort of fallibility to a built-in type? I'm not asking you to answer these questions for me; I'm pointing out the questions you'd have to answer for everyone, probably in an RFC.

I could be convinced that they're a great idea (but, to be clear, I have no decision-making power). But not here. This RFC is a nice minimal change: a bit magic, but easy to explain and useful in the cases where NonZero ergonomics hurt.

In practice though I'm pretty sure that Rust can't use a lot of niches like that because when you have enums with multiple data-carrying variants, I'd think the niches have to line up to leave the same hole and they don't. That optimization has changed over the years and I'm not on top of it, e.g. I think maybe there's handling for Option<Option<T>> these days, but the layout of a struct can't depend on what possible enums exist so moving fields around to overlap niches is probably infeasible. Arguably you could emit code at runtime to unpack special shifted structs, but that's complex and very much not zero cost. Even if it did work though, adding a field would change whether the optimization applied or not, and let's not even mention generics.

@sandersaares
Author

sandersaares commented Apr 11, 2025

Added counterexample to RFC text to clarify that the proposed behavior is limited to only (optionally negated) literals:

fn foo(count: NonZero<i8>) { }

let i = 123;
foo(i); // Error - the coercion only applies to literals and `i` is not a literal.

Labels
T-lang Relevant to the language team, which will review and decide on the RFC.