
Passkeys (experimental) #4234

Draft · tonkku107 wants to merge 7 commits into main
Conversation

tonkku107 (Contributor)

Adds support for passkeys behind an experimental config option.

Includes:

  • Adding, removing and renaming passkeys in the account management frontend
  • Signing in with a passkey

Does not include (will probably have to be separate PRs later):

  • Ability to remove the password to go completely passwordless
  • Registering with a passkey instead of a password
  • Creating a new passkey during recovery instead of a password

Can be reviewed commit by commit.
The frontend is hacked together from existing components, pending designs.

tonkku107 (Contributor, Author) commented Mar 17, 2025

I was expecting webauthn-rs's dependence on openssl (3) to be a problem, but I wasn't expecting there to be a CI test for it 😁. Maybe webauthn_rp is a better alternative in this case. The library has been swapped.

When I run the frontend tests locally, they only render error pages and I don't know why. I'd appreciate some pointers as to what I'm missing.

Either way, in the meantime @sandhose, could we request the designs?

zacknewman commented Mar 22, 2025

@tonkku107, I only skimmed the code, but it appears you plan to store a 16-byte identifier for the challenges. I don't know the memory requirements, but RegistrationServerState and AuthenticationServerState are "only" 48 and 64 bytes¹ in size respectively when using passkeys, which may still be small enough that you can avoid storing them in the database entirely. The challenge they contain is based on a randomly generated u128, which means it has the same amount of "entropy" as the 16-byte ID you plan on using. I realize that is still 3 or 4 times the size of a 16-byte ID, but it's something you may want to consider.

¹ At least on x86_64-unknown-linux-gnu platforms when compiled with rustc 1.85.1 using the release profile.

CLAassistant commented Mar 23, 2025

CLA assistant check
All committers have signed the CLA.

zacknewman commented Mar 26, 2025

Some quick comments pertaining to webauthn_rp:

  • As stated privately, 0.3.0 adds Decode impls for CredentialId<&[u8]> and UserHandle<&[u8]>, so some of this can be simplified.
  • In case you still need to convert from something like CredentialId<Vec<u8>> to CredentialId<&[u8]>, you may be able to write something like (&cred_id).into(), pending type inference. Still not the most ergonomic, but much more concise than CredentialId::<&[u8]>::from(&cred_id).
  • When serde_relaxed is enabled, RegistrationVerificationOptions::client_data_json_relaxed and AuthenticationVerificationOptions::client_data_json_relaxed are true when relying on the Default impl. The idea being that since you already opted into "relaxed" JSON deserialization, it makes sense for this to default to true.
  • 0.3.0 adds associated functions like Registration::from_json_relaxed; thus serde_json::from_str::<RegistrationRelaxed>(json).map(|reg| reg.0) could be written as Registration::from_json_relaxed(json) (see the sketch after this list).
  • AuthTransports implements Encode and Decode, but I'm guessing you know this and are intentionally storing the JSON array of strings instead because you don't want to deal with u8 being converted to i8, since many databases don't support unsigned integers. Obviously it's faster and more concise to use the u8. Don't be "fooled" by AuthTransports' API: while it "behaves" like a collection (e.g., Vec), it's actually very efficient and is internally based on a u8 "flag".
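
A minimal sketch tying those points together; the import paths and from_json_relaxed's error type are assumptions, not the library's documented API:

```rust
// Import paths here are guesses; adjust to wherever webauthn_rp actually
// exposes these items.
use webauthn_rp::response::register::{Registration, RegistrationRelaxed};
use webauthn_rp::response::CredentialId;

// 0.3.0 shorthand for
//     serde_json::from_str::<RegistrationRelaxed>(json).map(|reg| reg.0);
// the error type is assumed to be serde_json's.
fn parse_registration(json: &str) -> Result<Registration, serde_json::Error> {
    Registration::from_json_relaxed(json)
}

// Borrowing an owned credential ID. Equivalent to
// CredentialId::<&[u8]>::from(cred_id), but leaning on type inference.
fn borrow_id(cred_id: &CredentialId<Vec<u8>>) -> CredentialId<&[u8]> {
    cred_id.into()
}
```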

Longer comment:

Origin validation is "complex" in that it's not really possible to support all cases "out of the box". To fully empower downstream code, AuthenticationServerState::verify and RegistrationServerState::verify are polymorphic, so downstream code can use its own custom type to perform this validation so long as the type implements PartialEq<Origin<'_>>. In fact, the type used for origin validation need not be the same as the type used for top-origin validation (admittedly, it's likely extremely rare to want to validate the top origin differently than the origin). For strict validation, one should just use &str/String, since slices of &str/String are checked to match the Origin<'_> exactly. For more complex matching, DomainOrigin<'_, '_> may be "good enough". If it is not flexible enough for your needs, you will have to implement your own type that does what you want.

DomainOrigin<'_, '_> first parses Origin<'_> via DomainOrigin::try_from, which, among other things, requires Origin<'_> to have either no scheme or a non-empty scheme, and either no TCP/UDP port or a "canonical" TCP/UDP port. Upon successful parsing, case-sensitive matching is performed (e.g., the host example.com is not the same as Example.com).

A few examples that will fail origin validation when DomainOrigin { scheme: Scheme::Any, host: "localhost", port: Port::Any, } is used:

  • "://localhost": while scheme is allowed to not exist, if one exists, it must be non-empty; thus DomainOrigin::try_from will fail.
  • "https://Localhost": "localhost" doesn't match exactly with "Localhost".
  • "https://localhost:0443": 0443 has a leading 0; thus DomainOrigin::try_from will fail.

Examples that will succeed:

  • "localhost": scheme and port are allowed to not exist.
  • "<U+0000>://localhost:0" where <U+0000> is NUL: any non-empty sequence of Unicode scalar values is a valid scheme and any 16-bit unsigned integer in "canonical" form is allowed.

In summary, if this is too relaxed, too strict, or both for your needs, then you have no choice but to define your own type that parses Origin<'_> in its PartialEq<Origin<'_>> impl and does whatever you want; a sketch of such a type follows.
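
A hedged sketch of one such custom type, assuming String: PartialEq<Origin<'_>> holds (as the strict-validation case above implies); the import path is a guess and the matcher itself is hypothetical:

```rust
use webauthn_rp::request::Origin; // path is a guess

// Hypothetical allow-list matcher: accepts exactly the configured origins.
struct AllowedOrigins(Vec<String>);

impl<'a> PartialEq<Origin<'a>> for AllowedOrigins {
    fn eq(&self, origin: &Origin<'a>) -> bool {
        // Delegate to the exact-match comparison the library already
        // provides for String, accepting any origin on the allow-list.
        self.0.iter().any(|allowed| allowed == origin)
    }
}
```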

tonkku107 (Contributor, Author) commented Mar 26, 2025

may still be small enough that you can avoid storing them in the database entirely

Storing in memory won't be an option since multiple instances can be run at once.

The challenge they contain is based on a randomly-generated u128 which means they have the same amount of "entropy" as the 16-byte ID you plan on using.

I've added a Ulid to be consistent with the rest of this project. There's actually only 10 bytes of entropy within the ID, as Ulids reserve 6 bytes for a timestamp.
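
For reference, a minimal sketch of that layout using the ulid crate:

```rust
use ulid::Ulid;

fn main() {
    // A Ulid packs a 48-bit millisecond timestamp followed by 80 random
    // bits into 16 bytes, hence only 10 bytes of entropy.
    let id = Ulid::new();
    let bytes: [u8; 16] = id.to_bytes();
    println!("bytes:          {bytes:?}");
    println!("timestamp (ms): {}", id.timestamp_ms()); // 6-byte portion
    println!("randomness:     {}", id.random()); // 10-byte portion
}
```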

you may be able to write something like (&cred_id).into()

Yeah, I know. I wasn't a big fan of that and ultimately moved it out to be more explicit about the weird type stuff going on.

but I'm guessing you know this and you're intentionally storing the JSON array of strings instead

I used JSON or strings wherever I could, but if database efficiency is a concern to the maintainers (which I doubt, given my 60 GB Synapse DB full of JSON strings), that could be changed.

A few examples that will fail origin validation

Can you even get any of those examples out of a browser? 😄
The point there is just to have more lax verification for development purposes anyway. The real concerns about being too strict or not strict enough would come from the case where the host of public_base is fed into DomainOrigin::new. IMO, it's just right for most deployments, but some additional config options might be necessary for some people's needs.
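
Spelled out, the lax development matcher quoted earlier looks roughly like this (field names come verbatim from the example above; the import path and DomainOrigin::new's signature are assumptions):

```rust
use webauthn_rp::request::{DomainOrigin, Port, Scheme}; // paths are guesses

fn main() {
    // Lax matcher for development: any scheme, any canonical port,
    // host fixed to "localhost".
    let dev_matcher = DomainOrigin {
        scheme: Scheme::Any,
        host: "localhost",
        port: Port::Any,
    };
    let _ = dev_matcher;

    // In production, the host of public_base would be fed into
    // DomainOrigin::new instead (signature assumed):
    // let matcher = DomainOrigin::new(public_base_host);
}
```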

zacknewman commented Mar 26, 2025

Can you even get any of those examples out of a browser? 😄 The point there is just to have more lax verification for development purposes anyway. The real concerns about being too strict or not strict enough would come from the case where the host of public_base is fed into DomainOrigin::new. IMO, it's just right for most deployments, but some additional config options might be necessary for some people's needs.

The code is probably OK. My comments were isolated to webauthn_rp, without any regard for the rest of the code or clients. The pedant in me just wanted to point out technicalities that most "normal" developers could ignore. I'm a "weirdo" who obsesses over this stuff, though. For example, maybe I have my own modified version of Firefox that does allow such an origin. That's probably not a concern for this code, though.

Since we're here, there is one more point I think is important to make, and it relates to the CBOR parsing that happens internally. I strictly adhere to the CTAP2 canonical CBOR encoding form. For the most part this is fine and is technically required by the spec; however, there have been real-world cases where implementations violate it. For example, there was a time when Firefox was not correctly encoding the kty value. Some libraries allow such incorrect encodings and others don't (e.g., I believe webauthn_rs rejects it).

I think most browsers encode it correctly now, so I haven't experienced issues in my tests. It's something you may want to be aware of, though. Perhaps the bigger problem is that the CBOR decoding done in webauthn_rp intentionally supports only the extensions it "knows about". For example, if you attempt to use the largeBlob extension, a failure is guaranteed even if you implement your own JSON deserialization. That extension is not only a client extension but also an authenticator extension, which means the CBOR parsing that takes place when RegistrationServerState::verify or AuthenticationServerState::verify is called will fail. Furthermore, you can't just "scrub it out" either, since signature verification would then fail. Honestly, "scrubbing the data" would be a PITA, and you may want to implement your own library at that point.

zacknewman commented Mar 26, 2025

I used JSON or strings wherever I could, but if database efficiency is a concern to the maintainers (which I doubt given my 60GB synapse DB full of JSON strings), that could be changed.

Well, that isn't true, since you do use CredentialId::encode and UserHandle::encode. Both implement Serialize and Deserialize, so you could change the code to store JSON strings. Additionally, you don't seem to care about MetadataOwned::decode, so you could also call Metadata::into_json instead of Metadata::encode.

tonkku107 (Contributor, Author)

Well that is not true since you do use CredentialId::encode and UserHandle::encode.

I use serde_json::to_string and serde_json::from_str for CredentialIds. The user handles I constructed from the Ulid's bytes.
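
That is, something like this round-trip, leaning on the Serialize/Deserialize impls mentioned above (the import path is a guess):

```rust
use webauthn_rp::response::CredentialId; // path is a guess

// Store a credential ID as a JSON string and read it back, in place of
// CredentialId::encode / Decode.
fn roundtrip(
    cred_id: &CredentialId<Vec<u8>>,
) -> Result<CredentialId<Vec<u8>>, serde_json::Error> {
    let json = serde_json::to_string(cred_id)?;
    serde_json::from_str(&json)
}
```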

Additionally, you don't seem to care about MetadataOwned::decode; thus you could also call Metadata::into_json instead of Metadata::encode.

I did not notice that, as I was looking for the serialization impls. The metadata is quite useless anyway.
