Add section on Ecosystem Compatibility. #1203
I appreciate the need to distinguish mDL and other tokens as "digital credentials" from "verifiable credentials".
I think addressing this issue more directly would be better.
Maybe something like "the term verifiable credentials is often synonymous with the term digital credentials, however our specification only uses the term verifiable credential to refer to the JSON-LD based data model we define here...."
The word "digital" is kinda glossing over the key distinction between credential formats, which is indeed media types... and W3C only recognizes vc+ld+json / vc+ld+jwt, vc+ld+cose as media types for W3C Verifiable Credentials (with or without any proof).
The rest of our text basically says this, and the context references make it clear that W3C only defines verifiable credentials in terms of JSON-LD... But other organizations and the industry as a whole does not necessarily agree....
For example, I disagree that the term "verifiable credential" only applies to JSON-LD....
But I do agree that "vc+ld+json" is a standard serialization of a digital credential that requires JSON-LD, and is specified by W3C.
I remain in favor of and excited for other media types built on top of `vc+` that support other securing or data model considerations... and that DO NOT require ANY RDF or JSON-LD, or mappings.

VCDM v1 included support for things that were not the JSON-LD data model: the now infamous broken JWT and CL signature formats we have since corrected in v2.
This correction was accomplished in 2 ways:
In my opinion, this PR rewrites a critical part of the day 3 resolution, because it requires mappings, or externally defined media types, which was a major problem in v1 and v1.1...
And which we never got consensus to do, see:
The day 3 resolution specifically stated that mappings are ONLY required for media types defined by this working group... but this text asserts that mappings are required by any external specification to conform to this working group's data model... and that a "mapping" is the key to creating a "verifiable credential"....
Let me be crystal clear: "a mapping is not needed to create a verifiable credential" or "any other digital credential"...
A mapping is required to produce `vc+ld+json` from any other well-defined media type... It might seem like these are the same things, but they are not.
One of them is precise and easy to achieve; the other is marketing grandstanding that attempts to place W3C and JSON-LD at the center of the digital credentials universe, and claims the exclusive right to refer to "verifiable credentials" as being "JSON-LD" things.... But you can see from the media type that we requested registration for that this is not the case... because `application/vc+ld+json` implies that there are `application/vc` media types with different structured suffixes.

It's fine to state that a mapping is required to produce `application/vc+ld+json` from other registered media types.... It's not ok to assert that only a mapping to JSON-LD is required to create an `application/vc+...` or a "verifiable credential" in the "general sense of the word".
Not correcting this will lead to a lot of very bad credential formats that are needlessly shacl'ed (pun intended) to RDF and JSON-LD, which are not very well adopted technologies and have been a continuous source of contention since the inception of the work in v1.
All that being said, I don't think the current text regarding "digital credentials" and "verifiable credentials" is far off from something I would approve.
And my comment here is non-blocking, but there are other blocking comments on the sections that follow.
Thinking a bit more about this... I think:

- `application/vc` is really a "claimset" media type that does not assume any serialization or securing format
- `application/vc+ld+json` assumes JSON-LD serialization, and no securing format
- `application/vc+ld+jwt` assumes JSON-LD serialization, and JWT securing format

The use of structured suffixes is intentional extensibility, and we should say that directly.
I think the trouble with `vc+ld+json` vs `vc+ld+jwt` is that they are completely different things. One is a data model; the other is a data model with a digital signature.

I don't think a thing like `application/vc+ld+jwt` should even exist: if you look into the JWS spec, the `typ` is the media type for the complete JWS and `cty` is for the JWS payload. How it is transported over HTTP is a third media type.

Anyway, listing ecosystem support for signature schemes or the like should be transformed into "their corresponding data models". This can also be implicitly achieved by stating that the transformation algorithm input is a data model and the output is a data model.
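To illustrate the `typ` vs `cty` separation from RFC 7515: `typ` describes the complete JWS object, while `cty` describes only the payload carried inside it. A minimal sketch follows; the media type strings used here are illustrative, not anything settled by this thread:

```python
import base64
import json

# Protected JWS header: "typ" describes the complete JWS object,
# "cty" describes only the payload inside it (RFC 7515 §4.1.9, §4.1.10).
protected_header = {
    "alg": "ES256",
    "typ": "vc+ld+json+jwt",  # illustrative: media type of the whole JWS
    "cty": "vc+ld+json",      # illustrative: media type of the JWS payload
}

# The payload is the credential itself, serialized independently of the
# securing format -- the layering point made above.
payload = {
    "@context": ["https://www.w3.org/ns/credentials/v2"],
    "type": ["VerifiableCredential"],
    "credentialSubject": {"id": "did:example:123"},
}

def b64url(obj: dict) -> str:
    """Base64url-encode a JSON object without padding, as JWS compact serialization does."""
    raw = json.dumps(obj, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

# First two segments of a compact JWS (signature omitted in this sketch).
signing_input = f"{b64url(protected_header)}.{b64url(payload)}"
```

The third media type the comment mentions (how the whole thing moves over HTTP) would then appear only in the `Content-Type` header of the response carrying the compact JWS.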
Yes, agreed. Not to mention that IF we were to have something like this, it would have to be `application/vc+ld+json+jwt` to make any sense from a structured-suffix processing standpoint. I've also seen `application/vc+jwt` floating around, which also doesn't make sense, as `application/vc` is a meta-model (at best) with no serialization.

Still mulling the rest of @OR13's input, but wanted to highlight that @mtaimela's feedback resonated with me; glad to know I'm not the only one thinking that.
Great discussion.
There are several layers (inside out):

- Data model and vocabulary: the core of the specification; if I understand correctly, most of the community agrees with it.
- Format: JSON-LD, JSON (please continue reading)
- Protection: JWS, Data Integrity Proofs, ...
- Other processing elements (JSON-LD features, internationalisation, representation, ...)
As mentioned by @mtaimela we need to distinguish between different layers.
If VC is protected, the media type is defined by the protecting/securing mechanism.
The signature needs to define how to learn about the content/payload type.
In the case when a payload is protected, we'll have:
Payload may always be processed as JSON or JSON-LD, depending on the use case and requirements.
It is important to know that we have 3 signature formats: enveloping, detached, and enveloped. Enveloping and detached will carry the payload inside of the signature, whereas the enveloped signature carries the signature inside of the payload.
Processing the payload: whether the payload needs to be processed as JSON or JSON-LD depends on what/how we want to match or process the claims. This information is important when requesting or presenting VCs as all actors need to agree on the processing rules of the payload.
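The enveloping/enveloped placement distinction above can be sketched as follows (simplified structures for illustration only, not normative examples from either securing spec):

```python
# Enveloping signature (e.g. a JWS): the payload travels *inside* the
# signature structure -- shown decoded here, rather than base64url-encoded,
# for readability.
enveloping = {
    "protected": {"alg": "ES256"},
    "payload": {                      # the credential is wrapped by the signature
        "type": ["VerifiableCredential"],
        "credentialSubject": {"id": "did:example:123"},
    },
    "signature": "...",               # placeholder, not a real signature value
}

# Enveloped signature (e.g. a Data Integrity proof): the signature travels
# *inside* the payload, as a property of the credential itself.
enveloped = {
    "type": ["VerifiableCredential"],
    "credentialSubject": {"id": "did:example:123"},
    "proof": {                        # the signature is embedded in the credential
        "type": "DataIntegrityProof",
        "proofValue": "...",          # placeholder
    },
}
```

A detached signature would look like the enveloping case with the `payload` member removed and conveyed separately.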
Hi Orie, thank you for the reply, very insightful and much appreciated. I stand corrected on the JWT explicit typing; I didn't know that JWT deviates that much compared to JWS/JWE. I still don't see the value of tokenizing VCs, as you will always introduce a VP when disclosing those, so they are never standalone (maybe some coupon case could exist without holder binding), but this topic is very much outside of this PR 😇.

To not derail the original discussion too much, could you please comment on how you see the transformation algorithm input/output? I see that unlocking this question will unlock the rest.
Bi- or uni-directional transformations:
This is quite important, as with option 1 all signature options must have a source data model to transform from into `vc+ld+json` (which is then processed). The source data model could then be used for signature purposes however they wish. This follows the normal boundaries each signature scheme has and allows explicit media types for all tents.

If option 2 is allowed, then we see transformers that convert a data model into a signed data model, and this most of the time violates other signature schemes (like JAdES).
This question will also impact the following line
As then the securing of credentials, like JWT, could be removed from VCDM. These are not data models, but "signature schemes" to secure the data models. This also solves the problem of identifying the media types which should be used, and will push them to the owners of the tents.
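A rough sketch of the uni-directional (data model in, data model out) reading discussed above; `to_vc_ld_json`, the source field names (`iss`, `claims`), and the mapping itself are all hypothetical, not anything defined by the spec:

```python
def to_vc_ld_json(source: dict) -> dict:
    """Hypothetical uni-directional transform: map an externally defined
    claimset into the vc+ld+json data model. Input and output are both
    unsecured data models; securing is left to each signature scheme."""
    return {
        "@context": ["https://www.w3.org/ns/credentials/v2"],
        "type": ["VerifiableCredential"],
        "issuer": source["iss"],                 # assumed source field names
        "credentialSubject": source["claims"],
    }

# Example: a JWT-style claimset as the (hypothetical) source data model.
source_model = {"iss": "https://issuer.example", "claims": {"id": "did:example:123"}}
transformed = to_vc_ld_json(source_model)
```

Note that the output deliberately contains no proof or signature material; under this reading, signatures stay with "the owners of the tents" and never appear in the transform's input or output.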
Replying on this megathread, because it's where this problematic comment was made:
`application/vc` is NOT a media type, at all, noway, nohow, unless a registration was submitted to IANA out-of-band and out-of-WG and out-of-CG, in which case we cannot proceed to do anything with it until we have a copy of the relevant spec.
I believe this megathread can/should be solved under vc-jose-cose; however, we may be very close to a solution.
Main question is actually about JWT (not JWS, usage and role of JWS are clear).
This is my view and I might be wrong (but this is how I understand the JWT RFC - JWT is a profile for JWS: compact serialised + JWT claims that need to be defined by the use case):
If JWT is seen as a securing mechanism that uses/adds additional JWT claims to the payload to protect the content, then all the issues and conflicts can easily be resolved. In a conceptual model we should be able to distinguish between an issuer and a signer. In most cases issuer == signer.
If my understanding of the JWT is wrong (if so, let me know) it would be good to clarify the position and usage of JWT.
@selfissued @OR13 can you help with this one, or is there anyone else in the group who can help? Thank you!
Please open an issue on vc-jose-cose and move this discussion there. If there is anything left in this thread that applies to this PR, please make concrete change suggestions.
@msporny Moved it to w3c/vc-jose-cose#132. This thread should not block this PR.
We do not appear to state explicitly anywhere "according to the transformation rules of the aligned credential formats". This might be implied, but it would be better to state it explicitly.
Thus I like @dlongley's proposed edit "Specifications that describe how to perform transformations to ....". This makes it explicit that transformations must be specified.
Fixed in 5c5b4d8.
I am a little confused. The PR description says

> This is guidance for the VCWG, and thus does not need to be placed into normative text in the core data model.

but the section is normative. Not sure how useful these MUSTs and SHOULDs are, given that they try to restrict specifications potentially sitting in other SDOs...
The PR background noted "This is guidance for the VCWG, and thus does not need to be placed into normative text in the core data model." which specifically applied to this part of the Miami day 3 resolution: "Serializations in other media types (defined by the VCWG) MUST be able to be transformed into the base media type."
IOW, that guidance was for us as the VCWG, not for anyone else. Clearly, if we (as a WG) define another media type, that other media type must be able to be transformed into the base media type. It was setting the expectation that if anyone in the VCWG proposed something that /didn't/ map to the base media type (effectively splitting the ecosystem into two or more incompatible data models), that WG members would object.
The rest of the normative guidance tells other groups what the normative expectations are if they want to say that their specification is compatible with the ecosystem defined by the VCWG. It is effectively the contract between other WGs and this WG. Those other WGs don't need to follow any of the guidance if they do not want to be compatible with the ecosystem defined by this WG, so we're not imposing anything onto another WG unless they want to say that they are "compatible with the W3C Verifiable Credentials ecosystem", and if they do, we give them clear guidelines on what our expectations are to clear that bar.
Agree with this statement, but wondering if we can be a bit more helpful for implementers in applying guidance on how to achieve this requirement. Would a link to an implementers guide section, or a note about `@vocab` doing this automagically, be helpful?
`@vocab` is not enough to preserve the v2 context, because it has several processing directives that are applied to specific predicates and alter the shape of the RDF graph... for example `@id`, `@type`, `@container`, etc...
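To make that concrete, compare a context that only sets `@vocab` with one that uses per-term directives (a hedged sketch with made-up terms; the `@vocab` IRI shown matches the issuer-dependent one discussed for the v2 base context, but treat the whole thing as illustrative):

```python
# With only @vocab, every unknown term expands to a flat IRI with no type
# coercion and no container behavior -- the resulting RDF graph is "flat".
vocab_only_context = {
    "@vocab": "https://www.w3.org/ns/credentials/issuer-dependent#",
}

# Per-term directives change how values expand: @type coerces value types
# (e.g. to IRIs), and @container changes the shape of the graph. These are
# hypothetical terms, not real v2 context entries.
explicit_context = {
    "example": "https://example.org/vocab#",
    "knows": {"@id": "example:knows", "@type": "@id"},      # value coerced to an IRI
    "tags": {"@id": "example:tags", "@container": "@set"},  # container directive
}
```

This is why a bare `@vocab` cannot reproduce what the v2 context's per-predicate directives do: the two contexts above would expand the same JSON document into different RDF graphs.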
This is good for the "transformed conforming document" (aka vc+ld+json and vp+ld+json).... but I would go further here... A similar sentence should exist for the required "input media type"...
For example, https://datatracker.ietf.org/doc/html/draft-ietf-rats-eat-21#name-eat-as-a-framework
If an EAT token media type is used as input, it goes without saying (??? does it ???) that any JWT or CWT best practices should be followed.
This point has caused us a lot of pain in the past, see: #1149
Sometimes the BCP for the input media type will make mapping to the BCP of the output media type difficult... or feel like a downgrade in security properties... Let's tackle this directly.
^ Is that the normative language you're asking to be added? If so, BCPs aren't a MUST... they're usually a mixture of guidelines that don't need to be followed... and, as you said, it goes without saying that specifications should follow BCPs associated with those specifications. IOW, I don't know if this statement needs to be said as it's true of any specification.
Can you please craft some language that might have the effect you want (if the language above isn't what you were going for)?
I think that is in contradiction with the rest of the specification. Specifically, I doubt that using `@vocab` alone is a linked data best practice. We should remove that item. Otherwise we would need to revisit the whole JSON-only processing part, since it would also mean that an issuer has to follow linked data best practices when issuing VCs, which imo would not include using `@vocab`, but rather defining a context and vocabulary in an appropriate way.
The above says SHOULD not MUST. I think it makes sense to say that you SHOULD follow best practices, but you are not required to.
Or we would need to add all the linked data best practices to the VCDM 2.0 specification.
Normally, I'd prefer to have some examples of when it is fine to not follow best practices.
We should also provide a link to what it means to follow linked data best practices. Perhaps we have one in the spec already but I couldn't find it.
No, that's not what we should imply... I don't think anyone is arguing that something that doesn't follow Linked Data best practices is "invalid"... all that is being said is that "it could be much better." :)
If it does imply that, we'll need to rewrite it. All it means is that those VCs w/ issuer-defined properties aren't following LD best practices... they're taking a JSON-y approach, which may or may not be appropriate for the use case. A use case where it might be appropriate is a "closed cycle / closed ecosystem use case" -- where everyone knows each other and what each property means and doesn't feel the need to publish a vocabulary or semantics. This feature was put into the specification for people that didn't want to publish contexts or vocabularies (for whatever reason).
Generally speaking, they can use issuer-defined terms if it makes sense for them to do it. Some people might grumble, but it doesn't violate the specification for them to not document how their system works via the mechanisms provided by the specification.
What we're saying here is: You SHOULD document your terms... but you don't have to.
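For instance, a credential in the closed-ecosystem case above might carry issuer-defined properties with no published vocabulary, relying on the default `@vocab` in the base context to keep JSON-LD processing from failing (a sketch; the property names are made up):

```python
# A credential using undocumented, issuer-defined properties. With a default
# @vocab in the base context, these terms still expand during JSON-LD
# processing, even though their semantics are only known inside the closed
# system -- the SHOULD-but-don't-have-to case described above.
closed_system_vc = {
    "@context": ["https://www.w3.org/ns/credentials/v2"],
    "type": ["VerifiableCredential"],
    "issuer": "https://issuer.example",
    "credentialSubject": {
        "id": "did:example:123",
        "internalClearanceLevel": 3,   # hypothetical issuer-defined term
        "badgeColor": "blue",          # hypothetical issuer-defined term
    },
}
```

In an open ecosystem, the SHOULD kicks in: those two properties would get published term definitions in a context the issuer controls instead of falling through to `@vocab`.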
@msporny Why does this have to be part of the transformation section? My point is that this has to be added somewhere else. Perhaps to the media types section? It is not specific to the transformation.
@awoie @msporny thank you for the clarification.
If I understand correctly the WG agrees that VCs can be used in both open and closed systems and that in closed systems use of private or issuer-defined claims is acceptable. I believe it is in everyone's interest that the data model is used in both open and closed systems.
I guess then it is just a matter of wording, right? (This and PR #1202 clarify many points.)
It applies to the base media type, I've added an issue to track this here: #1217
@alenhorvat wrote:
Yes, correct.
This is why the default `@vocab` value going into the base context achieved consensus. Note that some of us still think it's a bad idea to not document the terminology you use, even in closed systems, but this compromise was made to ensure that the individuals who wanted to not have to define contexts or vocabularies, but use alternate mechanisms, could do so.

Yes, we just need to get the wording right in #1202 and #1203.