diff --git a/.markdownlint.yml b/.markdownlint.yml
index 3aed93b..251c245 100644
--- a/.markdownlint.yml
+++ b/.markdownlint.yml
@@ -16,3 +16,6 @@ MD033: false
 # MD034/no-bare-urls - Bare URL
 MD034: false
+
+# MD036/no-emphasis-as-heading/no-emphasis-as-header
+MD036: false
diff --git a/meetings/2025-02/february-18.md b/meetings/2025-02/february-18.md
new file mode 100644
index 0000000..ccbb5ae
--- /dev/null
+++ b/meetings/2025-02/february-18.md
@@ -0,0 +1,1554 @@
# 106th TC39 Meeting | 18 February 2025

**Attendees:**

| Name | Abbreviation | Organization |
|------------------|--------------|--------------------|
| Kevin Gibbons | KG | F5 |
| Keith Miller | KM | Apple Inc |
| Oliver Medhurst | OMT | Invited Expert |
| Dmitry Makhnev | DJM | JetBrains |
| Gus Caplan | GCL | Deno Land Inc |
| Daniel Ehrenberg | DE | Bloomberg |
| Jesse Alama | JMN | Igalia |
| Michael Saboff | MLS | Apple Inc |
| Ujjwal Sharma | USA | Igalia |
| Ashley Claymore | ACE | Bloomberg |
| Nicolò Ribaudo | NRO | Igalia |
| Philip Chimento | PFC | Igalia |
| Michael Ficarra | MF | F5 |
| Linus Groh | LGH | Bloomberg |
| Samina Husain | SHN | Ecma |
| Ron Buckton | RBN | Microsoft |
| Kris Kowal | KKL | Agoric |
| Mikhail Barash | MBH | Univ. of Bergen |
| Daniel Minor | DLM | Mozilla |
| Aki Rose Braun | AKI | Ecma International |
| Luis Pardo | LFP | Microsoft |
| Chip Morningstar | CM | Consensys |
| Eemeli Aro | EAO | Mozilla |
| Ben Lickly | BLY | Google |
| Mathieu Hofman | MAH | Agoric |
| Sergey Rubanov | SRV | Invited Expert |
| Chris de Almeida | CDA | IBM |
| Luca Casonato | LCA | Deno |
| Istvan Sebestyen | IS | Ecma International |
| Waldemar Horwat | WH | Invited Expert |
| Richard Gibson | RGN | Agoric |
| Shane F Carr | SFC | Google |
| Erik Marks | REK | Consensys |
| Justin Grant | JGT | Invited Expert |

## Opening & Welcome

Presenter: Rob Palmer (RPR)

RPR: Thanks everyone for coming to Seattle, we’re ready to begin the 106th meeting of TC39.

## Secretariat comments

Presenter: Samina Husain (SHN)

* [slides](https://github.com/tc39/agendas/blob/main/2025/tc39-2025-005.pdf)

SHN: I was not able to attend your in-person meeting in Tokyo. It’s nice to see everybody again. A lot has happened within Ecma in the last couple of months, so let me just give you a short report. This is the overview of what we have done within the secretariat across different activities. I am going to highlight some things on source maps and the new TCs we have, give a reminder on the Executive Committee meeting date and deadlines that you should all be aware of, and then cover some general collaborations that we continue to have.

SHN: So first, congratulations for the first edition of Source Maps. Yes, there you are NRO. Congratulations. Good work. Really hard work. I mean, it was brilliant that you did this within the year. So we’re looking forward to the next edition. It is published and was approved at the December GA, and I want to bring that to everybody’s attention if you were not already aware.

RPR: Round of applause.

SHN: Also want to highlight TC55; a lot of work has gone into that. I want to thank everybody for that work. The chairs are Luca and Andreu, and Aki is supporting you, and I think you will have the official meetings. I know there are some things that need to be taken care of administratively from the Ecma perspective. This is brilliant. Last year, 2024, was the year that we had two new TCs, and this is one of them. We’re very happy for that.
That was approved at the December GA. If there are individuals or organizations that want to be involved, that would be great.

SHN: The other TC, TC56, is the first that touches on artificial intelligence; I think this is very good for Ecma, to come into a new space. The members involved here are not only IBM: ServiceNow, Microsoft, and Hitachi, which are current members, have shown interest, as have universities that are active not-for-profit members. We also have a couple of invited experts and are looking to have them transition to become members, Cisco mainly being one of them. I think that is excellent. This also enables us to be more visible in the AI space. The AI Alliance has a number of individuals involved in the meeting. Hopefully this grows, not just as this particular TC but as other TCs in the area of AI, which of course, as everyone knows, is a dynamic and constantly changing topic. So thank you to the committee that worked on this also. It was approved in December.

SHN: Also from Ecma, new management for 2025: we had the final votes in December, and first I want to thank all of the people who nominated and were nominated for this. We had seven members who were nominated for the non-ordinary member positions, and we had only four positions. It’s excellent to see the interest. I hope your interest still remains, that you are still active, and that you will consider submitting your name again, as we do these elections every year. This is the final slate that was selected. So DE, who is sitting right here in the room, is our new President of Ecma. Jochen Friedrich was the previous president, and vice president and treasurer is Luoming Zhang; it is important to note that management roles are for ordinary members. We only have six ordinary members. We are looking forward to their support in driving Ecma.

SHN: From the Executive Committee (ExeCom), this is really exciting: we have new members, and many are sitting here. Jochen will chair, and we have Theresa O’Connor from Apple, Chris Wilson from Google, Mikhail Barash (University of Bergen), and Patrick Dwyer (ServiceNow); I don’t know if some are online. It’s great that we have new thoughts and new discussions. Peter Hoddie (Moddable) has been on it before, and it is also important to remember all of the relevant past information and leverage that, so it’s great that Peter is on. And Ross Kirsling (Sony) is also on it; maybe Ross is also online. This is a very new and dynamic Executive Committee. We will have our first meeting coming up in March. So again, thank you everybody for your interest, and congratulations to everyone that was appointed. You may applaud, since Daniel is here.

SHN: Just a few general items, and the important one: this year the ExeCom is earlier than usual. It is in March, rather than the middle or end of April. If you are, as I assume, going to have editions 16 and 12 of the respective standards, I need to have an indication of that. We need to bring that to the attention of the ExeCom, which is actually in two weeks. I can make that update immediately, but I just want to get confirmation from the committee that both those two editions are your intention for the GA in June. You have plenty of time for the opt-out, but I do need an indication that you want it. Maybe the editors can let me know that, or the committee can let me know that, before this meeting is over. Perhaps if you can by the end of the day, I would appreciate that.

SHN: We have talked about liaisons before.
Ecma has a number of liaisons that we keep active, like JTC 1 and W3C, and we have a historical one with IETF; in the past there were people from IETF coming to Ecma and TC39 meetings. Over time, there hasn’t been a lot of cross-contribution. I would not want to see the liaison disappear, because to build a new liaison is always more complicated, so I would like to keep it going. I’m asking if there is somebody on the committee that could be the point of contact between TC39 and IETF; I would appreciate that. If there is no topic or nobody of interest, I also need to know, so I can figure out how to maintain it from the secretariat. I don’t want to drop this. I suspect that as we move forward with new topics and areas that IETF is working on, we should have visibility of what is going on to make sure it is not impacting any of the work that the committee is doing. Please give it thought and approach me with who the individual could be.

SHN: We have a strong invited experts list, and together with AKI and the chairs we’re always reviewing it. We had an extremely busy end of last year; the last few months of the year were a bit of a blur if I look back at the activities. I have been very flexible. So the invited experts that typically would have ended their term at the end of 2024 I have not stopped, and they will continue on, because we appreciate everybody’s contribution; we will review the list at the end of the year. Sometime in the next little while I will be approaching the organizations that our invited experts are related to, to ask whether they would consider joining. So please be aware that you may get an email from me. It is a touch base just to inquire and see how your organization is looking at Ecma and whether they would like to join, because ideally that is the way we want to move forward. So continue on with your work, and together with AKI we will work with the chairs to make sure that the list remains accurate.

SHN: With W3C, we have done the transition of WinterCG to the generally used term WinterTC and the new TC55 that was formed at Ecma. In doing so, I had a number of conversations with W3C leadership and also met with their CEO Seth Dobbs, and they’re keen to see how Ecma can be more engaged with W3C; I know there are already member representatives here that are active in different spaces of W3C. Those are the four bullets I brought up in a previous meeting. I would like an indication on them; **horizontal review** is the one that is the lowest-hanging fruit. Does it in any way impact us? I would like guidance. It is important. I want feedback on that. I’m looking at Shane right across from me. If there are other topics that are important, reach out to me and AKI; the relationship with W3C, after the moving of TC55, opens up opportunities for more collaboration. There may be other projects going on in W3C that could come to Ecma. There are common members and eventual invited experts. W3C has a broad scope and a certain focus, and Ecma has a value it can bring. If there are topics, we have the opportunity to continue the strong relationship here. We have the liaison contact, Michael Smith. AKI and I are active with W3C, have a strong relationship, and will make sure we remain active with W3C and TC55.

DE: Great presentation. A couple comments that I wanted to add.
Just to emphasize the significance of WinterTC, or TC55: it’s about standardizing, initially, a subset of the web platform that’s supported on web-interoperable server environments such as, hopefully, Node.js and Deno. It is complementary to TC39’s work and scope; certain things we thought about in committee ended up getting web specs instead. Hopefully this can help regularize some of the core concepts in the ecosystem. I hope more TC39 members will be interested in joining this group. Development has switched completely to Ecma, unlike previous plans where the CG and the TC would live side by side. And thanks LCA and ABO and OMT for the leadership. As for the IETF liaison role: although IETF has a broad scope and it can be intimidating to be the liaison for a whole organization, there are starter tasks that people could work on. For example, we have certain media types with Ecma that are registered through IETF, such as CycloneDX’s, and there are small tasks like coordinating to update the registered CycloneDX types to point to the new version of the standard. There has been successful interaction between IETF and TC39, such as getting the new version of the datetime format standardized, and I hope people consider taking on this extra role.

SHN: Thank you DE. And give applause for the TC55 team. Thank you. I’ll go quickly through the annex slides and then open for any other questions. The annex slides, as you see, are uploaded. They remind everybody of the invited expert role on the TC, and of the code of conduct, which is very important, as already noted by RPR, for how we work, how we interact, and how we collaborate. As always, the summaries and conclusions are extremely important. They have been an improvement, and I thank you very much for all of that and hope to continue to build on it. AKI does the minutes and then I finalize them. So it’s really helpful for us to have these, and for everybody else.

SHN: There are a number of documents. I listed them. There’s a lot of documents. If you’re interested, and you see by the title of a document that you would like to know more about it, you may ask the TC39 chairs to access it for you, or ask AKI. These are the things that have taken place since the December meeting. There’s quite a number of things; I noted them in the earlier slides and will let you run through those. If there’s anything you want more information on, I’m happy to share it. There’s a huge list. The meeting dates, as a reminder: we have the next one coming up, which will be virtual. I did statistics last year: you had an average of over 75 participants per meeting, whether hybrid or online, which is excellent. You’re the largest committee and work actively. And of course every year you have the new editions. So it’s quite dynamic.

SHN: Our dates, just as said: what we have coming up I have noted in red. The ExeCom dates are the 5th and 6th of March, the 5th being the main date. The secretariat and your chairs will report at that meeting. I need to know about your new editions before that time, and you have plenty of time for the opt-out. The GA is in the later days of June, and if you work backwards you have time for the opt-out. That’s the slides and information from the secretariat. I want to thank everybody for supporting AKI in what she’s doing for TC39 and the other technical committees that we have. So please keep supporting AKI with her questions and her support to you.

RPR: Thank you for an excellent report.
### Speaker's Summary of Key Points

An update on recent Ecma activities was provided, including the approval of new Technical Committees (TCs), organizational changes, and upcoming deadlines.

#### Key Points

* The first edition of Source Maps was successfully published and approved at the December 2024 GA.
* Two new TCs were approved in 2024, marking a significant milestone for Ecma:
  * TC55: Focuses on standardizing a subset of web platform APIs for use in server environments
  * TC56: The first Ecma TC focused on AI.
* Ecma’s management and new Executive Committee for 2025 were announced. Elections were finalized in December 2024 and the first Executive Committee (ExeCom) meeting is scheduled for March 2025.
* Important updates regarding Edition 16 and Edition 12 of the existing standards need to be submitted before the ExeCom meeting.
* Maintaining and expanding Ecma liaisons: the liaison role between TC39 and IETF needs to be maintained. Volunteers are needed to act as a contact person between the two organizations.
* Invited Experts: the list of invited experts has been extended beyond December 2024. Organizations that have invited experts should consider transitioning them into formal Ecma members.
* Meeting minutes and summaries have significantly improved, thanks to Aki, the chairs, and all the participants.

## TC39 Chair Election

RPR: So CDA may have an intro to say, and AKI can be helping in the room and conduct things when the rest of us step out.

CDA: As many folks are aware, we really only do an election when we have a change in the roster, and that’s what we’re going to be talking about today. Next slide, please. So this is the full roster of folks: beyond all of our esteemed delegates and invited experts, you can see we have the chairs and facilitators, the convenors of task groups, editors, the administrator, and secretaries. The only thing that is changing is the chair group. The chairs themselves are unchanged, but we have a couple of facilitators formally stepping down: BT and YSV, whose help as facilitators, and previously as chairs, we very much appreciate. We will be looking to add to the facilitators; DLM and DRR have volunteered to help us out. The delta is in the individuals; the same Ecma members are still represented, so it’s nice to have continuity there. At this point we are going to step out to let the committee do its thing. That’s myself, RPR, USA, DLM, and DRR.

_Notes paused during discussion on new chair group_

AKI: We’re on the record as having consensus.

SHN: You have consensus. I do, of course, have to ask—do you accept the role? Do you accept to continue?

RPR: Yes, I accept gladly.

SHN: It is relevant to ask the question: even though you are voted in, do you all accept your roles? Is that the same for you, Chris?

CDA: Yes.

SHN: Thank you. If anybody who has been appointed doesn’t wish to accept the role, they should speak out. Congratulations.

RPR: Very thankful to BT and YSV for serving for many years as facilitators.

SHN: Ecma Secretariat will take the action to recognize the work of both BT and YSV.
### Conclusion

* The proposed chairs and facilitators group has been elected by acclamation:
  * Chairs: Rob Palmer, Ujjwal Sharma, Chris de Almeida
  * Facilitators: Daniel Minor, Daniel Rosenwasser, Justin Ridgewell

## ECMA-262 Update

Presenter: Kevin Gibbons (KG)

* [spec](https://github.com/tc39/ecma262)
* [slides](https://docs.google.com/presentation/d/1jgEaNaq6W7hZSKQILZ1F2sC1jTjRKwwnuqCGgi6iyQc/)

KG: Now that we have done the election, we move on to the 262 status update. So the update: there are a few normative changes, the last two of which haven’t landed yet. RegExp modifiers landed; import attributes and JSON modules did not yet. Apologies for the delay; they should be landed today or tomorrow. No significant editorial changes, because we have not had as much time as we would like, though there have been a number of minor ones, none of which we list here. Approximately the same list of upcoming work; we have started to chip away at bits and pieces of it. And then, of course, the most important thing: as mentioned, it’s a new year, and time to prep a new edition of the specification. We intend to freeze the specification, meaning no further normative changes except possibly bug fixes, after the end of the meeting, once we land the things that are going for Stage 4. I believe there’s also a normative PR, although we won’t land that one because it requires implementations. Never mind. Any or all of the proposals which are attempting to achieve Stage 4 will be landed before we freeze the specification. That will hopefully happen by the end of the meeting, and we will get everything in and all tied up. We will post a link to the candidate specification on the reflector; at that point the IPR opt-out period will begin. Watch for that link, and expect it approximately Thursday. That’s all I have.

RPR: Just checking for anything on the queue. No questions on the queue. Give it a moment in case anyone wants to say anything. Thank you KG.

NRO: I have a question; I didn’t get on the queue in time. The terms-and-definitions section that you are going to remove from the spec—what is the reason? Because I was just about to add one to the source map spec.

KG: So the problem is that it defines something like 3% of the terms and definitions in the specification. Most terms are defined closer to where they’re used, or in some relevant section, so it’s just a sort of mish-mash of random stuff in there. To the extent possible, we thought it would make more sense to consistently put terms closer to where they’re used.

NRO: Okay.

MF: I will note that Ecma specifications are expected to have a Terms and Definitions section, but that is one of the places where we have chosen to diverge.

RPR: Thank you. And so Kevin, you’ll write up a summary in the notes?

KG: Yes.

### Speaker's Summary of Key Points

Normative changes since last time: regexp modifiers, import attributes, JSON modules. The candidate ES2025 spec will be cut at the end of this meeting, to include any proposals which get stage 4 at this meeting.

## ECMA-402 Update

Presenter: Ujjwal Sharma (USA)

* [spec](https://github.com/tc39/ecma402)
* [slides](https://notes.igalia.com/p/q98gbOaS6)

USA: Good morning everyone over there in Seattle. I hope you’re having fun. I’m going to similarly talk briefly about what’s happening in 402. We have a few normative changes that are in the works. The first one is a relatively old one.
This is basically everything that we have ongoing at the moment. The first one is a normative note that was requested by a previous TG1 plenary, and we’re still soliciting feedback on it. Next we have new numbering systems by FYT; this has been approved by TG2 and should come to TG1 soon, which would be an upgrade for the 16th edition. Next we have another normative pull request by RGN, which is being discussed in TG1 at the moment, but there’s no agreement yet. And then we have three new ones, so expect to see them in TG1 soon. That’s all the normative changes we have; the last one, notably, was uncovered by Test262. That’s nice.

USA: For the editorial changes, the first one is a sort of rearranging of the spec to be more consistent; that’s already been merged. Then we have two more editorial pull requests open at this moment. Apart from that, we also have to merge a meta change by AKI that helps us generate better PDFs, but it’s currently blocked by a change to ecmarkup.

USA: Similarly to ECMA-262, we plan to freeze the spec soon, including the Stage 4 proposal DurationFormat. We plan to do it at the end of the week and start the IPR opt-out before the next meeting, same as 262. And that’s all.

RPR: Thank you. Currently there is no one on the queue. Would anyone like to ask questions? All right, then, thank you Ujjwal.

### Speaker's Summary of Key Points

Ongoing changes to the ECMA-402 spec were discussed and USA announced plans to freeze the spec at the end of the week.

## ECMA-404 Update

Presenter: Chip Morningstar (CM)

* [spec](https://ecma-international.org/publications-and-standards/standards/ecma-404/)
* no slides presented

CM: JSON is kind of like conditioner—it helps keep your data soft and manageable. And ECMA-404 is like the label on the package—the sort of classic timeless unchanging verbiage that everybody has come to expect and appreciate.

RPR: Thank you for that product analogy. I think that’s short and sweet. It doesn’t even need a summary.

## Test262 Update

Presenter: Philip Chimento (PFC)

* [repo](https://github.com/tc39/test262)
* no slides presented

PFC: I just have a list of points to deliver verbally; I don’t have slides, if that’s all right. Since the last plenary meeting we have a few updates from Test262. We have merged tests for iterator helpers and for deferred imports. We have a number of maintenance updates based on feedback from implementations. We also merged a test suite into the staging folder from SpiderMonkey. This is the first time that we have done something like this, but these are tests that previously lived only in the SpiderMonkey code base; they ran in Firefox testing in addition to Test262, but they weren’t specific to SpiderMonkey and could be useful for other implementations. So we merged this whole batch of tests, which are now available for everybody to run. It’s kind of similar to what V8 is doing with their two-way sync, so look for more work of that kind in the future. Then I have some less good news: Igalia’s involvement is less than previously, because we were funded by a grant that is finished. Any help from proposal champions in reviewing tests for proposals is very much appreciated, because as a whole, the maintainers group has a bit less time for Test262 than we had before. Then I have some exciting news: SFC, who is in the room with us, is working with students at the University of Bergen, Norway in the upcoming semester, and some will be working with Test262.
If you have project ideas for Test262, get in touch with SFC or me or JHD; we would love to hear your ideas. But that’s it for me. I will paste the summary into the notes.

SFC: Also, MBH is the main contact with the University of Bergen. He’s a great person to speak to if you have ideas for those contributions.

NRO: Not just champions: the problem I have seen is that champions write the tests, but they need someone to review them. If you’re familiar for any reason with a proposal, from test262 or from some browser, then even if you’re not the champion, having more people reviewing would be a great help.

OMT: Just going to say that some of the SpiderMonkey tests are very heavy and can crash engines, so we disabled them on test262.fyi.

PFC: I remember hearing that. That is something that we should look into, whether it’s changing those tests so they’re not quite so resource-heavy, or having a slow flag that test runners can skip.

### Summary

* We have tests for Iterator helpers and/or iterator sequencing
* We have tests for deferred imports
* Various maintenance and updates based on implementations
* We have merged a test suite into staging from SpiderMonkey. These are tests that previously lived in the SpiderMonkey codebase that were in addition to test262, but were not SpiderMonkey-specific and so could be useful for other implementations.
* Igalia's involvement is less than previously, because our grant finished
* SFC is working with students from U of Bergen in the upcoming semester and some will be working with test262. If you have ideas for student projects, get in touch with us or SFC or MBH.

## TG3 Report

Presenter: Chris de Almeida (CDA)

* [site](https://ecma-international.org/task-groups/tc39-tg3/)
* no slides presented

CDA: TG3 continues to meet weekly; we’ve only been talking about the security impact of proposals at their various stages. So, yeah, that’s it. Please join us if you are interested in security.

## TG4 Report

Presenter: Nicolo Ribaudo (NRO)

* [site](https://ecma-international.org/task-groups/tc39-tg4/)
* [slides](https://docs.google.com/presentation/d/1-suKLKywflKUDzTqVBxl-dEI2bJSfG5dl205BRtVCK4/)

NRO: I have slides. On the TG4 side, as SHN said before, we have the first edition of the spec published; thanks to everybody who helped us get this done. We have some planned changes to the spec, the most significant being converting it from Bikeshed to Ecmarkup. Scopes, one of the main proposals we are working on, needs to define how to parse some strings in the syntax within source maps, and that’s just easier and matches better with Ecmarkup. The same goes for some of the existing parsing that we have; for example, specifying the parsing of mappings with an actual grammar. Ecmarkup also makes it easier to link to ECMA-262 concepts, though it makes it harder to link to web concepts. Also, even though this is not a good motivation, it means we no longer have to figure out how to get Bikeshed to convert nicely to the Ecma PDF format.

NRO: And the scopes proposal: it’s going well. We keep having monthly meetings about it, and the champions have started writing spec text. If you’re interested in it, Simon (SZD?) and JRL from Google did an analysis of the trade-offs of scope information, about size and accessibility, so you can go check it in the repository. And that is it. Everybody is always welcome to join our meetings. Let me know if you need help getting involved.
### Summary

* ECMA-426, 1st edition approved by the Ecma GA
* The TG is in the process of converting the specification from Bikeshed to Ecmarkup
* Work on the scopes proposal is proceeding well

## TG5 Report

Presenter: Mikhail Barash (MBH)

* [site](https://ecma-international.org/task-groups/tc39-tg5/)
* [slides](https://docs.google.com/presentation/d/1jLeg1TuaD1l535LF_gf4dJaF7sz-Z10Gm5cXbmonHnk/)

MBH: TG5 was chartered about a year ago. We have since then had nine meetings, almost monthly, with 10 to 15 attendees. We also have TG5 workshops, which are in-person or hybrid meetings; one of the workshops will be this Friday. Examples of topics that we have discussed are here on the slide. The Friday workshop will be a presentation by a research group at the University of California San Diego about the MessageFormat study, and we will try to identify more proposals that could benefit. We also try to look into other directions where academic results can be brought into the work of the committee. Our plans for 2025 are in particular about establishing new collaborations: at IETF there is the Research and Analysis of Standard-Setting Processes Research Group, and at W3C the Process Community Group. We want to establish some collaboration with them, and arrange a break-out room at TPAC 2025 to try to engage more universities in web standards work. Related to this, there will be a workshop on programming language standardization at the European Conference on Object-Oriented Programming this July. That’s it.

RPR: Excellent. Ashley.

ACE: Can you please link to the slides?

RPR: Nothing more on the queue. Please summarize that for the notes. Next up it’s back to Chris with updates from the code of conduct committee.

### Summary

TG5 has had regular monthly meetings since it was chartered one year ago. In addition, TG5 has arranged three Workshops co-located with hybrid meetings, and currently plans another Workshop in Spain this May. TG5 intends to establish contact with the IETF [Research and Analysis of Standard-Setting Processes Research Group](https://datatracker.ietf.org/rg/rasprg/about/) and the [W3C Process Community Group](https://www.w3.org/community/w3process/).

## Updates from the CoC Committee

Presenter: Chris de Almeida (CDA)

* [site](https://tc39.es/code-of-conduct/#code-of-conduct-committee)
* no slides

CDA: Pretty quiet on the code of conduct front. We don’t have any new reports or anything we’ve had to deal with. I think we got a weird, AI-generated report that didn’t really make any sense, and so we just ignored it. But other than that, that’s it. As always, anyone interested in joining the code of conduct committee can reach out to one of us. Thank you.

RPR: Thank you for protecting us from the bots. Next up, we have GCL with don’t call well-known Symbol methods for RegExp on primitive values.

## Don't call well-known Symbol methods for RegExp on primitive values

Presenter: Gus Caplan (GCL)

* [spec pr](https://github.com/tc39/ecma262/pull/3009)
* no slides

GCL: This is a pretty small change. For some background here: Node.js and Deno write a significant amount of their implementation in JavaScript, so one of the things they do is attempt to harden the JavaScript that they use so that user code cannot break their implementation as it runs.
So this specific needs-consensus change has to do with—basically, there are five or six methods on `String.prototype` (match, matchAll, replace, some of these) that accept a parameter which when—well, we can look at the text for this. Basically it accepts this RegExp parameter, and then, if it is not undefined or null, it will attempt to look up `Symbol.match` on it and call that. Otherwise, it will create a regular expression and invoke the normal matching function on that. And so there are match, replace, replaceAll, search, and split; all of these functions do that with their respective symbols.

GCL: Basically what this change is proposing is that when you call these methods with an argument that is any primitive (in practice, a string), we should not read the symbol off of it, because that can interfere with the internals.

GCL: So that’s the background there. This is from a little bit ago, but we did hear that core-js never implemented the current spec behavior; they implemented it the way it is in the pull request, and nobody ever complained. That seems positive. We can go here.

JHD: I think this is great. There’s no reason any of us could ever want a primitive to be regular-expression-ish, and the vast majority of current and past TC39 members seem to hate this entire protocol anyway, so less usage of it sounds good. If I have any polyfills that need to be updated for this, I’m enthusiastic to do so.

GCL: All right. Seems like nobody else has much to say. I guess I will ask—oh, a plus 1 with no comment from OMT, who says it makes the implementation easier. Did you want to say anything more?

OMT: No.

RPR: And Dan Minor, did you want to speak?

DLM: Sure. We talked about this. It seems fine. I guess there’s a small, small chance of some compat problem, but that doesn’t seem likely.

SYG: Also seems good. Any thoughts on what to do next in the small but nonzero chance it is not compatible?

GCL: If it’s not compatible, we would just not do it, I guess?

JHD: Alternatively, if it’s not compatible because some website is depending on one specific kind of primitive that it’s making regular-expression-ish for some crazy reason—if that’s the case, we could also adapt this to allow that one kind of primitive to still be checked, but not the others.

GCL: Maybe. I think that would sort of defeat the purpose of the change in the first place.

JHD: Fair enough.

GCL: But, yeah, I don’t expect this to be web incompatible just due to how niche it is.

KG: I didn’t want to mess that up there. I’m in favor of this. I’m pretty sure when we did the disposable protocol, we did the same thing: we said that the dispose symbol is not looked up on primitives, only on objects. And I just want to call out that on the rare occasions that we introduce new protocols in the future, I think we should follow this precedent and always omit primitives from symbol-based protocols.

RPR: Shall we call for consensus?

GCL: Yes, do we have consensus?

RPR: There are no objections. Then congratulations, you have consensus.

GCL: Thank you everybody.

SYG: Sorry to interject, I didn’t have time to type this into the queue. Since Test262 tests sometimes fall through the cracks for normative PRs, I want to make double sure that GCL or whoever else is signed up to write these tests.

GCL: Yeah, we will take care of that.

SYG: Great, thanks.
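To make the behavior change concrete, here is a minimal sketch; the snippet is illustrative, not taken from the PR:

```js
// Hardened runtimes (e.g. Node.js/Deno internals) were breakable by user
// code installing a matcher protocol on the String prototype:
String.prototype[Symbol.match] = function () { return "hijacked"; };

// Before this change, "xabcx".match("abc") would read Symbol.match off
// the (boxed) string argument and call it, returning "hijacked".
// With the change, primitives skip the protocol lookup entirely, and the
// argument is used as a pattern source for `new RegExp(...)`:
console.log("xabcx".match("abc")[0]); // "abc", not "hijacked"

// Passing an actual object still honors the protocol:
console.log("xabcx".match({ [Symbol.match]: () => "custom" })); // "custom"
```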
### Speaker's Summary of Key Points

* Node.js/Deno write a large portion of their implementation in JavaScript, and so aim to ensure this implementation is hardened against user code.
* `String#{match,matchAll,replace,replaceAll,search,split}` will no longer look up the protocol symbol when called with primitives, rather than just undefined/null.
* Expected to be web compatible due to core-js never shipping the spec’d behavior

### Conclusion

* Consensus
* Deno will write tests

## Float16Array for stage 4

Presenter: Kevin Gibbons (KG)

* [proposal](https://github.com/tc39/proposal-float16array)
* [spec PR](https://github.com/tc39/ecma262/pull/3532)
* no slides

KG: `Float16Array`, plus `Math.f16round` and the DataView methods for reading and setting float16 values. This proposal has been at Stage 3 for a while. Implementations were, of course, several orders of magnitude more difficult than the specification. The specification is very simple, and basically just copies the existing Float32Array spec and says binary16 instead of binary32 everywhere. The implementations had to do quite a lot of work for each platform, at least when trying to optimize this, but they have all done that to the extent that they are comfortable shipping at this point. So JavaScriptCore, that is Safari, and SpiderMonkey, that is to say Firefox, are both shipping already. Chrome, I believe, made the decision to start shipping in March; that is when the version where this is on by default goes out. There is an open pull request for the specification. There are of course tests, which were a prerequisite for Stage 3.

KG: This is also starting to be adopted by other web specs, which was the intention. The canvas people are starting to work on having higher colour-depth canvasses that will be backed by, or at least make use of, Float16Arrays, and I know the WebNN spec is also interested in possibly making use of Float16Array, since neural nets make more sense with float16 than float32. I believe it should meet all of the criteria for Stage 4.

KG: Especially because this is a proposal that requires more from implementations than most proposals—this isn’t just syntax, you are getting in there and writing assembly—I want to make sure there is no concern from implementations before going forward. But I believe it’s ready for Stage 4.

DLM: Thank you. SpiderMonkey team supports this for Stage 4.

SYG: Sounds good to me. I do confirm that the plan is to turn it on by default in Chrome 135, which should (let me bring up the calendar here) hit stable the first of April.

RPR: There’s a comment: LGH implemented this in one of the smaller engines, and plus one for Stage 4.

KG: I would like to formally call for consensus. We have had plus ones; I’ll give everyone an opportunity to object.

RPR: Congratulations. You have Stage 4.

### Speaker's Summary of Key Points

* Spec is simple, implementations hard
* Implemented and shipping or almost shipping in all three major browsers
* Ongoing web API usages in progress in Canvas and WebNN

### Conclusion

Stage 4

## Redeclarable global eval vars for stage 4

Presenter: Shu-yu Guo (SYG)

* [proposal](https://github.com/tc39/proposal-redeclarable-global-eval-vars)
* [spec PR](https://github.com/tc39/ecma262/pull/3226)
* no slides

SYG: Great. Thanks. Before I go into it, do people care to hear a recap of what this is about?

RPR: Won’t hurt. Just quick, brief.

SYG: Very well.
So this was originally a needs-consensus PR to fix a corner case in dealing with vars at the global scope. The global scope is, to say the least, very strange, because among other things it is an open scope: if you have a script tag and you introduce something, and then you have another script tag and you mutate the global scope, multiple script tags don’t get their own global scopes. They get the same global scope. It’s always open; it’s never closed, unlike a function scope where, you know, the scope doesn’t extend beyond the two braces. So without getting too much into the weeds here, the upshot is that in the current spec there is a special mechanism on the global scope, a slot called `[[VarNames]]`, to specifically track global bindings introduced via the `var` keyword. This is a slight pain in the ass for implementations, and basically boils down to an extra bit on the property descriptor for everything on the global object (only on the global object).

SYG: I proposed we get rid of the special case and basically treat `var`s as we treat other non-configurable global properties. If you don’t know this weird corner of JS: `var`s at the global scope are not just bindings that you refer to with a bare identifier; they can show up on the global as a property. We have a special case for those properties on the global object that were introduced via `[[VarNames]]`. If we get rid of the special case, it eases the implementation burden and gets rid of a weird corner, in my opinion, but it is normative in that it changes behavior.

SYG: And I think the main consequence is basically this. This change allows you to write this snippet (see the sketch below): if you have a `var x` introduced via direct eval at the global scope, this is a global property. Currently in the spec, because var names are specially tracked on the global object, if you try to also have a same-named lexical binding at the global scope, this is an error. That is what the `[[VarNames]]` slot was for. I argued that erroring in that way is really not a use case anybody cares about, and that to get some simplicity we should just allow the shadowing. So this is currently disallowed, but it will become allowed. Nevertheless, don’t do this. I don’t know why you would do this. So just don’t do it.

SYG: So that is the actual change. And the status is that we have all shipped it, basically. This was the existing behavior in Chrome; nobody really complained. Safari has implemented this—this was brought to my attention first from a Test262 test, I think by Safari engineers, thank you very much for that. Firefox has shipped as well, or maybe not yet shipped, but—I guess shipped by this point, February 4th. And this is not checked off, but they do have editorial reviews for the actual PR. So with that, I’ll go to the queue before asking for Stage 4.

DLM: We support this as well.

RPR: Anyone else on the queue? All messages of support? Or objections? I think that’s about it. SYG, you can ask for consensus.

SYG: Yes. Could I please get Stage 4?

RPR: KM is plus one. There are no objections, so congratulations, you have Stage 4.

SYG: All right, thank you.
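A minimal sketch of the corner case SYG described; the actual slide snippet is not in the notes, so this reconstruction is assumed, and it needs to run as two separate scripts sharing the global scope:

```js
// -- script 1 --
eval("var x = 1;"); // direct eval at global scope: "x" becomes an
                    // ordinary global property, and the old spec also
                    // recorded it in the global [[VarNames]] slot

// -- script 2 --
let x = 2; // old spec: SyntaxError, because "x" was tracked as a global
           // var name; with this change: allowed, and the lexical
           // binding shadows the global property
console.log(x);            // 2 (the lexical binding wins)
console.log(globalThis.x); // 1 (the var-created property still exists)
```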
### Speaker's Summary of Key Points

* Recapped the existing spec behavior (global vars conflict with global lexical bindings) and the proposed change (global lexical bindings allowed to shadow)
* All 3 browser engines have shipped the proposed behavior

### Conclusion

* Stage 4

## RegExp Escaping for stage 4

Presenter: Jordan Harband (JHD)

* [proposal](https://github.com/tc39/proposal-regex-escaping)
* [spec PR](https://github.com/tc39/ecma262/pull/3382)
* no slides

JHD: `RegExp.escape`. Here is the spec; somewhere in here is the approved spec PR. And we have a bunch of implementations: Firefox has shipped it, Safari has shipped it, and there are two polyfills. I believe it’s implemented in Chrome but not released; SYG can probably confirm that.

SYG: Do you want me to confirm right now?

JHD: Whenever. And then, yeah, it’s met all the various requirements for Stage 4. So I guess SYG wanted to add your context.

SYG: I think this one, my bad, kind of fell through the cracks. This is implemented and staged now, and should be ready to go in 135 or 136, either April 1st or April 1st plus four weeks.

JHD: So although I certainly would have preferred it to have landed in Chrome first, I’m not worried about web compatibility risks here. Given that two of the three browsers have shipped it, I would like to ask for Stage 4.

RPR: You have support from DLM. No objections. Plus one from DE and no objections. Congratulations, you have Stage 4.

JHD: Thank you.

### Speaker's Summary of Key Points

2 browsers, 2 polyfills; 3rd browser implemented and will ship in April. All criteria met.

### Conclusion

* stage 4

## import defer for Stage 3

Presenter: Nicolo Ribaudo (NRO)

* [proposal](https://github.com/tc39/proposal-defer-import-eval/)
* [slides](https://docs.google.com/presentation/d/1LjsJhdTIP3wgo1odtVa-qbfyGU5M1W9YMm0AtKnJJKk/)

NRO: There have been no normative changes since last meeting, just a few tweaks following the editorial reviews. We have test coverage: all Test262 tests have been merged, and thanks to a colleague of mine we have a WebKit implementation passing the tests, so at least we know that the tests are not wrong. There are failures if you look at it—that’s due to WebKit problems and not to the tests; they’re known bugs and they’re in the process of being fixed.

NRO: We have implementations in tools: Babel and Prettier already support it, and there is a work-in-progress TypeScript implementation. If anyone wants to help, an Acorn plugin would be welcome; it unlocks syntax support for webpack and rollup and a bunch of others. That’s it. Just before consensus, I want to ask the editors if we have their blessing. We talked about how to proceed and got official approval on GitHub from part of the group, but not from all of it.

KG: Yeah.

NRO: Thank you. Then do we have consensus for Stage 3?

DLM: We support this. We’re quite interested in being able to use this in our internal code, so thank you.

NRO: Any objections?

DE: CDA is plus one on the queue.

CDA: I don’t need to speak, but I support Stage 3.

NRO: Thank you Chris. I think we have consensus. The next step is that I will open the pull request in the 262 repository; we’re just waiting for the import attributes pull request to land first, and that’s it. Thank you everybody.

CDA: Just noting for the record that DE also supports stage 3.
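For readers unfamiliar with the proposal, a quick syntax sketch; the module name and exported function are hypothetical, with deferral semantics as described in the proposal README:

```js
// Evaluation of the module is deferred until the namespace is first
// used; the module is still fetched and parsed eagerly.
import defer * as utils from "./heavy-module.js";

export function rarelyCalled() {
  // The first property access on the deferred namespace triggers
  // evaluation of ./heavy-module.js (synchronously, since it was
  // already loaded):
  return utils.doWork();
}
```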
### Speaker's Summary of Key Points

* No normative changes since last meeting, only some editorial tweaks
* All test262 tests have been merged, see https://github.com/tc39/test262/issues/4215
* WIP WebKit implementation to validate the tests
* Tool implementations in progress; would appreciate help with an Acorn plugin for the proposal

### Conclusion

* Consensus for Stage 3

## Explicit Resource Management Needs Consensus PR

Presenter: Ron Buckton (RBN)

* [PR](https://github.com/rbuckton/ecma262/pull/13)
* [proposal](https://github.com/tc39/proposal-explicit-resource-management)
* no slides

RBN: The only thing that I wanted to discuss here today is an issue that was posted for explicit resource management: the spec text is currently missing the definitions for the `constructor` property on `DisposableStack.prototype` and `AsyncDisposableStack.prototype`, and there is a PR for the ECMAScript specification that defines those as they were intended to be defined. I expect this is pro forma, something not intentionally excluded. So I’d just like to ask for consensus for this change.

RPR: Just pulling up the queue. So, SYG.

SYG: I support this. I think it’s clearly a spec bug.

RPR: Thank you. Kevin is also plus one, with end of message. Michael is also plus one.

RBN: I’ll wait and see if there are any objections.

RBN: Thank you very much.

### Speaker's Summary of Key Points

* PR addresses missing definitions for the `constructor` property on `DisposableStack.prototype` and `AsyncDisposableStack.prototype`

### Conclusion

* Consensus reached

## Temporal normative PR and status update

Presenter: Philip Chimento (PFC)

* [proposal](https://github.com/tc39/proposal-temporal)
* [slides](http://ptomato.name/talks/tc39-2025-02/#8)
* [PR](https://github.com/tc39/proposal-temporal/pull/3054)

PFC: My name is Philip. I work for Igalia, and I am presenting this work in partnership with Bloomberg. I’m sure that those of you who are returning have seen many Temporal presentations; this one should be quick. The progress update is that we are continuing to get closer and closer to the required two full implementations, close to done, and we’ve been cleaning up the issue tracker. In the meantime, several requests for editorial changes have come in from implementations, which we have incorporated. We’ll continue to analyze code coverage metrics to make sure that we have complete Test262 coverage, fixing gaps that we might have missed, and to answer any questions raised. There are a lot of questions coming in now, because the Mozilla Hacks blog did an article on Temporal being switched on by default in Firefox Nightly. There’s a surge in interest in Temporal and we are getting a lot of questions from people who would like to use the proposal. This is good; it’s fun to see all the questions coming in. Please do go ahead with your implementations and ship them unflagged when they’re ready. If something is preventing you from doing that, please let us know as soon as possible.

PFC: The proposal champions meeting is biweekly on Thursdays at 8:00 Pacific time, and it is open; if you want to join, please join. If you want to talk to us but can’t make it at that time, we can find another time to meet.

PFC: So, I mentioned it’s shipped in Firefox Nightly. This is quite exciting for us; it means that people are using it in the wild. There is now full documentation for the proposal on MDN; this was a long time coming. I think we started it [the documentation] three or four years ago. But it is now there.
There’s a compatibility table that I hope will get updated as implementations near completeness.

PFC: I do one of these graphs every time; apparently people like them. I do want to be clear that the percentage of test conformance does not mean percent done, just to say that upfront. But SpiderMonkey is close to 100% conformance; a handful of tests are not passing yet, but that’s less than half a percent. Ladybird, whose engine (LibJS) was previously known as the SerenityOS engine, is quite an improvement since last time, at 97%. And GraalJS is up there as well. V8 and Boa and JavaScriptCore are lagging a bit. I got word from one of the maintainers of Boa that they actually increased a couple of percentage points since I made this graph, but I didn’t have time to go through and retest everything, so this is as of a couple of weeks ago. And for JavaScriptCore, I’m happy to say that one of my coworkers from Igalia, Tim Chevalier, is looking to land additional patches to get the percentage up. Keep an eye on this space, and hopefully next time, “number go up”.

PFC: We have one bug fix to present this time that requires consensus. This change was requested by André (ABL), who is working on the Firefox implementation. The ISO 8601 calendar is a standardized machine calendar and remains unchanged arbitrarily far into the future. We don’t support dates that are outside the range of what JavaScript Date supports. However, you can create a `Temporal.PlainMonthDay` from a string that is outside of that range: the year can just be ignored. In the first line of this code example here, you can see that you get a PlainMonthDay of January 1st even though the year is out of range. However, for human calendars that are not ISO 8601, this places an unreasonable burden on the implementation, because you have to be able to find out what the date is in the human calendar for the date in the ISO calendar. For example, for the Chinese calendar, which has lunar years, a function call like this would require the implementation to calculate a million lunar years into the future. That is well outside the date range, and the answer would be nonsensical anyway, because lunar calculations are not that exact that far in the future.

PFC: We propose to continue allowing this for the machine-defined ISO 8601 calendar, but throw a RangeError in the case of any other human calendar. So I would like to ask for consensus for that normative change to the proposal. And I’ll also handle any questions at this time.

SYG: I wanted to confirm, on the percentage-of-tests-passing slide: is Boa the same as temporal-rs?

PFC: I think I would say that temporal-rs is the library that Boa uses.

SYG: What I mean is, if you added another Y axis for temporal-rs, would that be the same number as Boa?

PFC: I would say that doesn’t apply. Temporal-rs is not JavaScript, so it can’t run Test262 tests.

SYG: I think you know what I’m getting at. It sounds like temporal-rs is the same as Boa.

PFC: Yes. If V8 were to incorporate temporal-rs, I don’t know if the percentages would move in lock step; I don’t know enough about the connection between the two.

LCA: I think there’s a significant amount of code that sits in between temporal-rs and the engine, converting JavaScript values to and from temporal-rs objects. I think all of the tests related to that would not be captured by this comparison.

SYG: It makes sense.
LCA: The underlying operations may be correct, but there’s still a lot of variance in those transforms.

SYG: Got it, thanks.

DLM: Sorry, just wanted to express support for the normative PR; not surprising, since we requested it. Thank you.

MLS: So you’d like to throw a RangeError. What is the algorithm for computing that? Do you take the human-readable date and see whether you have something in the data that it can resolve to, or how is it computed?

PFC: I will just put it up on the screen. So the change is that we treat the ISO 8601 calendar separately, and if you get to this point, it’s a human calendar. Then we check the date that you gave in the string, which is in the ISO calendar, and if that’s within the limits that we accept for any other Temporal object, like PlainDate, it’s fine. If it’s outside of those limits, we throw a RangeError. It’s 10^8 days before or after the 1970 epoch.

SFC: I was just wondering if you could reiterate why the normative PR special-cases the ISO 8601 calendar instead of doing the new behavior across the board, including for the ISO one.

PFC: Because the ISO calendar is fully specified; it will not change. It may be the case that, for example, the Gregorian calendar adds an extra day to account for planetary rotation speed a thousand years in the future, I don’t know.

SFC: I agree that the first one can be implemented. I’m wondering why, though. It seems inconsistent, though it’s not wrong. I think it is the right call to do it consistently for all the non-ISO 8601 calendars, but this is just a case where it’s not clear to me why there’s a difference in behavior. I mean, I agree there can be a difference in behavior, but I’m not sure why it is so. Was this proposed to make the changes as minimal as possible?

PFC: I don’t remember off the top of my head why we decided to make that exception. I assume it was making the changes as minimal as possible.

DE: I’m very happy to see multiple implementations, and this proposal being complete in its definition, modulo a bunch of very minor bugs that are being discovered. I’m wondering: is Firefox planning on shipping this beyond Nightly soon? This is a question for Daniel.

DLM: Yeah, sure. Just to clarify, the previous state was that it was built in Nightly but disabled behind a pref. A couple days ago I landed the change to flip that, and now it’s enabled in Nightly. If that goes well, we hope to ship it. It might be a few months, but hopefully sooner than that.

DE: That’s great, thanks.

CDA: That’s it for the queue.

PFC: Sounds like there are no objections to consensus on the change, and no more questions. Thank you.

RPR: Thank you Philip. Do you want to do a summary of what was discussed?

PFC: I have a proposed summary up here that I will paste into the notes, and I’ll add any points that were discussed.

RPR: Thank you. You’re very well prepared to make sure we have excellent notes. All right. Let’s move to your next topic: status update on ShadowRealm.

### Speaker's summary of key points

With Firefox Nightly shipping the proposal and MDN adding documentation for it, there is a surge of interest in Temporal.

Implementations should complete work on the proposal and ship it, and let the champions know ASAP if anything is blocking or complicating that. You are welcome to join the champions meetings.
A normative change was adopted, to avoid requiring questionable calculations when creating PlainMonthDays in non-ISO calendars outside the supported PlainDate range (PR [#3054](https://github.com/tc39/proposal-temporal/pull/3054)).

## ShadowRealm Status Update

Presenter: Philip Chimento (PFC)

* [proposal](https://github.com/tc39/proposal-shadowrealm)
* [slides](http://ptomato.name/talks/tc39-2025-02/#1)

PFC: This work was done in partnership with Salesforce. Expecting that the meeting would be full, I kept the recap of what ShadowRealm is very short. So if you want to know more, or know more about the use cases, please come talk to me later; or, if there’s time on Thursday and folks would like it, I could prepare a short presentation on that. If you’re interested, ask me. So, the short recap.

PFC: ShadowRealm is a mechanism by which you can execute JavaScript code within the context of a new global object and a new set of built-ins. The goal is integrity: complete control over the execution environment, making sure that nothing else can overwrite your global properties or define things that you don’t expect. There’s a whole taxonomy of security, and that’s why I don’t like to say the goal is security, because it can mean a number of different things. So that’s why the goal is integrity. This is what ChatGPT thinks the inside of the ShadowRealm looks like: mysterious and intimidating figures embodying the realm’s eerie essence.

PFC: I talked about ShadowRealm previously in the December meeting. The big question at the time was: which web APIs are present inside ShadowRealm? I’m happy to say the W3C TAG adopted [a new design principle](https://www.w3.org/TR/design-principles/#expose-everywhere) that new APIs should be exposed everywhere. This includes other environments, not just ShadowRealm, but environments like AudioWorklets and ServiceWorkers and such things. Those were previously all enumerated manually with an annotation in WebIDL, and now you can say that an API is so fundamental it should be exposed everywhere there’s JavaScript. These are APIs like TextEncoder that maybe could have been part of the JavaScript standard library but aren’t. If you are very curious, I have a spreadsheet with the full list of over 1300 global properties on various global scopes, showing which ones are exposed everywhere, which ones are not, and why. You can follow the link in the slides.

PFC: Some of the things I said I would follow up on from last meeting: KG asked about `crypto.subtle`. Initially I had a pull request to have `crypto.subtle` exposed everywhere, and it looked like it would succeed, but I found out from the Web Crypto maintainers that the way it is specified depends on an event loop. One of the design principles is that things that depend on the event loop can’t be exposed everywhere, because not all environments have an event loop. So I think we’ll leave it out for now. Hopefully they will be able to redefine it (in case you’re hoping) to not depend on the event loop, and at that point it could be exposed everywhere. I also had a whole list of web platform tests for the web APIs present inside ShadowRealm; some of those still need reviews. If you are an implementer and interested in looking at those, I would much appreciate that.

PFC: As for what we’re working on now: last time, in December, we had a discussion about getting buy-in from browsers’ DOM teams, and how we might be ready to go to Stage 3 in the proposal in TC39, but that it shouldn’t happen without the buy-in.
+
+PFC: So we had some questions about use cases from that area, and we would like to shore up how convincing the use cases are. You know, we want to show that, as TC39, we are excited about this, that we’re glad it has HTML integration, and that it would be useful for end users of the web. So if you have a use case for ShadowRealm that you don’t mind sharing, please come talk to me in the hallway sometime during this meeting; I would be really interested to hear it. And if you’re okay with it, I will try to write something up that expresses how this benefits your end users. So please come talk to me. That’s it for now. Any questions?
+
+RPR: There’s nothing on the queue.
+
+KG: Sorry. For `crypto.subtle`, does the fact that it is not included, on the basis of using the event loop, mean no async APIs inside of ShadowRealms?
+
+PFC: It doesn’t mean that. Most async APIs are defined in a way that they don’t require the event loop. We don’t have the event loop in ECMA-262, but `Promise.resolve` still works.
+
+KG: `Promise.resolve` is only sort of an async API. Most of the async stuff is punted to the host: the spec enqueues something or whatever and tells the host to get back to it.
+
+PFC: I would say this is a problem that most async APIs don’t have, but they defined it this way in the Web Crypto spec, and apparently it is observable, so they would have to change that before they can say it doesn’t depend on the event loop.
+
+KG: I would be surprised to learn it is observable. Async tasks just get completed at various points in the future, and theoretically they can take any length of time. It would be nice to see that it observably depends on the event loop in a way that is distinct from any other API; otherwise I would be concerned that we can never have any async APIs. Maybe that would be okay, but it’s a little worrying. If it is just a detail of how the crypto spec is written, then okay, we can try to fix up the crypto spec, although it is largely unmaintained. I don’t know if that is going to happen. It had been getting more attention lately. Maybe we’ll get there.
+
+PFC: I don’t think other async APIs have the problem. I think it’s a detail of the way the crypto spec is written.
+
+KG: Okay.
+
+DE: I’m trying to understand: is there any particular choice of included or excluded APIs that you disagree with? Is it just about crypto, or –
+
+KG: I think crypto is the main one that I would like to see included, in the sense that it is generically useful, though I appreciate the reasoning for not doing so. We talked about a couple of others at the last meeting. For most of the things, like Web Codecs for example, which are in some cases purely computational, it makes sense to say no: they will probably involve hardware that we don’t necessarily want to invoke, and in a ShadowRealm that makes sense. Personally, I would generally be very permissive about what “purely computational” means: anything that could in principle be implemented in JavaScript or WebAssembly, for example, I would put in. That would include all of the crypto APIs and all of the media codecs and everything. I understand why we’re not doing those, and I don’t want to continue pushing on this. My hope is that we can get crypto specifically included in the future, because it is extremely generally useful.
+
+KM: Some feedback from talking to people about this; I don’t have time to write anything up formally, so I’ll give the feedback here. Yeah, I think the use cases were the thing we got pushback on when talking to people. I think I mostly talked with the Bun folks, and they didn’t seem super big on it; they weren’t in great need of it, or using it. The question I always got was: why can’t you do this with an iframe, plus some tool that automatically collects all the IDLs and scripts out all the names that you don’t want from the iframe? The other feedback I got is that this is a lot of ongoing work throughout the web platform, in the sense that everybody who is writing any web spec needs to consider it. So it seemed like pretty cross-cutting, ongoing spec and maintenance work, and people really want to see the use cases for that before committing to it, basically.
+
+PFC: Okay. On the use cases, I hope to have a larger presentation soon, like I said. On the first question, about why you can’t use an iframe: if you use a sandboxed iframe, only asynchronous communication is possible, so you cannot emulate the convenient synchronous communication between the main realm and the ShadowRealm in that way. If you use a non-sandboxed iframe, you can’t go in and delete every property you don’t want, because `window.top` is unforgeable and you will always have free communication with the main realm.
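+
+PFC’s contrast, sketched under stated assumptions (a browser context; the iframe half would also need a cooperating listener inside the frame to ever reply):
+
+```js
+// Sandboxed iframe: only asynchronous message passing is available.
+const frame = document.createElement('iframe');
+frame.sandbox = 'allow-scripts';
+document.body.append(frame);
+frame.contentWindow.postMessage({ op: 'add', args: [1, 2] }, '*');
+window.addEventListener('message', (e) => console.log(e.data)); // arrives later, if at all
+
+// ShadowRealm: a synchronous call across the boundary.
+const add = new ShadowRealm().evaluate(`(a, b) => a + b`);
+console.log(add(1, 2)); // 3, immediately
+```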
+
+?: Thanks for the feedback on the last one there.
+
+JSL: Just pointing out that for some of the async operations in Web Crypto right now, there’s streaming support being added, with async iterators for operations like `digest.node`, that might make it difficult to eliminate the event loop. We’ll see how that evolves. Something to be aware of.
+
+PFC: Thanks.
+
+LCA: I have a response to that. I don’t see at all how this is different from `ReadableStream`, for example: the fact that `ReadableStream` is exposed but `crypto.subtle` is not.
+
+JSL: It should be fine. But something to be aware of.
+
+RPR: We’re at the end of the queue.
+
+PFC: That’s it.
+
+DE: I just want to underscore what PFC and KM already said: use cases are very important. Implementations have already been made. The only reason they’re not shipping is lack of use cases. For a long time, the lack of web integration was the blocker; now it’s purely lack of use cases. So anyone in committee who wants to use ShadowRealms, please, please communicate the use cases.
+
+SYG: I wouldn’t say implementations are already done for the HTML integration part. It is true, I think, that implementations are already done for the pure JS part. But for Chrome, suppose the list that PFC has is the final list of APIs we want in this: I don’t think it’s true that that work is done.
+
+DE: Apologies, thanks for the correction. The thing that’s blocking the implementation work there is the use cases, right? Previously there was –
+
+SYG: Exactly right.
+
+DE: Previously it was the HTML design work, which Philip has done a good job completing.
+
+SYG: I think as part of asking for that work, the feedback has been—I want to echo what KM was saying: use cases are important. This has shown itself to be much more cross-cutting than I thought in terms of the maintenance cost, so, yeah, the use cases weighed against the maintenance cost are the deciding factor here.
+
+DE: In particular, it’s the maintenance cost of supporting the web APIs on the ShadowRealm global.
+
+SYG: It’s the cognitive burden: every API, current and future, has to consider ShadowRealm as a new kind of global. Is that what you meant?
+
+DE: Sure, maybe.
+
+MAH: I’m a little confused by the request for use cases, because my understanding is that the champions and others have expressed use cases: building libraries for executing code in a virtualized environment, among others. Those use cases have been expressed. How is that not sufficient?
+
+PFC: As I mentioned before, the WHATWG in particular likes to see use cases developed in terms of how they benefit the end user of the web. I think you’re absolutely right that use cases such as running code in a virtualized environment have been expressed. I think we need to step up how we communicate that to these other groups, and express it more in terms of: what benefit is there for the end user?
+
+MAH: The benefit for the end user: which end user? The users of the applications built on those libraries, or the developers using the APIs directly? Do we now require that an API being added is targeted at the mass audience of developers, or is it okay to have some APIs that are only useful to a few developers, who will build the libraries that are ultimately used by other developers?
+
+PFC: I’m interpreting it to mean: what can you build for end users of apps, using ShadowRealm internally, that you couldn’t build without ShadowRealm? I think that’s a reasonable question to ask, and I’ll try to answer it as well as possible.
+
+RPR: I want to point out we’re one to two minutes from lunch, when we should have a hard stop. Other people in the queue to get to?
+
+KM: I think the key thing here, in some ways, is that if this were a one-off thing that we did once, it would probably be an easier pill to swallow. The concern I heard is that everyone designing any spec going forward needs to consider this. So it’s an extra little bit of work on everything going forward, for spec authors who are maybe somewhat new to the web platform: they’re experts in some other area and need to understand yet another intricacy of the web platform when exposing their APIs. That was the feedback I got; it was not so much about the current APIs as about future work, ongoing forever.
+
+RPR: Thanks. There is spare capacity for a follow-on item if you wish to continue this on Thursday; have a think about that. For the notetakers: can we capture the queue? Philip, do you want to give a summary of where we got to?
+
+| Queued topic | Queued by |
+|:-------------|:----------|
+| Users = web browser users; why ShadowRealms is a bit special | Shu-yu Guo (@google) |
+| Consider topic (on Thursday?) going into details on why the Salesforce/Agoric use cases aren't persuasive | Daniel Ehrenberg (Bloomberg) |
+
+PFC: Sure. I presented this status update on the ShadowRealm proposal. We are primarily focused on describing use cases in terms of end users of the web; we would be happy to hear your use cases if you have them, and we’ll come back in a future meeting with another update.
+
+RPR: Thank you, Philip. All right, that brings us to the break, to lunch. I will note that because we pulled certain things forward, the afternoon schedule has been rearranged, so please do check that out. We will resume at 1:00 p.m. Lunch is happening now; we have sandwiches over there. Anything more? I think we’re good. Also, if anyone has any feedback, including about the physical temperature, please let me or Michael know. Please enjoy your lunch.
+
+### Speaker's summary of key points
+
+We presented a status update on the ShadowRealm proposal. We are primarily focused on describing use cases in terms of end users of the web. We'd be happy to hear your use cases if you have them, and we’ll come back in a future meeting with another update.
+
+We discussed the designation of `crypto.subtle` as not exposed everywhere, whether it could be exposed everywhere in the future, and what it means for use cases to be described in terms of end users.
+
+## Decorators implementation updates
+
+Presenter: Kristen Maevyn Hewell Garrett (KHG)
+
+* [proposal](https://github.com/tc39/proposal-decorators)
+* [slides](https://slides.com/pzuraq/decorators-for-stage-3-2022-03-977778)
+
+KHG: So, yeah, a quick update on the decorators implementations: everybody’s favorite proposal, back again. Okay, before we get started, I basically just wanted to give a quick refresher on what decorators are about, and then talk about the status of the implementations and some of the things that have come up.
+
+KHG: So, the refresher: decorators are functions that add a few core capabilities when applied to classes or class elements. The first is replacement: being able to replace the value that is being decorated with one that is similar, of the same general shape, so you replace a method with a method, a class with a class, an accessor with an accessor.
+
+KHG: The second capability is initialization: being able to initialize the value, per instance, with a potentially different value. With methods, you can do things like bind methods; with accessors, class fields, or auto-accessors, you can assign the default value or intercept the default value, and so on. Next is metadata: being able to associate some extra information with the value, for instance type information or serialization information. And lastly, access: being able to do things like get and set the value out of band. You can do that with private values and with public values, and that can be a way to, for instance, add a serialization layer that can access private values, or test helper methods and whatnot, or friend methods that can do that in some way.
+
+KHG: Some common use cases for these are things like validation libraries and dynamic type systems: being able to annotate things and say “this is a string” or “this is a number”, and having that actually work at run time, not just at compile time. Also ORMs; declarative data structures like serializers, models, and whatnot; reactivity libraries like MobX; method binding, like I mentioned before, which is a very common one; debugging tools, like being able to add a deprecated decorator that will log when a value that’s meant to be deprecated is used, or being able to log whenever a function is called, or send an event, or whatnot; and dependency injection, if you need to annotate a class to say “here are the things I need”.
+
+KHG: And then, real quick, because this comes up a lot: why are we starting with classes? Because function decorators are also a thing. They’re not part of this proposal, but they’re something people have wanted a lot, and arguably they would be simpler to implement: it would be a smaller spec and all that. And why do we need these at all?
+
+KHG: So, first off, when it comes to functions, today it is possible to use the decorator pattern without syntax. You can create a function that receives a function and returns a decorated function. It’s very declarative, it’s easy to understand, and it’s performant overall. There’s really not much downside, with the exception that the name here, `memoizedFunc`, would not get applied to the resulting function; if you’re trying to debug it, that gets a little annoying. But that’s the only real issue with function decorators at the moment.
+
+KHG: When it comes to classes, we don’t really have that same capability. For instance, if you wanted to create a memoized method this way, it would create a new method, a closure, per instance of the class, and that might not be what you want: you might want to decorate the prototype. To decorate the prototype, you would have to do that either using a static block or imperatively after the class definition, and this is where it can get really complicated. I think one of the main benefits of classes over raw prototypes was the fact that they’re a lot more predictable. I used to see code, before class syntax, that would do things like conditionally add a method to a prototype. Sometimes maybe that makes some sense, like if you want a debug-only method or a debug version of a method, but in general it was very confusing and hard to read. So decorators really simplify this whole thing and make it a lot easier overall, and more idiomatic and whatnot.
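+
+A sketch of both patterns, assuming the Stage 3 decorator shape of `(value, context) => replacement`. `memoize` and `memoizeMethod` are illustrative helpers, not part of the proposal, and the class half needs an environment with decorator support, such as the TypeScript and Babel transforms mentioned later in this update:
+
+```js
+// Function "decorator" today: just a higher-order function. It works,
+// but the resulting function's name is lost, which hurts debugging.
+function memoize(fn) {
+  const cache = new Map();
+  return (arg) => (cache.has(arg) ? cache.get(arg) : cache.set(arg, fn(arg)).get(arg));
+}
+const memoizedFunc = memoize((x) => x * 2);
+
+// Class method decorator with the proposed syntax: replaces the method
+// once, on the prototype, instead of creating a closure per instance.
+function memoizeMethod(method, context) {
+  const results = new WeakMap(); // per-instance cache (zero-arg methods only)
+  return function () {
+    if (!results.has(this)) results.set(this, method.call(this));
+    return results.get(this);
+  };
+}
+
+class Point {
+  x = 3; y = 4;
+  @memoizeMethod
+  norm() { return Math.hypot(this.x, this.y); } // computed once per instance
+}
+```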
+
+KHG: So, yeah, community interest also remains really high. It was the second most anticipated feature in the 2024 State of JS survey. Anecdotally, we’ve received tons of feedback that it’s looking really good and that people are really enjoying using it. It’s one of the most widely used syntax additions overall. And, yeah, I think it’s very much anticipated.
+
+KHG: And then, implementation status. We have shipped transforms in TypeScript and Babel, and those have been widely adopted by the community, with some exceptions for people who are waiting on metadata or on parameter decorators, because that was something that the older legacy TypeScript decorators had as well. Tests have been written for test262. I have not been able to get them merged, because I have been very, very busy with job things, but the tests themselves are comprehensive: they cover every edge case and corner case that we have found so far, at least, and I think all they really need is a rebase and they’re good to go. Edge is currently nearing completion with its implementation in V8, SpiderMonkey is around 75% complete, and we have a number of proposals that are awaiting completion of this one to move forward. They’re kind of in a holding pattern: parameter decorators, function decorators, and grouped accessors being some of them.
+
+KHG: And, yeah, what we have heard so far, as we’re approaching completion, is that several implementers have been expressing some hesitation about being the first to ship decorators, so it’s at a bit of a standstill at the moment, and we wanted to take some time to discuss those concerns at plenary and, yeah, just dig in a little bit. So that’s pretty much where things are at.
+
+NRO: So there are multiple implementations of decorators: as KHG mentioned, there’s one in Babel and one from the Edge team. The problem is we don’t really have tests; at least, we’re not running the tests, because they’re not merged yet. I’m going to see whether we can try to run them for Babel, and please, native implementations, do the same: run the tests from the pull request. I know it’s huge for the test262 maintainers to review PRs like this, but we can catch potential problems in the tests by running them and seeing what’s failing in our implementations.
+
+USA: Next on the queue we have DE. Oh, sorry, there’s a reply by PFC.
+
+PFC: As far as the test262 PR goes, I think the only thing blocking it from being reviewed right now is that some of the generated files are missing their corresponding source files. If you have time to add those, then, like what we’ve been doing with other large PRs, we can split it up, and hopefully merge the pieces into the main tree a bit faster.
+
+DE: It was mentioned in the presentation that there’s, I guess, a complete implementation in V8 out for review, and a partial implementation in SpiderMonkey behind a flag. Can we discuss those more? Could we hear from the Edge team what your implementation status is, where that is?
+
+LFP: There is an implementation that we submitted to Chromium, and it’s currently waiting for review.
+
+SKI: Yes. We have been implementing it, as Luis said, and while we are generally in sync with the upstream V8 team about features we’re implementing, we are currently waiting for review of this work. We want to resolve the issues that Kristen raised in this plenary in an open discussion in TC39, to understand the concerns of all the other engines and other stakeholders in the decorators proposal.
+
+DE: Okay, great. Do we have anybody here from these engines who could speak to those concerns? Shu, are you on the call? DLM?
+
+SYG: I’m here. What would you like to—sorry, what was the question?
+
+DE: Are you considering reviewing the patches that the Edge folks made for decorators? If not, is there a reason why not?
+
+SYG: It’s currently not prioritized. We also have reluctance to be the first movers to ship decorators here.
+
+DE: Why is it not prioritized?
+
+SYG: Because we would like to not be the first to ship it.
+
+DE: Okay. DLM, do you have any thoughts on this?
+
+DLM: Sure, I can provide a bit of an update. I was working on decorators up until about a year and a half ago. At that time I stopped my work because I had higher priority things to work on, and it just hasn’t become a priority for us again since then. So our implementation is paused for now.
+
+DE: Is there anything that either of you could say about how you determine the priority of these things?
+
+USA: There is a reply by MLS on the queue.
+
+MLS: Yeah, I think we’re in a similar boat. A, we sort of don’t want to be the first to ship this, and B, we don’t view it as a high priority given the other priorities we have, dealing with performance, security, and other features we’re implementing. It’s a large feature to implement, and it will take a good amount of time, I would think, to do it.
+
+DE: So maybe we can discuss how browsers prioritize features, so we can understand why other things were prioritized and this one wasn’t. I mean, overall, it would be really useful to get input from browsers on how we in TC39 should prioritize our work, so that we’re aligned with what will make sense for browsers’ priorities.
+
+SKI: So, yeah, as KHG shared, decorators is a popular feature among developers. The bug for the implementation of decorators has about 78 votes, and we were wondering if any data on the ground, like any surveys, or implementation and usage experience, would help. Is there any data that could be collected that would help align decisions, like increasing the priority for implementation? I mean, how do we get out of this deadlock?
+
+DE: Can I request: even if the three browsers don’t have anything to say now, maybe you could come back at a future meeting and give us more clarity on how you determined the prioritization, what data you might find interesting, or whether you’d like the proposal to be withdrawn. It’s just very hard to interpret the signals. It would be really helpful and productive for this committee if we could get more clarity from the three browsers.
+
+KHG: Yeah, just to chime in, I haven’t had a lot of time to dedicate to this since I left LinkedIn several years ago, and I’ve been putting in spare hours where I can find them to keep everything updated as much as I can. But I think that lack of clarity has been really hard to deal with, because it feels kind of arbitrary, and it also feels like a really high bar to say that we have to not be the first one to ship a feature. That can just turn into a never-ending stalemate. And it’s not like we’re saying you have to implement the feature, because the implementation is already there; it’s just shipping it. I’ve put five years of my life into this now, on and off, obviously, but I’d really like to see it get over the line.
+
+MLS: In response to DE: I’m not at liberty to talk about how we set our priorities. All kinds of things figure into that: certainly what’s being standardized, but we also have performance work, security mitigations, and things that are coming down our hardware pipeline that we need to do development for. So I can’t tell you what our priority is for certain things; you have N things and you have to draw a cut line someplace, based on the priorities of the current development cycle.
+
+SFC: Yeah, when this body advances proposals, the ones we advance are largely the ones my team determined are important to our users, our clients at Google, and we also put in the work. My team has been putting a lot of time into the Temporal proposal because it’s important to our users: users of internationalization libraries, developers trying to build internationalized apps. And that’s how that happens, at least for Intl proposals; I can’t speak for other proposals where I’m not familiar with the users and the clients. But I just want to draw out that point: Intl proposals tend to get implemented pretty quickly, and the reason, at least on my side, is that my team is implementing them. And I’m not the V8 team.
+
+KHG: If it really were just “oh, we haven’t had a chance to review it”, or “it just hasn’t been prioritized”, or “we don’t have bandwidth to implement it”, that would be totally understandable. We all have our priorities and we’re all trying to get things done. I think it’s more that we have an implementation ready to go, and it’s just not moving forward, because it feels like it’s being gatekept a bit, I guess.
+
+DE: Will we hear further feedback from SpiderMonkey or V8 about your prioritization? Because it would be really great and useful to understand, as the Edge team was saying, whether there is any data we could collect that would be relevant for you, or whether the browsers don’t want this proposal to proceed, or anything more.
+
+[a long period of silence]
+
+DE: Well, I hope that in the future we can be in touch about this. Historically, when we bring something to Stage 3, the assumption has been that’s because, as a group, we are prioritizing it to some extent. I hope that in the future people can block Stage 3 if they really see proposals as very low priority to implement; I was expecting that Stage 3 would be a sufficiently positive signal. Increasing clarity here in the future would be really good, with respect to this proposal and with respect to future proposals as they’re proposed for Stage 3.
+
+USA: Kristen, would you like to make any concluding remarks?
+
+KHG: No, I think that’s it.
+
+### Speaker's Summary of Key Points
+
+* Decorators is a well-received and highly anticipated JavaScript feature.
+* Lots of use cases, lots of good feedback overall.
+* Implementations (V8 and SpiderMonkey) are nearing completion.
+* No web engine wants to ship first.
+
+### Conclusion
+
+* Status quo remains the same; no one currently plans to ship.
+* No browser was willing to explain the reason for their deprioritization.
+
+## Curtailing the power of "Thenables" for Stage 1
+
+Presenter: Matthew Gaudet (MG)
+
+* [proposal](https://github.com/mgaudet/proposal-thennable-curtailment)
+* [slides](https://docs.google.com/presentation/d/1Sny2xC5ZvZPuaDw3TwqOM4mj7W6NZmR-6AMdpskBE-M/edit#slide=id.p)
+
+MG: I want to talk about thenables, and I want to make thenables less powerful. So, what are thenables? A thenable is an object that has a `then` property; objects that have `then` properties are treated specially in promise code. The why of this comes from before my time on TC39, but basically my understanding is that pre-standard promise libraries used to support this sort of behavior, and there was a desire to make these things compatible and harmonious—a very noble goal.
+
+MG: Okay, so what’s the problem? This is something that I have now seen multiple times, on multiple teams, so I want to talk about it. The problem is that it’s very easy for implementers, particularly in web engines, though I suspect this sort of thing can pop up elsewhere, to totally forget that this behavior exists. It’s the kind of behavior that is subtle if you don’t run into it very often, and if you’re not having it rubbed in your face, you can forget about it pretty easily. And so you can accidentally create cases where user code gets executed where you never expected that to be possible.
+
+MG: And so an example, which I wrote up here: we have WebIDL, which is an interface description language for the web, and you can define a dictionary, which is just like a bag of data. These things get code generated into nice C++ structures so we can work with them on the C++ side, and they’re great. Then there is a nice, beautiful translation system that translates them into JavaScript objects and back. Cool, everything’s nice and lovely. So you have one of these C++ structures, and the spec says to resolve a promise with it, so you just call your C++ version of `Promise.resolve` on this object, and you never think about whether code will actually get executed in script at all—because why would you? You’re just resolving this C++ thing. The problem is that dictionaries convert to objects with `Object.prototype` as their prototype. So when you do that translation from the C++ object to a JavaScript object, the result inherits from `Object.prototype`.
+
+MG: Oh look, somebody put a `then` property on `Object.prototype`: accidental user code execution. Something happened that you didn’t expect. And this has actually happened again and again. I didn’t even look that hard to generate this list, and to be honest, I didn’t even bother to look at WebKit; there could be similar WebKit bugs that I didn’t try looking for. We’ve even had one of these in the spec: the spec CVE from last year was basically this kind of problem.
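+
+The hazard is easy to reproduce in plain JavaScript. A minimal demonstration of the pattern MG describes, with an object literal standing in for a converted WebIDL dictionary:
+
+```js
+// Any plain object inherits from Object.prototype, so a patched `then`
+// runs attacker-controlled code whenever the object is used to resolve
+// a promise.
+Object.prototype.then = function (resolve) {
+  console.log('user code runs here, unexpectedly');
+  resolve('hijacked');
+};
+
+const dict = { width: 100, height: 50 }; // think: a converted WebIDL dictionary
+Promise.resolve(dict).then((v) => console.log(v)); // logs 'hijacked', not the dict
+```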
+
+MG: So, my Stage 1 ask here is ultimately: can we do something about this? I come hoping that the answer is yes, and I want to present a little bit of the design space I see for options here. But the actual ask is the Stage 1 ask, which is: do we agree that there’s a problem here, and do we think that there exists a potential solution?
+
+MG: When we were dealing with the spec CVE, one of the proposed outcomes was that we would fix the problem directly, but also pursue a couple of mitigations. One of the mitigations that came up was: what if we made `Object.prototype` an exotic object and had it exotically reject `then` properties—you change [[DefineOwnProperty]] on `Object.prototype` so it silently no-ops. Another option was to make some promise resolution functions not respect thenables; it was not super clear which ones we could do that for, and I think that would be a little bit challenging to audit. But it does suggest that there is at least some ability for us to address this, and that there might be some appetite in committee to do so.
+
+MG: I did want to come with a third proposal, because I’ve been thinking about this for a while, trying to figure out what a nice answer to this would look like. The third proposal that I would suggest looks something like this. Specification-defined prototypes—so this would be like `Math` and `Error` and `Array` and `Object.prototype`—get a new internal slot; call it the internal proto slot. Objects that have the internal proto slot are exactly the same as any other object, but we then add a new abstract operation that pays attention to this internal slot. That abstract operation, GetNonInternal, does the prototype chain walk that you would expect for a get, but as soon as it sees that the object it is about to look at has the internal proto flag, it stops and just returns undefined at that point. We then replace the promise resolution machinery that looks for `then` on the prototype and say: use this new abstract operation, GetNonInternal.
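+
+A rough sketch of that lookup in JavaScript terms. The `isInternalProto` predicate is hypothetical; the real design would use a spec-level internal slot, not a runtime set:
+
+```js
+// Hypothetical predicate standing in for the spec-level internal slot.
+const INTERNAL_PROTOS = new Set([Object.prototype, Array.prototype]);
+const isInternalProto = (o) => INTERNAL_PROTOS.has(o);
+
+// Walk the prototype chain as an ordinary get would, but stop without
+// reading the property once a flagged prototype is reached.
+function getNonInternal(start, key) {
+  for (let o = start; o !== null; o = Object.getPrototypeOf(o)) {
+    if (isInternalProto(o)) return undefined; // stop before consulting it
+    const desc = Object.getOwnPropertyDescriptor(o, key);
+    if (desc) return 'value' in desc ? desc.value : desc.get?.call(start);
+  }
+  return undefined;
+}
+
+// Promise resolution would then use getNonInternal(x, 'then'), so a
+// `then` installed on Object.prototype is never observed.
+```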
+
+MG: This is nice. It addresses some of these bugs: it fixes some of them and mitigates others. There are some advantages. I think it’s a somewhat more harmonious design than turning `Object.prototype` into some exotic object; as an engine implementer I like this, because I don’t want `Object.prototype` to be an exotic object. It can also be integrated into WebIDL: we could change the WebIDL spec to say that IDL-defined prototypes and classes get this internal proto flag as well. And, yeah, as I said, it avoids making `Object.prototype` exotic.
+
+MG: Is it perfect? Of course not. This is a mitigation, and it doesn’t really fix the whole class of thenable problems. In fact, in the write-up in the proposal repo, you’ll see it definitely addresses some of them and definitely does not address others.
+
+MG: Now, I didn’t want to come with zero data, because I did want to know: how likely is it that this could be web compatible? Unfortunately, I goofed a little bit when I did this telemetry, so it doesn’t entirely answer the question I was hoping to answer. What I have is telemetry added to the thenable paths in Firefox. It really collects three bits of data. The first: did you ever, on a page, use a thenable—did you resolve an object by going down the path of calling `then`? The second bit of data gathered is: did you resolve the `then` from an object’s prototype, any prototype at all? Essentially, “is it not an own property” is the only check. And the last bit is: was the `then` property resolved on a standard prototype? Because this was cursory data that I was whipping together for, roughly, this presentation, I used a surrogate for “what is a standard proto”: inside SpiderMonkey, we have a big enum full of the standard prototypes, and essentially, if the prototype you found the `then` property on resolves to one of those prototypes, I call that a standard proto.
+
+MG: This is a flawed metric for two reasons. One, I mentioned the idea of trying to fix this for the WebIDL stuff, and this doesn’t count any WebIDL prototypes—so that would be flaw one; it doesn’t count anything for WebIDL, and so it is ultimately an under-count. The other thing is that it doesn’t actually address a question I was hoping to answer, and I didn’t realize until I was making this table that I couldn’t really tell you the answer, which is: if the only thing we did was mark `Object.prototype` as an internal proto, give it the internal slot, how often would we run into that on the web? I can’t give an answer to that. If we got to Stage 1, I would probably add that kind of telemetry.
+
+MG: The numbers are—well, what I have been learning from telemetry lately is that the numbers never match my expectations. This is across four days in February. You can see that 2.2% of pages are getting an actual `then` property. Of that, 2%, so the vast majority of them, are getting it off of a prototype; this probably makes sense. 0.13% are getting it off of a standard prototype, which, if I’m being very honest, is quite a bit higher than I was hoping for—on the order of an order of magnitude higher than I was hoping for. I don’t have answers as to what kind of pages actually do this, or whether there are real use cases this is actually impacting. I don’t have any idea. I thought I would bring the data I do have to committee.
+
+MG: So this is a problem statement more than anything else. I’m not married to any of my solutions. I just wanted to highlight that this is a problem; we’ve seen it multiple times, across multiple engines, and it seems like something that we could do something about in committee. I would love to hear people’s suggestions for other answers, solutions, problems—heck, even suggestions for telemetry to drive this. I’m open to that. But, yes: Stage 1? And I guess questions.
+
+USA: Before we start with the queue, I would like to remind everyone that it’s a long queue. But, yeah, without further ado, first we have WH.
+
+WH: I just want to make sure I understand the previous slide correctly. Are you saying that one out of 20 `then` lookups find `then` on a standard proto?
+
+MG: No, no, this is a percentage. So this is roughly one out of 1,000 pages –
+
+WH: Yeah, but the total thenable percentage is 2%, and I’m dividing the two percentages.
+
+MG: The denominator on these is all the same: it’s roughly the number of page loads encountered. So on a given day, Firefox loads, you know, however many billion pages, and of the pages that get loaded, 2% encounter a thenable and 0.13% encounter a `then` on a standard proto.
+
+WH: So 1 in 20 pages that resolve any thenable resolve one on a standard proto?
+
+MG: WH, that’s not necessarily correct, because you could have more than one thenable on a page load.
+
+WH: Okay.
+
+MG: It is literally just a single bit of information from a page load. It does not have any indication of how often it happens. If you had a page that put a `then` on every single standard proto and resolved every single thing, it would still only show up as a count of one.
+
+WH: Okay, thank you.
+
+JHD: Yeah, so—you talked about three options. I’m just clarifying: for the first one, I assume we’d use the AO we added for `Iterator.prototype`, so that if you try to set `then` on a target that is inheriting from `Object.prototype`, it would just create an own property? Because that AO, SetterThatIgnoresPrototypeProperties, ignores prototype properties.
+
+MG: Maybe. I pulled out –
+
+JHD: You just may not have those details; it just occurred to me during the slides.
+
+MG: I was really just sort of highlighting these. They were proposals from when we were doing the spec remediation, and I thought I would bring them as examples of things that could be done. I don’t remember the exact details of how that was supposed to work.
+
+KG: It does have to be exotic, because you don’t want `'then' in Object.prototype` to start passing; it would have to be exotic. Anyway, my topic was: I do support Stage 1 for exploring this problem area. There’s definitely a lot of space for solutions or partial solutions here. I also wanted to hear your thoughts on the `Object.prototype` solution. Like, you proposed this alternative, which suggests that you thought there was a reason to do something else, and I was wondering –
+
+MG: Generally, from an engine perspective, making objects exotic is a pain, because it means that now you have to special-case an object, especially on the property definition and reading paths; making an object exotic has a cost. And `Object.prototype` is a very important object, so making it exotic feels wrong. It could very well be that we absolutely could do it and even make it work fast, but it just feels like the wrong approach. It also feels a little confusing to people, in a way that the promise resolution just sort of ignoring it feels slightly less so. I don’t know; it feels inharmonious to me, but that is really a gut feeling.
+
+KG: That’s very valid. On how it will feel to people: my hope is that no one will ever know that we do any of this kind of thing unless they’re already digging around in the guts of stuff, so I’m not super worried about whether something will feel weird as long as you’ll just never run into it. I’m okay with doing arbitrarily weird things as long as they are doable efficiently in engines and no one has to know about them unless they’re trying to do something strange like putting `then` on `Object.prototype` in the first place.
+
+MG: This is where I really wish I had split out the telemetry, separating `Object.prototype` specifically from all of the other standard protos. I did not, and I regret it. But you found these numbers surprising, and so this is my only other feeling here: I too hope that nobody runs into this, but these numbers already surprise me. People are doing weird stuff out there.
+
+KG: Yeah, agreed.
+
+MAH: Yeah. So we are generally interested in the issue of reentrancy with promises, and it wasn’t entirely clear to me from the presentation whether all the issues that you have found, the CVEs you have experienced and so on, are due to synchronous reentrancy when resolving thenables, or merely due to the fact that thenables exist and can get adopted. If, as I understand it, the issues are synchronous reentrancy while handling thenables, or custom logic running during the promise resolve algorithm, I believe we should explore that problem and see if there is a way of having a basically safe promise resolve that is guaranteed not to trigger any user code during that step. This is actually something we brought up a couple of years ago and were interested in trying to solve. I think this is a problem that is not specific to the spec or WebIDL and so on; it is also something that user code may want to protect itself against. So I would like to explore the more general problem of synchronous promise reentrancy triggered by thenables. And thenables are not actually the only trigger: there is also the `constructor` property lookup happening during promise resolve. It’s wider than the `then` property.
+
+MG: The constructor lookup thing is actually kind of interesting; I hadn’t really considered that, and I’d appreciate it if you’d open an issue on the repo that mentions it, because I will 100% forget by the end of this call. The one thing I would say, and I hope the bottom of the repo says this already, is that I do recognize that this also potentially fits into the general bucket of invariant maintenance, and opting in and out of things, that the stabilize proposal has been talking about.
+
+MAH: This is independent of stabilize. This would be an explicit promise resolve, so anybody who is interested in handling promises while knowing they won’t trigger reentrancy can adopt that operation.
+
+MG: Yeah, please open an issue; I think that’s a good point. The one thing I did say is that it kind of feels like this internal proto thing is the kind of magic that you could imagine wanting to give users access to via the stabilize proposal—you know, terminating the lookup for this sort of thing. But, as I said, I’m very much not committed to any particular solution. I’m more just irked by how many CVEs this has caused, and I would love us to come to a solution. As I said, it doesn’t have to fix every problem, but if it makes this twice as safe, that would be great; it just makes everybody’s lives a little bit easier if we can try to do that.
+
+MG: And the other thing I should mention here, which I didn’t put in my slides: there’s also the possibility that we decide that maybe TC39 isn’t the right venue for this, and that ultimately this is a problem that could or should be solved in the WebIDL spec; we could talk about that as well. Getting this out of TC39 is also an option. But for myself, this is a problem that seems relevant at least to the people in this room, so I thought I would bring it.
+
+MAH: And I want to reiterate: we’re very much interested in the general problem. I would like to generalize it beyond WebIDL and engine implementations to, in general, how you handle promise objects safely without reentrancy.
+
+KG: Most of the CVEs that I’ve seen aren’t about reentrancy of promise objects. They’re about things unexpectedly being treated as thenables when they weren’t intended to be thenable at all. It’s not that you made something you were expecting to await into a non-native promise, and that did something weird. The example that came up recently was the iterator result objects returned from async generators: they have `value` and `done` and inherit from `Object.prototype`, and you could unexpectedly make those into thenables by putting a `then` on `Object.prototype`. But they weren’t intended to be promises at all, so it wasn’t exactly promises being reimplemented that was the problem; it was things unexpectedly being thenable, which is a slightly different issue. Also, I want to point out—I put this in the Matrix, but in case you don’t see it there—there is a thenables proposal by Justin, who hasn’t been participating much, called faster promise adoption, that touches on some of this stuff, and I think for the specific problem of the `constructor` check there’s a possible solution in that repository. It doesn’t have any actual overlap with this proposal, but it is in a similar area.
+
+MG: Okay, thanks. I can’t see anything except my slides right now, but I will look when I’m done.
+
+JHD: So I have a couple of things. Real quick, I just wanted to ask about the telemetry. It sounds like you said this is just a single bit of information, but is it possible to get more information, like which standard proto object it was, things like that?
+
+MG: All things are possible; it depends on how much work you want to put in. In this case, I was taking the easy path, what we call the use counter path, which is basically: you name some event, and then when that event happens, you say “hey, it happened!”. Unfortunately it takes a bunch of overhead—it’s a lot of code to write—to get this to work, so adding “hey, this happened, and it was this specific thing” is a little more challenging. What I would do is take this particular bit and split it into two, to say “okay, it was on a standard proto” and, separately, “it was on `Object.prototype.then`”, to give me one more piece of insight. Longer term, if we do actually want to pursue an idea where we’re really, really concerned about web compat, I could start plumbing into the more complicated bits of where we see the `then` and which paths are being monkey-patched. I could do it, but it takes time and effort, and this was supposed to be quick—I did it in an hour and a half. It was not intended to be bulletproof, inarguable stuff.
+
+JHD: Okay, thank you, that clarifies. So my queue item: it sounds like you said part of your interest in option 3 was that it avoided making `Object.prototype` exotic, but if it has that slot, it’s exotic, so it doesn’t seem like it avoids that. And then, separately, for objects that have the slot, there does need to be some sort of way to check that they have that slot—some form of brand check, whether direct or not. I’m not sure –
+
+MG: From the specification side, maybe. From an implementation standpoint, it becomes a very easy check: I am walking the proto chain; is my object `Object.prototype`? Stop. The implementation does not have a real reified internal slot; the slot is a fiction to let us talk about this. That’s it.
+
+JHD: So obviously the details of this are Stage 2 stuff. I wanted to raise the thought that if you’re just trying to refer to the current realm’s `Object.prototype`, that’s fine, but if it’s a cross-realm slot thing, then it definitely makes the object exotic and needs some sort of brand check. But either way, I agree with everything that’s been said about Stage 1, whether it’s for the problem of promises or even the more general problem of reentrancy and evaluating user code. Regardless of pursuing this, though, it seems prudent for WebIDL to consider producing null-prototype objects instead of standard objects, because –
+
+MG: I think that ship has sailed too far; like, that particular ship is gone. I would be shocked if that was web compatible.
+
+JHD: I mean, perhaps only for new objects it produces. But since WebIDL itself is just a spec document, it seems worth trying to stop the bleeding if there’s something in it that is subpar. And I still think we should be pursuing this problem here—just, in parallel, that’s my suggestion to consider.
+
+MG: Yes.
+
+JHD: I’m done.
+
+LCA: Yeah, on the use counter thing: I think Firefox does not track which pages it actually saw the use counter increment on, but Chrome, for example, does. So if you add a use counter to Chrome, it would give back the list of pages the use counter actually hit, and then you can do more investigation, looking at the source code, to see what is actually happening. So maybe it’s –
+
+MG: I will probably not hack use counters into V8 for the purposes of this, but I would love it if somebody else did, especially given that that is available. That seems nuts to me, but yes, that is a challenge that I have right now: I can tell you that there are these 0.13% of pages that load and do this thing, and I cannot tell you what they are. I have attempted to find some by rummaging around on the Internet with an instrumented browser myself, and I have yet to do so.
+
+MG: I was surprised to discover that YouTube apparently uses an actual thenable in the middle of loading. I don’t know what for, but it does. But that’s all I can say right now.
+
+USA: Next we have a reply by DE.
+
+DE: So I’m a little bit skeptical of this assertion that there must be brand checks for anything involving an internal slot. I agree that for a lot of the brands that we add, we should make check predicates for them, but we as a committee have not adopted an overall stance on this.
+
+JHD: That’s incorrect. At the beginning of 2015, when I proposed trying to remove `toStringTag` from ES6, the committee had consensus that they would not remove it, but that all built-ins would have brand checks, as they did at the time—there was an oversight about Error and arguments objects—and, moving forward as well, all new built-ins would have brand checks. We have maintained that for all new things that we’ve added, and that’s also part of the motivation for `Error.isError()`. As far as I’m aware, there hasn’t been consensus to change that consensus.
+
+DE: And different people have different interpretations of what happened then.
+
+JHD: Certainly.
+
+DE: And I think before asserting that the committee has a policy, it would be good to do as YSV proposed a while ago and bring it up for consensus, like I have on the agenda for a different design principle, as a particular design goal for the committee. Until then, I think any assertion that something must be some way would be better stated as “I would like it to be like this”, because –
+
+JHD: I did not say that the committee has such a policy.
+
+DE: You said it must be this way.
+
+JHD: Yes. Implied is “because I feel it must be this way, and I would object if it were not”, as everyone else in this room has today and will continue to do whenever they have an objection. I appreciate your note on my wording, and I do agree that having such a design goal document would be helpful.
+
+USA: So we’re at time. With that, MG, would you have time to stick around if we make an extension, if the committee is interested? Or do you want to come back to this later?
+
+MG: How much is left on the queue?
+
+USA: There are seven topics on the queue.
+
+MG: I have some time—like, another 15 minutes, but that’s about it.
+
+USA: All right. That gives us arou–
+
+MG: One second. I shouldn’t speak before I know for certain—yes, I have some time.
+
+LCA: I support Stage 1; I think it’s great that you’re doing this investigation. But this slot would be set outside of JavaScript, and any polyfilled built-ins would not be able to set it, which would be kind of unfortunate. Additionally, at least in Node.js and Deno, a lot of what WebIDL specifies is implemented in JavaScript itself, so it becomes not impossible but very annoying to have to set this flag on objects that are actually created in JavaScript. So I’m just somewhat concerned about option 3, unless there’s also a way to set the flag from JavaScript—which is then probably closer to having a `Symbol.thenable` or something.
+
+MG: I haven’t thought about polyfilling at all; that’s an excellent point. I encourage you to open an issue on the repo so I don’t forget about it. As I said, I’m not married to any solution. I sort of don’t love the idea of adding another symbol, but I see your point about polyfilling.
+
+USA: In the queue we have SYG.
+
+SYG: What was my—yes: given that for the import defer thing we are already special-casing `then`, that says to me that we have some precedent for considering `then` a special evil that might be worth special-casing. And while I also hate the reentrancy problem and would love to solve it, the bugs that I have seen were surprising not so much because of reentrancy, but because thenables let things that aren’t promise-shaped flow into places that expect promise shapes, and that is the source of the bugs. In my experience the problem is not user-code reentrancy in general; it’s that once you run user code, it invalidates your assumptions about loading things from certain slots and about the shape of the thing you’re expecting. I would like the goal for this proposal to be about preventing that class of bugs, more than preventing the general class of reentrancy bugs. I support Stage 1.
+
+SYG: I guess the point is that I don’t really have qualms if it comes down to it: if we think the most bang for the buck is to special-case `then`, even as something super weird, and that kills this class of bugs, I’m happy with that. We’re already doing it with import defer, where the namespace object acts very weird in that one case as well.
+
+MG: Okay. I agree.
+
+MM: So, you’re right, defer does special-case `then`, but it’s a very, very contained special-casing, in that the purpose of defer is to postpone when the module gets evaluated and then to evaluate it on demand, and the special case for `then` is just about how early that on-demand evaluation happens. It’s not a special-casing that’s going to surprise many people. But you’re certainly correct. I see NRO wants to clarify; please do.
+
+NRO: So the special case for `then` in that proposal is not actually special-casing how promises work; it’s that deferred namespace objects don’t have a `then` property. That’s the only special-casing: in the eventual model, the object just doesn’t have a `then` property. It avoids this problem by making sure it can never happen.
+
+MM: Thank you. I had missed those details; I think that makes the case stronger. So, I am very interested in the reentrancy. I take seriously the point that KG made, that a lot of the CVEs here are not about reentrancy. Nevertheless, as a Stage 1 proposal, I would very much appreciate the goals being stated broadly enough that the reentrancy problem can be addressed, if it can be addressed pleasantly. I believe it can; and if these other approaches all turn out to be infeasible, as I suspect they are, we should see what we can do that is compatible with the Stage 1 problem statement and includes the possibility of tackling the reentrancy.
+
+MG: My preference is to keep it narrow, but I don’t really have the whole picture in my head of what the reentrancy problem looks like and what the scope of solutions looks like. This is me saying I don’t know yet. If someone can open a clear issue on the repo, I can think about it, and I’m willing to keep pushing on this. If there is a nice harmonious solution that kills two birds with one stone, cool, I’m totally down for that. My preference is to address the concrete CVE-generating problem, and if the reentrancy thing can’t be done in a nice way within this proposal, we should split it into a separate proposal and figure it out that way. But in the short term, I’m totally fine with piggybacking for now.
+
+GCN: I’m curious what the scope of this proposal is defined to be. I’m generally in favor of curtailing the power of thenables; the first proposal I ever made to TC39 was in that vein. What I don’t understand is: is this proposal specifically about the promise resolve operation, or is it more general? What is being targeted here?
+
+MG: My goal is basically: if somebody has put a `then` on `Object.prototype`, promise resolution shouldn’t invoke that thing, because engine implementers forget about that behavior too often, and this leads to bugs where an attacker is able to do something like force a GC to happen inside the `then`; then execution returns to the C++ code, objects have disappeared, and you get problems. That’s really my high-priority scope; that’s what I would like to address. I proposed a slightly more general solution because I think there is a nice harmonious design in it, though I’m worried from the telemetry numbers that it may not be as web compatible as I hoped. There is some talk about the reentrancy problems with promises that I can’t really speak to without having more time to think about them. But the most concrete thing that I want is that `Object.prototype.then` should not cause some random object to become a thenable and invoke user code.
+
+GCN: Okay. Just as an example of something that I assume is out of scope of this: when you dynamically import a module and write `export function then` inside of the module, that function is called. I assume that is an example of something we won’t attempt to fix in this proposal.
+
+MG: No.
+
+GCN: Under any proposed solution, whatever direction this could go in—basically, what I want to understand is that there are a lot more weird examples like that. Which things are in and which things are out?
+
+MG: Mostly I would say all of that module resolution machinery is out. But this comes down to: is there a harmonious design that will be efficient to implement, that will address the problem, and that ideally also addresses users’ expectations a little bit? I have no idea what people would expect if you exported `then` from a module. Is that expected? Is that something people are running into and being bitten by? I have given it zero thought. But if it is an instance of this more general class of problem, sure. I think we can shave this down a little bit over time as we potentially try to get to Stage 2.
+
+USA: We have a point of order: five minutes—actually four now—remain in the time box. Next we have DE.
+
+DE: I want to request a couple more minutes on the time box, just to get through the rest of the queue. So, I have a couple of other ideas for how this could be resolved. If, in option 3, we care mostly about promises created by WebIDL, then WebIDL could create a non-writable, non-configurable own `then` that is the original `then` method, and take that rather than reading it at a later point. That’s one thing to consider. Another one, if we want to solve the more general problem of things being unexpectedly thenable, would be to make the lookup of the `then` property a special lookup that doesn’t bother to do a read if it reaches the original `Object.prototype` of the current realm. I guess this would benefit from the telemetry data that you have here. If that were expanded so that, for example, anything with a null prototype gets skipped, that would solve the module thing—though I think it’s too late for that, web-compatibility-wise; people are excited about that pattern for modules in particular, and may be using it. Anyway, I’m very happy that you’re investigating this. You know, subclassing built-ins was kind of a mistake; I’m glad that we’re undoing it where it causes especially big problems.
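+
+One loose reading of DE’s first idea, sketched. `pinThen` is a hypothetical helper, and real WebIDL integration would do this at object-creation time:
+
+```js
+// Pin the original `then` as an own, non-writable, non-configurable
+// property on a promise handed out by the platform, so later
+// monkey-patching of prototypes cannot change what resolution observes.
+function pinThen(promise) {
+  Object.defineProperty(promise, 'then', {
+    value: Promise.prototype.then,
+    writable: false,
+    enumerable: false,
+    configurable: false,
+  });
+  return promise;
+}
+
+const p = pinThen(Promise.resolve(42));
+// Even if Promise.prototype.then is replaced afterwards, `p.then`
+// still refers to the method captured above.
+p.then((v) => console.log(v)); // 42
+```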
Anyway, I’m very happy that you’re investigating this; you know, subclassing built-ins was kind of a mistake. I’m glad that we’re undoing it where it causes especially big problems.

USA: We have a topic by RGN.

RGN: A follow-up to SYG’s last topic: he said he didn’t want to focus on reentrancy as the problem, and went on to describe a scenario that to me seems more general than reentrancy. We often use that term as a shorthand, but the reality is it’s really about an interleaving, where code runs that other code wasn’t expecting and can have effects. So in that case, the boundary was from implementation code to user code. But the same kind of interleavings affect user code to user code. I’m a little hesitant to carve out the narrow space of “reentrancy” when we really are talking about a class of problems that is not just analogous but in fact more broad, because non-reentrant code can still have effects at a distance. That’s exactly the kind of thing that we hope to avoid.

SYG: Sorry. What was the question?

RGN: I guess I’m looking for a clarification of how you think the scenario you described is different from this generalization of reentrancy.

SYG: So I think MM’s response I agreed with, which was that—so my concern is that I consider solving the general reentrancy problem to be harder, and I have a much less clear idea of what that means and what the timeline there is. On the other hand, we know that the thenables corner is a sharp corner for security bugs. So the value here, for this proposal, I would like to be: if you have to choose between solving the general reentrancy problem and solving it for this thenables corner that we keep getting bitten by, I would like to prioritize solving that, even if we couldn’t solve the reentrancy as part of this proposal.

RGN: What if this doesn’t have to make the choice?

SYG: If it doesn’t have to make the choice, that’s great. MM said if we could find a harmonious way that kills two birds with one stone, he would like that, and I would like to solve the user-code interleaving problem too. My hunch is that the thenable problem in itself is probably more tractable than the general problem. If it turns out not to be, and we can solve both, that would be great and that is win-win.

RGN: That response is helpful, thanks.

USA: Next we have Chris on the queue, who supports Stage 1 and spending time exploring the problem space. And that’s it. So MG, would you like to ask the committee for consensus on something?

MG: Do we have consensus on Stage 1? Sounds like the answer is yes.

USA: I think so as well. Let’s give folks a few minutes to respond, if they have any other comments.

MM: I just want to confirm that we are generalizing the problem and considering the problem statement to be general enough to cover reentrancy? I support Stage 1 with that understanding.

MG: Yeah. I’m willing to look into it more, and then we’ll look at the set of solutions we can come up with and see if there’s a middle ground, and we’ll go from there.

MM: Okay. And as you suggested, we’re perfectly happy to do this and continue the discussion.

SYG: Sorry to interject. I was typing something. MM, I want to double check; we’ve gone back and forth a few times now.
I want to double check that if, as part of the exploration, there does not exist a good solution for both the reentrancy problem and this problem, that doesn’t tank this proposal. I think this proposal is worth pursuing even if, after spending some time, we don’t find a good general solution to reentrancy.

MM: If this proposal’s problem could be solved in a way that is worth the cost—the existing approaches that were mentioned, none of them seem feasible in terms of regularity in the language—but this is a Stage 1 exploration, so even if the reentrancy part is not there, I don’t think I would block Stage 1 based on the infeasibility of the concrete approaches. If the problem can’t find a pleasant solution, that would be fine.

MG: I want to look at the more concrete—sorry, the broader case, mostly because I don’t have a good definition of all of these pieces in my head right now. People are using “reentrancy” in ways that I don’t think match how I think of reentrancy, and I need to read some background here. I’m willing to approach it, but I did say my priority is: let’s make thenables a little less powerful. If it helps with reentrancy, cool, we can put it in this proposal and take a look at it. If it goes badly and we can’t find a harmonious solution, I would like to split it out.

MM: In response to SYG’s question to me, I think there might be a misunderstanding. I’m not saying there’s a general solution for the general problem of reentrancy for the whole language. That would be very—I would be flabbergasted. Different reentrancy problems call for different solutions. What we’re suggesting is that there’s a more feasible, more constrained approach to promise reentrancy than the ones that were concretely suggested in the proposal. And that’s specifically having to do with a safe form of `Promise.resolve` and fixing await to use the safe form. Obviously I will be clarifying that on the issue list. But that’s the only sense in which it’s a more general solution to reentrancy. It’s not solving reentrancy problems in general.

SYG: Got it. Okay. I think that satisfies me. Yeah, I think all I was saying is: don’t let perfect be the enemy of good. This is a real problem.

DE: I just want to agree with what SYG was saying; in particular, reducing the likelihood of CVEs is worth a lot, and if that means that we end up having more complexity, I think it’s worth that cost. So MM was saying he found it unlikely that something would be worth this complexity cost if—actually, I wasn’t sure under which conditions. But I would be okay with taking something that’s a bit messy if it reduces the likelihood of CVEs.

KG: Strongly agree with Dan. The cost of CVEs is paid by users of the web. The cost of the language being a little more complicated, especially in a weird dark corner that no one looks at, is paid only by the people in this room.

USA: That was all of the queue. Matthew, do you want to respond to that or make any final remarks before we move on?

MG: No. I’ve already overshot my time box extremely badly. I would prefer to stop.

### Speaker's Summary of Key Points

* Broad support for making some kind of change here, even if it’s a bit messy and unprincipled, if it fixes the risk of vulnerabilities.
* Some interest in more broadly attempting to solve promise re-entrancy. Matthew is OK with taking a look at this as part of this proposal, but some on the committee also would prefer not to sacrifice “good” for “perfect”.
### Conclusion

* Stage 1 achieved

## `Math.clamp` for Stage 1 or 2

Presenter: Oliver Medhurst (OMT)

* [proposal](https://github.com/CanadaHonk/proposal-math-clamp)
* [slides](https://docs.google.com/presentation/d/14QGuyCHlsSr4ZSCkbFuaFZk8793EAMS1nAkdW_csLhA/edit)

OMT: I would like to propose adding `Math.clamp`, for clamping a value between two numbers. Mostly because it’s in many codebases, and not needing the boilerplate would be a better experience. It should also improve performance: instead of having, as the example shows, a `Math.min` and a `Math.max` call, it would be one call, and hopefully a single call helps with optimization. Other languages all call it clamp. There’s some arguing over the arguments—whether it should be (min, val, max). There’s an NPM package called clamp, and a lodash implementation, which get over a hundred thousand downloads; these use (val, min, max). Learning from these, the name is essentially standard, and so are the arguments.

OMT: So I propose doing (val, min, max) for now. No coercion, as is the direction modern proposals go. If the value is NaN, just return it. That doesn’t comply with the coercion suggestions, but it makes sense for these functions.

OMT: So I propose moving to Stage 2, which might be a hot take. But I think it matches the process, because there’s a preferred solution: I think the language should have this as `Math.clamp`. The design may change significantly; that’s allowed during Stage 2. There’s already spec text and a proposal document and everything. I can share the spec text.

USA: The spec text is visible.

JHD: I definitely support this. I actually already reviewed the spec as well, and I volunteer to be a reviewer if it were to achieve Stage 2. I think this is great. I think if the only concern from the room is the argument order, that’s definitely something to be resolved within Stage 2. My personal preference on it is what it is right now, because I don’t have any familiarity with using it in CSS anyway, and everywhere else I have seen it on a computer in my life it has been in this order. That’s also the way I describe it in English: clamp X between Y and Z. And that’s it.

NRO: So I support this for Stage 1, or even 2, I guess. But for the order of the arguments, I think we should try to match CSS more than what other languages are doing. Nobody will be using those other languages together with JavaScript, while people will be writing clamp in JavaScript and in CSS for the same application. It’s better to have the two functions on the single platform aligned than not.

??? (unknown): I see the confusion. Would you still support this for Stage 2?

NRO: Yes.

??? (unknown): Thanks.

LCA: I think we should—we have a bunch of comments on the same topic; we should do them all in this topic. All other programming languages use (val, min, max); we should not diverge from that because CSS does something weird.

MF: I support CSS order. I also don’t think we’re going to come to an agreement on that. Back to my point: I think we should explore the prototype method that was suggested in some of the issues, like `Number.prototype.clamp`, where the this value is the target and then you pass a min and max—which hopefully we can at least agree will be min first and max second, though I’m not sure at this point with how the conversations have gone. I still think Stage 2 is appropriate even with that level of design change still up in the air. So I would not oppose Stage 2 advancement.
I would like to at least, during that stage, see that prototype method explored a little bit more.

SFC: Yeah, mostly just to echo what MF just said: I think the slides—OMT finished the slides in two minutes and is asking for Stage 2. There’s still a lot of design space here. The prototype function is one part of it, NaN handling is another, and ordering is another. I mean, I think it’s a fine thing to do. The motivation is basically, well, look, there are all these other libraries that do it, so therefore we should do it—which is usually fine. It seems rushed to skip to Stage 2. I won’t block Stage 2, but it seems like there’s still quite a bit of design space.

OMT: I agree about the design space. My main argument is that, according to the process document, that’s fine for Stage 2. As far as I know, Stage 1 is about deciding on the problem, and I think everyone in this room agrees that having this makes sense; Stage 1 by itself is not deciding anything.

WH: This mostly looks good. There are two controversies here. One is if the value is NaN — I think that returning NaN is the right answer, but we should discuss that. The other thing that bothers me more is that `Math.clamp(x, 0, -0)` throws, which seems strange since +0 equals -0.

OMT: I originally wrote that spec text around Tokyo, and I think I spoke with Troy (?) and someone else.

WH: Line 6 of the algorithm on the currently displayed slide.

OMT: I think that was decided to avoid confusion. I’m open to changing it if people think it’s better.

WH: None of the existing implementations would throw in that case. The result should just be, I guess, +0.

OMT: I’m willing to do that.

SFC: I just think less-than operator semantics are probably what we should follow in terms of minus and plus zero. If we want to be stricter, we could.

EAO [on queue]: +1 for Stage 2

SYG: I don’t really have any complaints about Stage 2 here. But I do want to express—urge caution, I suppose, about the “faster” point. I’m skeptical that in production engines this will be meaningfully faster. Probably not until you hit the optimizing tier; and even in the optimizing tier, you could do a bunch of stuff today if you see a max-of-min pattern at that point. I still think it seems a good thing to have, given that it is a stand-alone operation that’s easier to read intent into than max and min. So that’s fine. I just don’t want to oversell the faster bit.

OMT: Yeah, I agree. I just view it as a potential bonus; it’s not a potential downside.

KM: I agree. I think other engines probably will see through this and convert it into the same optimal code.

KG: I want to call people’s attention to the NaN issue: what do you do with NaN for these inputs? It’s consistent across other languages that NaN for the value argument just means the result is NaN. There’s not nearly as much consensus for NaN in the min and max arguments. I kind of prefer throwing, because I like rejecting invalid values, but I see the case for just returning NaN in those cases as well, because that better matches what you would be doing otherwise. I just want to call people’s attention to this question; we can absolutely resolve it later.
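For readers of the notes, here is a minimal sketch of the shape under discussion, assuming the (val, min, max) order and the "NaN in, NaN out" option KG mentions. The committee has not settled the NaN or ±0 semantics, and the no-coercion `TypeError` below is likewise only one of the options raised:

```javascript
// Minimal illustrative sketch only — not the proposal's spec text.
function clamp(val, min, max) {
  // No coercion: reject non-Number inputs outright (one option discussed).
  if (typeof val !== "number" || typeof min !== "number" || typeof max !== "number") {
    throw new TypeError("expected Numbers");
  }
  // "NaN in, NaN out" option: any NaN input makes the result NaN.
  if (Number.isNaN(val) || Number.isNaN(min) || Number.isNaN(max)) return NaN;
  // The boilerplate the proposal would replace:
  return Math.min(Math.max(val, min), max);
}

clamp(5, 0, 10);   // 5
clamp(-3, 0, 10);  // 0
clamp(NaN, 0, 10); // NaN
```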
KM: If you do any validation stuff—once you have NaN checks and throwing, I think you might lose a lot of your performance, because you do a bunch of checks. If you just allow the weird behavior, it matches what people would write by hand most of the time; if they care about performance, the throwing version is going to be slower than that.

MLS: I was going to reply, but KM’s reply covered both my points almost completely. There are seven checks for exceptions, and that’s a lot of code you need to execute to make that happen.

DE: This proposal seems very useful for everyday coding. The details we’re talking about are important to iterate on, and Stage 2 I think makes sense as the time to iterate on them. I have other opinions, but they don’t matter that much.

MM: Just bringing up that the notion of clamp makes as much sense for BigInt. That’s not an objection; I support this going even to Stage 2. I thought I would raise it to get your thoughts.

OMT: I was talking to JHD; there’s an issue in the proposal repo about BigInt. It’s more a question of whether the `Math` functions do it—I’m definitely open to doing it if there’s consensus.

JHD: I mean, in general there’s contention around whether some of the math methods should work for BigInts. Some of them obviously can’t. Everyone is not equally convinced that some should support BigInts and some shouldn’t; some think it should be all or nothing.

MM: I have the same misgivings. I support Stage 2.

SFC: I agree it makes sense for BigInts, and it also makes sense for decimal and all of the numeric types. Does it make sense for dates and Temporal objects? Does it make sense for anything else that can be compared in the language? It’s an interesting question to ponder. But a question that is important to think about is: let’s just say we limit it to Numbers and BigInts. Fine. Now we have to do a brand check in the `Math.clamp` function. If we have prototype methods, we don’t have to do that type of thing. That seems like a design question that should be answered quite early in the design process.

MLS: For other types we should probably put it on the prototype for those types instead of putting it on `Math`, because `Math` is for Numbers.

DE: As MLS said, `Math` is for Numbers. We explicitly decided during the design of `BigInt` not to extend `Math.min` or `max` to BigInts. The idea is you’re not doing generic programming over different numeric types. Anyway, I don’t mind the idea of doing it as a method on `Number.prototype`, but including it in `Math` would also be quite consistent with the design of the rest of the `Math` namespace. I think it does make sense to start with only Numbers for this. It’s the most useful basic case.

JHD: I want to add that a benefit of putting it on the prototype is that it resolves all of the ordering questions. It makes the order the same as this proposal, actually, because the receiver is the first implicit argument. So that might be an expedient path for the proposal.

SFC: This is a slightly bigger one. The slides were very thin on motivation; there was one slide that basically said, look, there are these languages that implemented it and these NPM modules with the downloads. What is the actual use case? When should I be using clamp? A lot of times when I’m using clamp, I actually don’t want to clamp. I actually want to maybe take a number and put it on a distribution or something.
Like maybe I want it to—if I’m trying to clamp between minus one and positive one, maybe I want to get a value that is 0.99-something depending how close to one it is. I don’t know. Clamp is a very easy tool to reach for. I can see the argument that it’s useful in a general-purpose programming language, right? But if it’s a mathematical operation on floating-point numbers, is it always the right tool for the job? When we put something into the standard library, we should put in things that are the correct tool for the job. I think that’s the principle that we held with Temporal and that we’re looking at with decimal and other things. We’re trying to nudge developers to do the right thing, right? Just because the clamp module on NPM is popular does not mean it’s the right thing to do. That’s something that I would really like to see in these slides. Technically speaking, by the TC39 process, that’s a Stage 2 concern: answers to that question we should have before we say we’re committing to adding the feature, right? I understand there’s…

SFC: These slides are showing me code examples. But again, it’s showing me: look at all of these modules that do the thing, here are some code examples. It’s not really showing me what problem I am trying to solve. That’s a different question. This is good evidence, though.

OMT: I agree that it could encourage bad usage, but at least personally I have written the general-purpose clamp probably double-digit times in the past five years. That is motivation. I agree it would be nice to get some concrete uses of why a clamp is useful.

WH: SFC, if you’re proposing that `Math.clamp` turn out-of-range values into 0.99 when clamping to the interval [-1, 1], depending how far out of range they were, this will violate one of two math principles. One principle is value monotonicity, and the other is that values between min and max should not change. You can’t have these two be true at the same time and do this kind of smoothing. So I would not support extending this to something more general.

SFC: I didn’t mean to propose that. What I’m saying is that I want to see the use cases. There might be cases where these semantics are the correct ones to apply, and maybe use cases where they are not, and some other semantics are the ones that are actually correct. But since this is an easy function that’s just right in front of you, you might choose to use it even when these are not the right semantics. Are these the correct semantics 90% of the time? 70% of the time? Or is it 20% of the time? That percentage should also factor into whether we should add it to the standard library.

DE: This is a very simple proposal. I think that is a good thing about it. And I think the analysis to determine how to work out all the cases—that’s important analysis, but it’s also relatively simple. Overthinking it, or prematurely generalizing it, won’t necessarily lead us to better results.

SFC: I totally hear what DE said. I also think it is our responsibility to do that legwork. Simple proposals are good, but simple proposals are not always the correct proposals. A simple proposal for Temporal would have been to just have a `Temporal.Instant` type, but we ended up spending a lot more time to figure out what actually solves the real problem that developers have.
And looking forward, there are a lot of simple proposals that I would love to just add, but the simple proposal is not always the right solution. Sometimes it is. Later in the agenda we have the stable formatting update, where we’re proposing the simple solution, despite some flaws it has, because we think it’s the right solution. This is a question we should answer; it’s our responsibility to answer it. If we’re just publishing a library on NPM, the bar is lower. As a committee, the bar should be quite a bit higher.

JSL [via queue]: + stage 2… think we can/should have a separate discussion about Math support for Bigint. Definitely needed

EAO: It’s come up a couple of times here to consider adding clamp on `Number.prototype`. I just want to note that we have nothing like clamp on `Number.prototype` today; the methods we do have are almost all `to`-something methods producing a string. Starting to add methods like this on `Number.prototype` would, I think, be a much bigger change than this little proposal is. I like this thing. I think it should go to Stage 2 as it is.

SFC: Just to summarize what I said before: I have concerns about Stage 2. It sounded like there is a lot of support for Stage 2 in the room. I didn’t discuss this with the other Google delegates, so I don’t think I have the authority to block Stage 2. Stage 2 is okay, but I have the concerns I voiced. Thank you.

[Consensus?]

NRO: If we are not sure about Stage 2, given that we have some somewhat significant design space when it comes to prototype versus static method—we’ve been without clamp for many years, and one more meeting to get to Stage 2 won’t in any way –

OMT: I’m happy to go with Stage 1. I think strong enough concerns were raised.

CDA [via queue]: +1 for Stage 1. Indifferent on Stage 2.

KG: Just the same thing. It feels like there’s a fair bit of design space left to be going to Stage 2. I’m happy with Stage 1.

OMT: I originally proposed Stage 2 because I didn’t consider these concerns—that’s why we have the committee. Is there consensus for Stage 1?

DLM [via queue]: support Stage 1. And share concerns about Stage 2.

LCA: I’m not sure why we don’t consider Stage 2 still reasonable for figuring out the exact solution. Reading the process document: for Stage 1, the committee expects to devote time to examining the identified problem space, the full breadth of solutions, and cross-cutting concerns, and the outcome should be a particular solution space—I think that is done. For Stage 2, the committee has chosen a solution space, and the design is a draft that may still change significantly—that is exactly what this is. I think, by the process, we are in agreement for Stage 2. For the people who are not in favor of Stage 2, could you clarify what makes you think that this should be in Stage 1 and not in Stage 2?

USA: We have a couple on the queue. Be quick.

JHD: Setting aside the spec text tweaks, like the NaN stuff, that would often happen in Stage 2, the only two possible shapes are whether the heading on step 1—or section 1—says `Math.clamp` with three arguments, or `Number.prototype.clamp` with two arguments and the value as the this of the method. I don’t think we have ever considered the location of the function a major semantic before. I would agree with LCA that this is ready for Stage 2, even if we have to have this location discussion.
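To make the two shapes JHD describes concrete, here is a hypothetical sketch of the prototype variant, purely illustrative and not a proposed API:

```javascript
// Hypothetical Number.prototype.clamp, for comparing the two shapes.
Object.defineProperty(Number.prototype, "clamp", {
  configurable: true,
  writable: true,
  value: function clamp(min, max) {
    // The receiver is the value, so the argument-order question disappears.
    return Math.min(Math.max(this.valueOf(), min), max);
  },
});

(42).clamp(0, 10); // 10 — receiver-first, matching the (val, min, max) static order
// versus the static shape: Math.clamp(42, 0, 10)
```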
KG: I mean, I guess it just depends on whether you consider the location of the function to be major semantics. I think for a larger proposal we probably wouldn’t. But since this proposal is so small, it feels like basically the whole content of the proposal is still to be decided. I don’t feel very strongly about this. When we say we’ve worked out all of the major semantics, I usually take that to include stuff like: are we adding a new `Number.prototype` method or not? That feels like a big question to me. Again, that’s just vibes. I don’t feel super strongly; if people want to go to Stage 2, I’m fine with that. It just feels like a large thing to leave open going into Stage 2.

LCA: I want to reply. Sure—the process document specifically says that in Stage 2 you work out minor API details such as API names, and I think this could very well be considered like the name of the API. Is it on Math or on the prototype?

KG: It doesn’t feel like the name to me. The placement of an API is a bigger question than its name.

SFC: On prototype versus static function, I can see an argument either way about whether that’s a Stage 1 or Stage 2 concern. The thing I consider more of a Stage 2 concern is that when we say Stage 2, we say, quote, “the committee expects the feature to be developed and eventually included in the standard”, end quote. That means we agree we want this in the language. But I heard two threads that make me question whether we agree on that. One is whether this is actually more performant than `Math.min`/`Math.max`, which MLS raised, and the other is whether this is the right tool for the job. Those are concerns that should be resolved before we go to Stage 2 as far as I’m concerned; the question of where the function lives could be considered either way. I hope that answers the question.

DE: I’m wondering: does anybody else have further requests, or does anyone who spoke already have further questions, for OMT’s next stage of research? I think we heard a lot of good ideas, but some of the requests were kind of open-ended. If you have any specific research action items that you think should be taken, that would be good.

OMT: I was going to say, I guess the original reason I pushed for Stage 2 is that I didn’t consider this. More than happy with Stage 1 today. Also, please file issues on the proposal.

USA: Sounds like the committee is overwhelmingly in support of Stage 1. Nothing else on the queue. So you have Stage 1.

### Speaker's Summary of Key Points

* Generally agreed having a clamp function is good
* Concern for stage 2 raised over `Math.clamp` or `Number.prototype.clamp`
* Order of arguments was mostly agreed to be (val, min, max)
* Further research into specific usage was suggested

### Conclusion

* Consensus on stage 1
* Decide upon Math or prototype before advancing further

## Immutable ArrayBuffer

Presenter: Mark Miller (MM)

* [proposal](https://github.com/tc39/proposal-immutable-arraybuffer)
* [slides](https://github.com/tc39/proposal-immutable-arraybuffer/blob/main/immu-arraybuffer-talks/immu-arrayBuffers-stage2-7-as-presented.key)

MM: As you’ve heard me ask before, I would like permission to record during my presentation, including audio Q&A that happens during the presentation itself; then at the end of the presentation, when I break for questions, we will stop the recording. Is that okay with everyone? Does anybody object? Okay, great. Go for it.

MM: Okay. So last time, we got Immutable ArrayBuffers to Stage 2. Thank you, all.
We’ve been working hard on it since then, and this meeting I want to try for Stage 2.7. I’ll give you a status update and tell you what has happened since we got Stage 2.

MM: So, recap: this is the proposed API change as of the Stage 2 request, which has two new members: a transferToImmutable method and an immutable accessor. The transferToImmutable method produces an ArrayBuffer of the immutable flavor. The immutable accessor, of course, is true for an immutable ArrayBuffer and false otherwise. Still recap: this is, in some sense, the punch line of the proposal, which is that the immutable ArrayBuffer enables freezable TypedArrays. Part of the proposal is a change to the spec text for TypedArrays such that, during the construction of a TypedArray on an ArrayBuffer, if the ArrayBuffer is immutable, then the indexed properties created on the TypedArray are created as non-configurable, non-writable data properties; whereas otherwise they’re created as configurable, writable data properties that cannot be made non-configurable. So with this change, the TypedArray as a whole is still born not frozen, because it’s extensible and you can add properties to it, but it means that you can freeze it. It was the previous refusal of the indexed properties to become non-configurable that prevented the freezing of TypedArrays.

MM: Last time, this was the road to Stage 2; we got all of these. To get to Stage 2.7, the most important thing is resolving all the normative issues. There were three normative issues that we resolved and closed. First: should transferToImmutable take an optional newByteLength argument, to parallel the transfer method and the transferToFixedLength method? We decided that it should; that’s a change. Second: should the new accessor be named immutable or mutable? There were interesting arguments each way. We resolved on immutable, which is what the Stage 2 proposal already had, and we did that for easy upgrade—basically for feature testing: that way, if you write code that checks whether a thing is immutable, and you’re on an older version of JavaScript that does not implement the proposal, the answer will be falsy. Third: should we add a method sliceToImmutable, by analogy with slice? The strong motivation for this is that, without heroic implementation tricks, if you have an immutable ArrayBuffer and you do a sliceToImmutable on it, it can give you back a new immutable ArrayBuffer with zero copy, given that the original was also immutable. That can be a window into it, and that enables you—as we’re going to get to with structured clone—to transmit that between agents, zero copy, but without giving the other agent access to more than the window in your slice.

MM: Okay, there is a fourth normative issue that we put on the table last time, which is order of operations, including when to throw and when to silently do nothing. We purposely did not close this, although we wrote the spec text for concreteness using our preferred solution; that is not a strong stance that we’re taking. We’re leaving this purposely open because we want it guided primarily by implementer feedback. If one order of operations allows implementers to do some simple and high-speed thing, and another order of operations interferes with either existing implementations or optimization opportunities, we want to take all of that into account before resolving this issue.
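For readers following along, a sketch of the recapped API under the draft semantics described above; no engine ships this yet, and the details may change before 2.7:

```javascript
// Sketch of the draft semantics, as recapped above.
const buf = new ArrayBuffer(8);
new Uint8Array(buf).set([1, 2, 3, 4, 5, 6, 7, 8]);

const immu = buf.transferToImmutable(); // buf is detached afterwards
immu.immutable;                         // true
immu.sliceToImmutable(0, 4);            // immutable window, zero-copy-capable

// Because index properties of a TypedArray over an immutable buffer are
// born non-configurable and non-writable, the TypedArray is freezable:
const ta = new Uint8Array(immu);
Object.freeze(ta);                      // works, unlike today's TypedArrays
```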
MM: So with those three issues closed: transferToImmutable has a new optional length argument, parallel to transferToFixedLength; sliceToImmutable looks like slice, except it produces an ArrayBuffer of the immutable flavor; and we did not change the name of the immutable accessor. The corresponding immutable ArrayBuffer flavor is much like what you saw last time, but of course extended with the new sliceToImmutable method. The two slice methods are still available on the immutable buffer flavor because they are query-only. All of the mutation methods—all the transfer methods and the resize methods—throw. And of course, the immutable accessor for this flavor says true, and byteLength and maxByteLength are the same.

MM: Now, we’ve also listed a bunch of non-normative issues—non-normative for Stage 2.7. These are issues we want to put on the table and start pursuing now. One of them is applicability to WebGPU buffer mapping. We got feedback from the WebGPU folks, and the answer is no: because of the nature of immutable ArrayBuffers, they do not apply to what the WebGPU folks need. But the Limited ArrayBuffer proposal by Jack Works, a related proposal that I believe is scheduled after this one–

USA: Yep.

MM: —is a good time to get into more discussion of that. We mentioned proposed integration with structured cloning; we did a lot better than that. RGN wrote a proposed modification to the HTML spec—specifically the structured cloning part of the spec—explaining how structured clone would deal with immutable ArrayBuffers. There were some details there, but the overall result is exactly what you would expect.

MM: Zero-copy operations on the web: this was a mixed bag. It breaks down into a lot of subissues, which we will get into in the next two slides.

MM: And then “update shim according to issue resolutions”: I wrote the previous shim, and I updated it to track what the proposal now is. We not only have an implementation of the shim, but we have a bunch of code that makes use of the shim for useful purposes, and it gave us some lessons on what this is like to use—the punch line being that it’s pleasant to use.

MM: All right, zero-copy operations on the web. I am not going to go through these one at a time. I’m putting the slide up right now mostly to give you a chance to scan your eyes over them and to notice things you want to ask about when we break for Q&A. However, I’ll mention a few particular things.

MM: On issue 300, we got a response—just one hour ago, just in time—that says: “overall I’m supportive on this, however, I’ve got a bunch of open questions about whether it can be made compatible” with what they’re already doing, which is somewhat entrenched. I don’t think this points out any obvious incompatibilities, just uncertainty about whether it could be compatible. So I’ll take this as overall guardedly positive.

MM: And on Wasm issue 1162, it says “we discussed immutable ArrayBuffer at the CG meeting last week. No blocking concerns, and the proposal is orthogonal to Wasm linear memory for now”. So no particular connection or help to Wasm, but no interference either. No blocking concerns. And the proposal as it stands doesn’t preclude read-only memory for Wasm.

MM: The second page lists the prior proposals with zero-copy concerns on the web, and related issues.
So on web transport issue 131, I initially got a strongly negative response from Domenic that was supported by, I forget who else—two people came out with a strong “unlikely that it would apply”—because the web transport case is between address spaces, and the strategy right now is to copy buffers when communicating them between address spaces. For small and medium size buffers, that makes perfect sense. However, when transmitting huge immutable ArrayBuffers, we should keep in mind the possibility—implementer’s choice, since it makes no observable difference—of transmitting them by memory mapping rather than copying. For a huge enough ArrayBuffer, let’s say something in the multiple gigabytes, a copy takes a long time, linear time, while mapping does not.

MM: And it just so happens that over lunch I was discussing with SFC the CLDR tables, which are a great example of big data tables, and there will be many big data tables that are of interest to many programs, many written in JavaScript. These CLDR tables can be multiple gigabytes. So once a data table gets into the gigabytes, transmitting it by mapping, despite all of the weird operating-system shenanigans, is clearly more efficient than copies. Whether it’s worth the complexity inside the implementation is another matter that I will let implementers worry about.

MM: And in light of the possibility of taking big, big data tables and sharing them zero copy, a follow-on proposal—which I purposely did not include in this proposal, but that I want to trail behind it—is to add a new import type, let’s call it binary: if you import a file as binary, the result of the import is an immutable ArrayBuffer. That would be the one case where you can end up with an immutable ArrayBuffer other than by populating a mutable ArrayBuffer and then doing a transferToImmutable; this would directly be born as an immutable ArrayBuffer. So it’s basically a binary asset to be loaded by a program. In the world of multiple import types, I think this is very natural.

MM: Awkwardly, because I don’t totally understand the tool support for looking at web standards, I did some screenshots of the diffs of RGN’s modifications to the structured clone algorithm, and I am showing the excerpts that have to do with immutable ArrayBuffer. This is obviously just adding a case for the immutable ArrayBuffer to this branch of cases here. This over here is the same: there’s already a carve-out for sharing SharedArrayBuffers without detaching, so immutable ArrayBuffers can also be shared without detaching—SharedArrayBuffers are already non-detachable, and immutable buffers are likewise non-detachable. And then finally, including immutable ArrayBuffer in the explicitly enumerated taxonomy of kinds of ArrayBuffer. If you have questions about the HTML language here, hopefully RGN is on the line and can answer them.

MM: Okay. Implementer feedback: we would like more of this, but my understanding is that we don’t need more of it to cross the 2.7 threshold. We have a full XS native implementation of the entire spec. It looks good, and it does not suggest any changes. We have our own shim implementation at Agoric, together with practical uses of it, and Agoric uses both Node and V8 for running some of our code, and uses XS for other code.
So on XS, our plan of course is to use their native implementation, and we wrote our usage code so that it would work with both.

MM: The shim got updated to follow the changes we made to the spec, but the shim has this crucial caveat: it falls short of the proposal in the following ways. Basically, the key thing is that there’s no practical way for a shim to efficiently emulate freezable TypedArrays. Much of our motivation going into this is that there’s no way to create a freezable TypedArray in the language as it is now—so, therefore, there’s no way to shim it.

MM: Okay. Approval steps. Thank you to JHD and KG and SYG for the approvals. As for MF, I just talked to him verbally in the hallway; he says he defers to KG and SYG. And we got an email from WH that says “looks good to me, with just one comment: why does sliceToImmutable diverge from slice when _end_ < _start_?” I think I agree with WH’s opinion here. But this is in the domain of the purposely-left-open issue of order of operations and which things throw what. So if WH wants this change made before approval, I’m perfectly happy to do that before this meeting is over. In any case, that’s where we stand now.

MM: Just as a reminder, this is the checklist of what’s still left for Stage 3 and for Stage 4.

MM: So that is the presentation, and now I will take questions. Let us all stop recording.

JSL: Just to get my head around the mental model: is the expectation that the `immutable` property would extend out to the host? If an ArrayBuffer is passed down to native code, like V8, would that be—could it be immutable as well?

MM: So the immutability represents a two-way guarantee. It’s a guarantee that the JavaScript code cannot modify it, and it’s a guarantee that the data contents are stable, so because of that guarantee there’s no need to worry about changes from under it. How an implementation implements that guarantee, so that it holds across all the participants sharing the same zero-copy ArrayBuffer, is up to the implementer; but if the implementer fails to uphold that guarantee, then that implementation does not conform to the spec as we’ve written it. And that’s on purpose. The reason I’m going into this in some depth is that there was a lot of discussion on the issues, which I recommend looking at, about the use of mprotect—memory-management protection making the pages actually read-only, with nobody having a read-write mapping of them. That’s optional, as long as the guarantee is adequately upheld by the implementation. It’s certainly a belt-and-suspenders approach, and probably only to be taken on huge ArrayBuffers, where we can afford such an intervention. We wouldn’t do it on a 4K ArrayBuffer.

KG: I think this is probably worth calling out in the spec, though, just as an editorial note. You can technically read off the fact that hosts aren’t allowed to change immutable buffers—you can technically read this off of the essential invariants for internal object methods: when something is defined as non-configurable, non-writable, then one of the invariants required of everything is that it can’t actually change value—but this is a kind of hard thing to infer, and very few people are aware of the invariants.

MM: We would be overjoyed to make this more explicit.

KG: As just a note, though.

MM: So, yes, let me just ask, since we are asking for 2.7—

KG: That can happen later.

MM: Okay, great.
But we would be overjoyed to be more explicit about that. Since it can be inferred, I have a procedural question for you: when being more explicit about something that’s already implied by the spec language, do we have to tag that as a non-normative note, or can we make it normative?

KG: I don’t think that it makes sense to talk about notes being normative.

MM: Okay—can we state it normatively?

KG: You can state it normatively if you want, but I would probably just put a note that calls attention to the fact that it is already normatively required because of other things.

MM: Okay. Assertions in the spec are normative.

KG: No. Assertions are strictly editorial. They describe properties which already hold. In fact, if you click on “assert”, it takes you to the definition, and the definition of assert says it describes a property which already holds; if the property does not hold, it’s an editorial error in the specification. It is something that is necessarily true because of other properties or other guarantees that are normatively spelled out.

MM: Okay, I believe you that that’s what the current language says. I’ll just say that I’m shocked, because I was in that discussion, and that’s not what I thought the conclusion was.

KG: What discussion?

MM: We thought that two things that had to be in agreement were both stated normatively, such that if they disagreed, then the spec was in an inconsistent state and one could not make a normative derivation from the spec until it was fixed.

KG: Yes, when an assert does not hold, that means that the spec is incoherent, but not because the assert is a normative requirement. It’s because the assert is said to be describing a property which holds, and if in fact that property does not hold, then by definition the spec is incoherent. Usually you need to fix that by making a normative adjustment; sometimes you can fix it by changing the property which is asserted. But if the assert doesn’t hold, the spec is incoherent.

MM: I would love to keep talking about this, because it’s not quite what I understood, but let’s not take up our time with it.

CDA: Yeah, just noting we technically have only a couple minutes left. We can go to the top of the hour, but there are three other items on the queue. SFC?

SFC: Yeah, just to be brief: I love that your slides, MM, went over all the resolved issues and previous discussions, and that you gave a mention to the CLDR case that you’ll hopefully work towards as you move forward. I really appreciate how thoroughly you went over all the issues and the milestones, and I feel confident in the quality of the proposal.

MM: Great. Thank you.

WH: So I just wanted to talk about my comment about `slice`. If we were designing `slice` from scratch, I would agree that throwing on _end_ < _start_ would be sensible, but we already have lots of instances of `slice` in the language, and I think it would be better to stay consistent with them. This should be resolved by Stage 2.7, because it has nothing to do with implementation experience.

MM: Okay. So let me first of all just ask all champions in earshot, which I think is all of them: are we all agreed to make the change that WH suggests? He has talked me into it. It is better to be consistent with the mistake than to fix the mistake in one place and not the other.

RGN: I’m convinced.

MM: Okay. Great.
RGN, since you’re the steward of the actual spec language, could you do that before this TC39 meeting adjourns?

RGN: Certainly.

MM: Great. Thank you.

WH: If you commit to fixing this, you can mark me as approved. I’m not going to be here on the last day of the meeting.

MM: Okay, thank you. I will mark you as approved and be sure to make that change. (Note: both done)

CDA: We have less than 3 minutes left and still four items on the queue. OMT?

OMT: Yeah, I just wanted to say I haven’t read the whole spec text, but I like this, and it would be useful to my implementation.

MM: Great, thank you.

CDA: Sorry, I didn’t notice that was an end of message—but thank you for the message.

SYG: I said this in my review, and I would like to repeat it for the stream: I consider this a Stage 3 blocker, in that I do not want to advance to Stage 3 until that PR is reviewed and merged. It is fine to merge things that take dependencies on not-yet-standardized JS features—that has happened in the past in HTML—so that is not an issue. I don’t think there’s much reason for concern there, but I would just like to point out that I would like to add that extra constraint for moving from 2.7 to 3, in addition to the test262 tests.

MM: Yes, understood. The structured clone change being approved on the HTML side is a blocker for Stage 3. I don’t know if you want to call it normative, but it’s a blocker in any case. I do have a question for you: have you looked at the structured clone spec text that RGN wrote, and do you have any concerns with it specifically?

SYG: Only at a glance, and it seems fine to me. But, you know, getting it reviewed and merged into the HTML spec also involves—I think in the issue template that RGN made there—a bunch of checkboxes, so, yeah, it’s good to get them checked.

MM: Great, thank you.

CDA: Ashley just has a reminder that the slides link is missing on the agenda. So if you have a chance --

MM: I’ll fix that before the TC39 meeting is over. (Note: was fixed a few days after)

CDA: Great, thank you. And then last one is KG.

KG: Yeah, sorry—I want to walk back the claim I made previously about it already being fully implied that hosts couldn’t modify immutable ArrayBuffers. Technically that only applies when someone actually observes one of the values. So in principle, a sufficiently strict reading could allow mutability—

MM: Wow.

KG: —like, between the time that you create it and the time that you observe it. So it sounds like we can just have consensus that the intention is that it be immutable, and we can state normatively that it is immutable. And that doesn’t need to hold up 2.7, because it’s fairly straightforward to state. (Note: fixed in spec)

MM: Great. Thank you.

CDA: And we are at time. JSL has a message just noting that Web Crypto might need to be updated to account for immutable ArrayBuffers as well—e.g., `crypto.getRandomValues`, which takes a TypedArray. Not Stage 3 blocking, I would think.

MM: Thank you. Do I have Stage 2.7? We have plenty of affirmation in the Q&A there. Does anybody object to Stage 2.7? I think I have Stage 2.7. Thank you.

CDA: Okay. I guess that’s a +1 from DE on the queue.

### Speaker's Summary of Key Points

* All prior normative issues dealt with, except order-of-ops, to be driven by implementor feedback.
* Lots of feedback from the HTML side, mostly positive, no blockers

### Conclusion

* Got all approvals needed
* Got stage 2.7
* Much still needed on HTML side to get stage 3

## Limited ArrayBuffer

Presenter: Jack Works (JWK)

* [proposal](https://github.com/tc39/proposal-limited-arraybuffer)
* [slides](https://docs.google.com/presentation/d/1u6JsSeInvm6F4OrmCSLubtDvFVdjw1ESeE5-c_YflHE/)

JWK: I am going to talk about the Limited ArrayBuffer proposal. Here is the timeline of some related proposals. The oldest one is the read-only collections proposal by MM, which is still Stage 1. Two years later, I proposed the Limited ArrayBuffer proposal—that is the original version, which will be talked about later. I referred to that proposal when designing the API. At the same time, the resizable ArrayBuffer came in, and it went very quickly. Another proposal, `ArrayBuffer.transfer()`, was split out from resizable ArrayBuffer. Then in December, MM proposed the Immutable ArrayBuffer. Therefore part of the motivation is now covered by the Immutable ArrayBuffer proposal. The original design of the Limited ArrayBuffer proposal was trying to freeze things in place, but Immutable ArrayBuffer and transfer brought us a new API design style: transferToImmutable.

JWK: Here is the original motivation of the Limited ArrayBuffer proposal. The first point is that we cannot make an ArrayBuffer read-only, which means the underlying bits can always be changed. The second is that you cannot give others a read-only view of an ArrayBuffer—whether the underlying ArrayBuffer is writable or not—while keeping the read-write view internally. And the third is that you cannot give others a slice of your ArrayBuffer such that the holder of that view cannot expand it to the whole ArrayBuffer. Let’s say, for example, there is a part of memory in WebAssembly, and you want to give a slice of the program memory to other parties so they can change it—but you only want them to change the memory in the given slice, which is not possible today.

JWK: Since we have Immutable ArrayBuffer today, part of the motivation is replaced: the first point is covered by transferToImmutable. For the other two usages, there are some potential use cases; let me introduce them. The first one, “give others a read-only view while keeping the read-write view internally”, is the WebGPU case. In this case, they need to expose some device memory, and they do not want JS programmers to change it. Meanwhile, the memory itself might be changed by some host code. Therefore we cannot expose it as immutable, because the contents will change.

JWK: In this case, I think this is very suitable for the Limited ArrayBuffer proposal, because we can have a read-write ArrayBuffer that is never exposed as a read-write view. There is no way in the JS world that a JS programmer can modify the ArrayBuffer, but the ArrayBuffer itself is not immutable. The mutable handle is kept by the host—in this case WebGPU—and JS code can only receive a read-only view of it. The benefit of this is that WebGPU does not need to introduce a new kind of exotic ArrayBuffer view that cannot be created in user-land.

JWK: Another use case is a limited range. As I mentioned before, in some cases you might want to share a slice of memory, but not all of it, with another party. (A sketch of the capability leak this addresses follows below.)
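To illustrate that leak: handing out a bounded view today does not actually bound access, because the receiver can reach the whole buffer through `.buffer`. The `limited` option in the comment below is purely hypothetical; the proposal has not settled on an API:

```javascript
const memory = new ArrayBuffer(65536);            // e.g. Wasm-style linear memory
const slice = new Uint8Array(memory, 1024, 1024); // intended 1 KiB window

// Today, the window is advisory only:
const whole = new Uint8Array(slice.buffer);       // full 64 KiB — the leak
whole.length; // 65536

// Hypothetical limited view (illustration only): the window is enforced
// and the underlying buffer is unreachable.
// const safe = new Uint8Array(memory, 1024, 1024, { limited: true });
// safe.buffer; // undefined
```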
I wonder if these two use cases still sound compelling. If they are, I will update the motivation to remove the first point, since it’s already covered by the Immutable ArrayBuffer proposal, and continue investigating the other two. And if both of these use cases are not compelling, I may want to withdraw this proposal.

KG: I think this is still very valuable, especially the read-only view. Several web specs have expressed interest in read-only buffers. I think it’s still worth doing.

JWK: Thank you.

PFC: So, just to check my understanding of what a limited ArrayBuffer is for: is it correct to say it’s a read-only buffer that is mutable by other code –

JWK: This is the third one, the limited range. Wait, sorry, this is the second one. Yes. The limited write.

PFC: We get the same thing in the third case, though. Right?

JWK: In the third case, you can give others a read-only or a read-write slice. These two features are unrelated to each other. There are two things we try to limit in this proposal: the first is writability, and the second is the range. You can limit write, or you can limit range, or you can limit both.

JSL [via queue]: definite +1 to keeping in proposal. Very valuable.

MM [via queue]: still looks quite useful +1 to keeping it at Stage 1.

JRL: So, in dealing with TextDecoder and WebStreams and other APIs that receive TypedArrays: any time you hand off a TypedArray to another piece of code, if that code doesn’t track the byte offset and length, it’s going to read the full underlying buffer. It is very common in user code to just call `decode(buffer)`, and now it decoded everything.

JWK: Yes.

JRL: I have had that happen many times, where I pass something to a library that takes a TypedArray, but it does not respect the bounds I placed on it. Having a limited view window, where the code cannot access anything outside of the window I gave it, would be so much cleaner for a lot of APIs.

JWK: Yes. I have also been hit by this problem.

SYG: So, just a word of warning: the implementation cost could be high. I am not sure how you would like to expose the capability to have a limited ArrayBuffer that aliases another ArrayBuffer. There are several layers of implementation for ArrayBuffer / ArrayBufferView, and it sounds like you would have to add aliasing to ArrayBuffers themselves. I think it’s too early for me to really give any criticism of that—that might be the best design here—but that kind of buffer management in engines is kind of scary, and the cost here could be high. We should be mindful of that as we are designing for the use case.

JWK: Yes. I will try to make it as simple for implementations as possible. In the old version, the design was “freeze in place”, which might be very complicated, but now it’s changed to something like transfer, so at least it would share the same machinery as transfer.

SYG: But transfer detaches the source. How do you provide a smaller view that is aliased to the same buffer without detaching the source?

JWK: I have an API in mind that might look like this:

```javascript
view = new Uint8Array(buffer, { readonly: true })
view.buffer // undefined
```

SYG: I see.

JWK: So you cannot retrieve the whole ArrayBuffer from it to reconstruct a read-write view.

SYG: I see. It doesn’t sound so bad.

MM: Thank you.

CDA: Michael?

MLS: I want to reiterate a little bit of what SYG is saying.
If you are going to use aliases, it’s likely easier for an implementation to share on OS page boundaries; arbitrary beginnings and endings would likely require doing some range checking for any access. So it could be more costly. This actually applies a little bit to what Mark just presented in his proposal as well.

JWK: Does that mean that if we tried to align things (e.g. align by 4K), they would be easier to share?

MLS: (???) on page boundaries, 4K or 16K, something like that. And the underlying OS calls also do things on the same kind of boundaries.

JWK: Thank you. I am not quite sure about the machinery of this.

JWK: It looks like many delegates expressed that we should stay at Stage 1 and continue to explore the solution. I guess my topic is done. Thank you.

CDA: Thank you. The proposal remains at Stage 1.

### Speaker's Summary of Key Points

* Original use cases: freeze an ArrayBuffer, limit write (of a view), limit range (of a view)
* Now: remove the first one. Limit write: use case from WebGPU; limit range: use case from Wasm

### Conclusion

* Many delegates expressed support, so not withdrawing.
* SYG expressed concerns about implementation complexity.
* MLS expressed concerns about implementation complexity of limiting range.

Stay in stage 1. Continue exploring.

## `Number.isSafeNumeric`

Presenter: ZiJian Liu (LIU)

* [proposal](https://github.com/Lxxyx/proposal-number-is-safe-numeric)
* [slides](https://docs.google.com/presentation/d/1Noxi5L0jnikYce1h7X67FnjMUbkBQAcMDNafkM7bF4A/edit?usp=sharing)

LIU: Okay. Hello, everyone. I am LIU from Alibaba, and this is my first proposal at TC39. The proposal is to add a new method, `Number.isSafeNumeric`, which tests whether a string can be safely converted to a JavaScript number. First, the motivation. In web development, validating strings that can be safely converted to JavaScript numbers is a common requirement. Here I am going to list some use cases.

LIU: The first use case is API data. We need to handle numbers sent as strings and deal with values like null, undefined, and the empty string. Our backend system is not JavaScript, and we need to deal with Java `Long` values, which raise the overflow problem. The second use case is form input validation: we need to handle falsy values, whitespace, and unexpected characters. The third is financial calculations: when we convert a string to a number, we face a new problem—the mathematical value can change in the string-to-number conversion. And the last is data processing: we always need to write some complex validation logic for validating strings. So I think validating strings directly impacts the stability, data accuracy, and user experience of web apps. But current solutions have significant limitations. Let’s look into the problems.

LIU: The first problem is inconsistent built-in methods. I chose `Number`, `parseInt`, and `parseFloat`. Just look at the table: when we input an empty string, or a string containing only whitespace, `Number` will output zero, while `parseInt` and `parseFloat` will return NaN. It’s inconsistent behavior. And for leading-decimal-point handling or scientific notation, they also differ. So that is the first problem: inconsistent behavior of built-in methods. This increases developer overhead, because the developer always needs to remember which method should be used and has to handle each case manually.

LIU: The second problem is the hidden value change of the built-in methods.
LIU: The second problem is the hidden value change of built-in methods. Here I am giving two examples. The first is big numbers, which can be bigger than the maximum safe integer. You can see that when we transform the string to a number, the mathematical value changes due to the double format. The second example is floating-point numbers with 19 significant digits. The mathematical value changes and can never be converted back. So we think this is another problem: the mathematical value changes silently, and the user doesn’t get any runtime notification. So the web developer will try to use this value and will get the wrong result. This will increase web developer frustration.

LIU: And the problem also exists when you want to write a custom validation function. Here is a question I found on StackOverflow: "How can I check if a string is a valid number?" I took the top-rated answer and tried to check numeric strings with it. You can see it still has the same problem: the mathematical value of the string changes when converting the string to a number. So I also looked at NPM libraries. I chose validator and is-number. Both have a large number of downloads, and both have the same mathematical-value-change problem when converting string to number. This is because, I think, they only check that the numeric string satisfies the decimal format; they are not looking at the value safety problem. This is a bad experience, because we may get a wrong value or some data consistency issues. Like a numeric string from the backend: when you convert it to a number and convert it back, the value changes. So this is a mismatch.

LIU: So here I would like to provide a new solution called `Number.isSafeNumeric`. It has benefits. The first is ensuring the input is a valid numeric string, reducing unexpected behaviors during parsing and subsequent operations. The second is avoiding the string’s mathematical value changing during string-number conversions; developers may not be aware of this, but I think we can avoid this problem. The third is reducing developer mental overhead. Developers do not have to handle each case manually. We just want to provide a simple and reliable way.

LIU: The key part of the method is the safety definition, which has two parts. The first is the string format: it is strict decimal format by default, which means the string should only contain ASCII digits with an optional single leading minus sign. It must have digits on both sides of the decimal point, with no leading zeros except for decimal numbers smaller than 1. No whitespace or other characters are allowed. You can see the examples.

LIU: The second part is value safety. I think the most important part of this is that the mathematical value of the string must be within the range of `MAX_SAFE_INTEGER`, and the mathematical value represented by the string must remain unchanged through the string ToNumber and toString conversion process, just like the code shown below. This means the mathematical value is preserved, and we avoid the problems of mathematical value change.

LIU: And after we created this proposal, we received many questions, so I created an FAQ part. The first question is: why use strict number format rules by default and not support other formats? First, by validating strict decimal strings we focus on the fundamental format that is widely used in JavaScript programming. And we can ensure consistent parsing across different systems. For example, `1e5` is 100,000 in JavaScript but may be treated as a string in other systems, so this may produce some unexpected behaviors. And it reduces complexity in data processing and validation.
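A minimal sketch of the two checks as described; the regex and helper name are illustrative only, not the proposal's spec text, and the `MAX_SAFE_INTEGER` range check is omitted:

```javascript
// Illustrative only: approximates the two-part safety definition above.
const STRICT_DECIMAL = /^-?(0|[1-9]\d*)(\.\d+)?$/; // ASCII digits, optional minus, no leading zeros

function isSafeNumericSketch(s) {
  if (typeof s !== "string" || !STRICT_DECIMAL.test(s)) return false; // format safety
  return String(Number(s)) === s; // value safety: string -> Number -> string round trip
}

isSafeNumericSketch("0.1");              // true
isSafeNumericSketch(".1");               // false (strict format: no leading decimal point)
isSafeNumericSketch("9007199254740993"); // false (the value changes in float64)
```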
LIU: Second, we also considered adding a second parameter to support more formats and parsing options. We could support scientific notation with a format option; the default option accepts only the decimal format. We could also support more flexible parsing with a loose option, which allows a leading plus sign, a leading decimal point, and whitespace; this behavior is aligned with JavaScript's `Number`. When we talked to many people, we found there are already many systems, and much older code, that accept some non-standard decimals, since those are accepted by JavaScript's `Number`. Supporting them with more options would solve those problems in the future.

LIU: And another question: how to handle subsequent numeric calculations? I think this proposal is focused on ensuring that a numeric string representation is safe to convert to a JavaScript number. So for high-precision decimal calculations, you can refer to decimal libraries like decimal.js or the upcoming decimal proposal. How does this relate to decimal? The decimal proposal creates a new type for precise calculations, but this proposal just checks whether a string can be safely converted to a JavaScript number. Questions?

WH: Having read through this proposal, I have strong concerns with this breaking interoperability. This creates the problem of converting a Number to a string that’s parseable by isSafeNumeric. And the way this thing is defined now, that’s impossible. It’s impossible to take a number and convert it to a string for which isSafeNumeric will return true. Without that, you have no interoperability and I am not sure what you have accomplished. Also, there are other issues in here, such as the mathematical value restrictions that make it so 0.1 will fail, since the mathematical value of 0.1 is different from the result of converting 0.1 to a Number. Other things fail which shouldn’t. I don’t understand the MAX_SAFE_INTEGER condition. It has nothing to do with whether the conversion is exact or not.

WH: So I would like to define some principles for this. One principle is that there must be some simple way for a developer who has a Number to be able to print it in such a way that isSafeNumeric is true on that string and parsing it will return the same Number.

LIU: Let me look into the question. Yes, I have to consider this problem. I think a numeric string is considered to be safe—let me check the proposal—if the string remains unchanged through the string → number → string conversion process. When a JavaScript string is converted to a JavaScript number, the way it is stored in the JavaScript number system may change it.

WH: An example, 0.1 will fail this.

LIU: 0.1 may be stored approximately in JavaScript, but I think it can be converted back when converting to the string.

WH: Okay. Sorry, I see. You are doing ToString of a ToNumber. But ToString is not unique so there are plenty of numbers for which this will fail.
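Concretely, the round trip under discussion accepts only the shortest string for each float; for example:

```javascript
// Only the shortest representation of a float survives the round trip.
String(Number("0.1"));              // "0.1"  -- round-trips
String(Number("0.10"));             // "0.1"  -- so "0.10" would be rejected
String(Number("9007199254740993")); // "9007199254740992" -- the value itself changed
```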
MF: Yeah. I support everything WH said. As well, I think this proposal is pretty confused, and not very well motivated. It was claimed the mathematical value changes, and, you know, what that means is, I think, that a string representation of a number would be given that is not the exemplar of the range of reals that is represented by that particular float. But, like, that doesn’t mean it’s a worse number in any way. That’s how floats work. You are referring to a range of numbers. I don’t think this is actually a practically useful thing we are talking about. I also don’t like the allusion to `Number.MAX_SAFE_INTEGER`. We are not saying that all of the integral floats below it are safe; we are saying that it is the upper bound of where you can do a +1. And that’s a single point, rather than a statement about all of them. I don’t think it’s really well motivated, and I am not convinced by what I am seeing.

LIU: I mean, the concern is whether the mathematical value of a numeric string representation changes when it is converted ToNumber, and whether the number can be converted back. So I think this is a real use case.

SFC: Yeah. Thank you for the presentation. I will be honest, when I saw this on the agenda, and when I saw the initial repository, I was, like, … I am not sure this is well motivated. But actually, your slides and the evidence you presented showing how, like, users frequently do this operation wrong, and how, like, highly voted answers on StackOverflow are also wrong, make the motivation seem more compelling to me. So thank you for the presentation. I appreciate that.

SFC: I agree with what is on the slide right here. Like, does the number round-trip through the string, is this the correct invariant? This is an invariant that I think 90% of the people in this room understand. But, like, the average JavaScript developer doesn’t.

SFC: The thing about max safe integer is not necessary. That seems like a discussion that can be had, you know, in Stage 2 or something like that.

SFC: I had one possible suggestion, which is, like, having the function return a boolean seems awkward—I was wondering if, like, we can have a function like parseSafe. There’s already parseFloat. There could be a parseSafe which does this and returns a number, throwing an exception if it’s not safe.

SFC: Generally, the motivation: you have user-inputted numbers, numbers that come from a bunch of weird sources, and you want to make sure that you are not losing data, not losing people’s financial data. It seems like there is something there. Thanks for the presentation.

LIU: Thank you. I think for the API name, we have considered many options. And I think because users already receive some weird strings, they want to identify whether a string can be safely parsed. So I used isSafeNumeric. Thanks for your response.

OMT: Yeah. I was going to say, I agree with SFC. I think it would be nicer if it returns the value when it’s valid. But I would say, instead of returning a bool, a parseSafe could return NaN when it's not safe, like the parseFloat function does.

JHD: So this is, I guess, a different question that probably touches on the same thing that SFC and OMT just talked about. I was asking: what are the use cases where you need to know if it’s safe, but you aren’t trying to transform it to a number? If there are some, I would love to know about them. If there are not, then a parse method would be more appropriate. But also, I think I’ve put the queue item on this slide. The way I normally do this when starting with a string: I convert to a number and back to a string and see if it’s the same string. If it is, I am good. If it’s not, then I do my error handling, whatever that means. And that doesn’t strike me as something that is difficult to get right if you are starting out with a string.
And any number that is so large that it uses exponential notation will be revealed by this process, and so on. So I guess the first question was: what are the use cases? The second question is: setting aside the mathematical value part, why is that === expression not sufficient?

LIU: For most use cases, we just want to determine whether the string can be safely represented, because when we use the string, we may use it in subsequent operations, like some calculation or other things. If a string is unsafe, we think we should not handle it as a number, or we should just use some high-precision library, whatever fits your use case.

JHD: Okay. So I guess that makes sense. Are you trying to avoid the cost of the ToNumber? Because that check would still give you that—if the value wasn’t the same, then you know you need to do something different. I don’t know if that ToNumber is costly. It doesn’t seem like it would be, but… I don’t know.

SYG: So I want to +1 what WH and MF are saying. I’m also confused by the motivation around mathematical value. Maybe that could be cleared up if the actual property you want is stated in terms of strings. I have concerns about this set of rules; I don’t know how to build intuition about whether it is the right set of rules. You have pointed to some user validation use case. But then you made some opinionated choices, like: you can’t skip the initial zero. It has to be `0.1` and not just `.1`. Why is that the right choice to make in a thing for user validation? If I want to accept `.1`, because I want my users to type `.1`, I am out of luck. Why does this meet the bar for standardization?

LIU: Yes. You know, we chose the strict rules because we think they are easy for users to understand and they are the standard for decimals. Other formats could be handled by the second parameter's format option; that is one choice: strict by default, or just what JavaScript's Number does, maybe loose by default. Here we chose strict by default, because we think that is what developers want.

SYG: I think I need more than an assertion that this is what developers want.

LIU: I think we need more time to investigate this. Because by default, when you look at the string, you can think this is right. And trailing zeros are not forbidden, so I think trailing zeros are rarely a problem for the rules.

PFC: Thanks for the presentation. I thought it was very clear. And I would support this proposal going to Stage 1 for exploring the problem space. I do want to say that I am skeptical about this particular definition of numeric safety. Especially if you go to slide 10, the one you were on a moment ago: I am skeptical about why 1234.5678, the bottom one on the left, is safe, and 0.123456… on the bottom right is not safe. Because when I think about parsing a string and building a mathematical value different from the number in the string, that’s the case for both of those. And so I would think both need to be on the same side of either valid or invalid, or we need to define it in some way that doesn’t reference mathematical values. So yeah. I think it would be crucial before Stage 2 to sort out which semantics we want exactly. And I would like to see insight into what use cases people have for this. So, like, if you want to define, like here on the slide, that 1234.5678 is safe, what are people using that number for if they determine that it is safe, even though the mathematical value of the 64-bit float is not equal to that string?
So yeah. Before Stage 2, I would be interested in seeing more of what this is used for.

LIU: Yeah. Thank you. I think we need more feedback about this value safety definition. Before we submitted this proposal, we already received some questions about whether 1234.5678 should be a safe value. When we consider the way the raw number value is stored in JavaScript, it changes, so it may be unsafe. But in the developer's mind: I just input a normal floating-point number, and it converts back, so it should be usable. So I think although maybe we have some precision loss, when you convert it back, I think it should be safe. But this value safety definition is still just the current solution. So I think we need to find a more appropriate definition for this.

SFC: Yeah. Just to add on to the bottom two rows there, I think the invariant that is intended here, especially given that it’s about the string: a particular instance of a float64 represents, you know, an infinite set of numbers. But there’s exactly one of them—well, WH said there’s not exactly one—but there’s one number that, like, is the representative of that equivalence class. And, like, on the left, 1234.5678 is the representative of its equivalence class. The number on the right is not the representative of its class, so it’s not safe. But I agree, it’s worth writing down very, very, very specifically what we’re actually testing for.

???: It’s the shortest value.

???: I think that’s an interesting recommendation.

LIU: Yes. I think JavaScript uses maybe the shortest decimal formatting of numbers. So I think this is the same idea: because we just want to convert a string to a number and convert it back, and at that point, it should be safe. But for any better algorithm or any better solution, I think we need more time to investigate.

WH: In regard to SFC’s point, it would reject `0.10` because it’s not the shortest representation of `0.1`.

WH: I do want to emphasize that the ability to round-trip between numbers and safe strings is essential. So I would like to see what techniques you would have for converting a number to a safe numeric string. As this is now, it’s impossible. You can make it possible, but I do wonder about numbers with very small or large exponents.

LIU: Actually, here is the problem we’re facing. When we try to compare strings, the format will change to the shortest decimal representation: 1.10 will become 1.1, and 0.10 will become 0.1. They’re not equal. So we are defining it as a mathematical value, but currently there’s no way to get the mathematical value in the spec; that’s why the slide uses the round-trip code. So I think we need to find a better way to compare the real mathematical values rather than string representations. Yes. This maybe needs more time, just to get more feedback on how to get the real mathematical value.

WH: I was addressing SFC’s point, which was to require the shortest representation.

SFC: Just a small note on one issue: if you go with the definition of being the shortest, there are cases where there are two values of equivalent length in the same equivalence class, so we need to think about how we handle those. Do we take the one that is lowest, highest, or the one in the middle? If that’s what we decide to do. I have an example that I can post in an issue somewhere.

KG: Yeah. To add on to that, not to respond: strictly speaking, the spec allows implementations a choice of what toString does, probably in these same cases.
The last digit is not necessarily defined by the spec, and implementations can do whatever they want, which is not something we would want to reify here. The spec could be made to not give implementations freedom here. I don’t think that there is an actual difference among implementations—I could be wrong—but there is a suggested definition in the spec, which is, I guess, whichever one rounds to even, I think. But it’s something to address before using the ToString definition.

LIU: This is the last question. Can we promote this proposal to Stage 1?

CDA: Just noting there were some voices of support for Stage 1 earlier. We have a +1 from DE. Some folks are asking for the problem statement.

KG: In Stage 1, we are agreeing on a problem statement that we are interested in exploring solutions for. It is not clear what the problem statement here is. Can you try to say in a sentence what problem we’re trying to solve?

LIU: We’re trying to solve a problem, which is: what you write is not what you get. Testing whether a string is a valid JavaScript number, or just converting a string to a number, is a common requirement, and for big numbers, or numbers with more than 17 significant digits, the mathematical value changes. So you are trying to display something, but in reality the number has changed, so you cannot get the correct result. I think this is the most important part: what you see is what you get.

MF: I don’t agree with that statement. What you say you want is what you get. You may have, like, additional digits there that you don’t feel are represented, but trust me, the float you get is representing that number that you are writing down. That’s why I feel this proposal is still confused, if that’s the problem statement.

RBN: I wanted to respond to MF’s comment: your comment is accurate if I am specifically converting a string literal to a number. What I see, I expect to get, because I wrote it. But if I am doing input validation, I want to validate that the input the user writes is what they are actually expecting to get when doing a calculation. I don’t think your framing is accurate when talking about input validation, which is primarily the main reason to have this feature.

KG?: Ron, could you make a problem statement, then? It seems like you have a good sense of what it’s for.

RBN: I think I mentioned this: it seems to me that the goal for this is to validate that the input that you provide to the function would produce a number without any loss of precision, and if it cannot produce a number that exactly represents what is written, without loss of precision, it would return false.

KG?: I don’t know what loss of precision means, if we are allowing `0.1` as an input.

RBN: I can’t speak more to that, unfortunately.

SYG: I am also similarly confused. The use case I heard in passing was that, if you cannot represent a thing as a double float64, and we don’t know what exactly that means, but suppose we did—then you would, like, dynamically choose the representation, a userland library or something? I don’t understand the end-to-end use case. Say you decide the input is exactly representable—let’s take the most charitable reading we know, which is that it round-trips to an exemplar string or something, ignoring very small exponents(?)—and you store and represent it in your runtime as a number, as a float64. You are still opting into the world of floating-point arithmetic later. Right?
You’re storing the number to do stuff with it. Like, it seems weird that you would just try to verify at the input; there’s no way for us to guarantee that you never lose precision, depending on what you do with it. I am also confused about the use cases.

LIU: I think because floating-point numbers are stored in double precision, precision loss happens when the significant digits cannot be stored. But with the shortest decimal format, the JavaScript number will round to the correct value. So if the input value and the toString value are equal, I think the string can be treated as safe. So maybe precision loss still happens, but this is maybe what developers want.

CDA: We have a couple of minutes left, for this topic and for the day. MLS, did you want to chime in here?

MLS: Beyond the problem statement, I would also like to know the use cases.

WH: My position is similar to MF’s. I don’t understand the problem statement here.

CDA: So, very little time left. You do have support for Stage 1 from folks who feel like they understand, or have some idea of, what that problem statement is, noting that we don’t have a formal problem statement stated succinctly. So given that, for the folks who would like to better understand the problem statement, are those concerns blocking at this point?

WH: The entrance criteria for Stage 1 include having a problem statement we agree on, and we don’t seem to agree on one.

SYG?: Sometimes we reject a proposal for Stage 1 because we have, like, understood what is being discussed and said that actually we don’t want to add that to the language. And that’s not what is happening here. We are not, like, rejecting the proposal. But I am not comfortable going to Stage 1 with a proposal where I don’t understand what it’s trying to do, since that is the point of Stage 1. If it were just me, I would be happy to sort it out offline, but it sounds like there are other people who don’t understand what we are trying to do. So no, I don’t think it should go to Stage 1 at this time.

CDA: Okay. So the ask here, LIU: if you could, please, not right now in real time, but today or tomorrow, try to develop that succinct problem statement that the committee could consider, and then we can come back and ask for Stage 1 based on what that problem statement is.

???: That can happen at this meeting if we have extra time.

JHD: I have a queue item to request exactly that. Once you come up with the problem statement, could you file an issue on the proposal repo and drop it in Matrix, so we can review it before we leave this week?

LIU: Yes. I can create an issue and post a link in Matrix.

CDA: Let’s follow up offline and revisit later in the meeting. Ideally, tomorrow afternoon, if possible.
+ +### Speaker's Summary of Key Points + +* Still need consider about safety definition +* Provide more examples for this use case + +### Conclusion + +* Required to provide a 'problem statement' which succinctly describes the problem your proposal is intended to solve diff --git a/meetings/2025-02/february-19.md b/meetings/2025-02/february-19.md new file mode 100644 index 0000000..53b353b --- /dev/null +++ b/meetings/2025-02/february-19.md @@ -0,0 +1,1329 @@ +# 106th TC39 Meeting | 19 February 2025 + +**Attendees:** + +| Name | Abbreviation | Organization | +|------------------|--------------|--------------------| +| Kevin Gibbons | KG | F5 | +| Keith Miller | KM | Apple Inc | +| Chris de Almeida | CDA | IBM | +| Dmitry Makhnev | DJM | JetBrains | +| Oliver Medhurst | OMT | Invited Expert | +| Waldemar Horwat | WH | Invited Expert | +| Ujjwal Sharma | USA | Igalia | +| Andreu Botella | ABO | Igalia | +| Daniel Ehrenberg | DE | Bloomberg | +| Philip Chimento | PFC | Igalia | +| Luis Pardo | LFP | Microsoft | +| Michael Saboff | MLS | Apple Inc | +| Linus Groh | LGH | Bloomberg | +| Erik Marks | REK | Consensys | +| Shane F Carr | SFC | Google | +| Chip Morningstar | CM | Consensys | +| Daniel Minor | DLM | Mozilla | +| Sergey Rubanov | SRV | Invited Expert | +| Justin Grant | JGT | Invited Expert | +| Ron Buckton | RBN | Microsoft | +| Nicolò Ribaudo | NRO | Igalia | +| Jesse Alama | JMN | Igalia | +| Samina Husain | SHN | Ecma | +| Istvan Sebestyen | IS | Ecma | +| Eemeli Aro | EAO | Mozilla | +| Aki Rose Braun | AKI | Ecma International | +| J. S. Choi | JSC | Invited Expert | + +## A unified vision for measure and decimal + +Presenter: Jesse Alama (JMN) and Eemeli Aro (EAO) + +* proposals: [measure](https://github.com/tc39/proposal-measure/), [decimal](https://github.com/tc39/proposal-decimal/) +* [slides](https://notes.igalia.com/p/tc39-2025-02-plenary-decimal-measure-unity) + +JMN: Good morning everyone. This is JMN. And also working with BAN on this. My colleague is working on the measure proposal. The measure side of things. Originally the intention is that we would present this together. But BAN is unfortunately on medical leave. I’m taking the reins of these temporarily. You may know me from the Decimal proposal for a long time now. The intention of this presentation is to give you an update about how we currently think about things with decimal and measure living together. This is not a stage advancement, this is just essentially a Stage 1 update. + +JMN: There was a last minute addition to give this presentation a bit more concrete detail. EAO will chip in one or two at the very end. Are you there? + +EAO: I’m here. + +[Slide: https://notes.igalia.com/p/tc39-2025-02-plenary-decimal-measure-unity#/1] + +JMN: Great. The decimal proposal is all about exact decimal numbers for JavaScript. The purpose of exact decimal numbers is to eliminate, or at least severely reduce, the kind of rounding errors that are frequently seen with our friends binary floats especially when handling human numeric data and especially when calculating these values. Not just representing these things and converting toStrings but making sure when we do calculations with these numbers that we get the results we expect. I know that we really love the topic of numbers. Yesterday’s discussion at the end there actually sort of overlapped a little bit with decimal as you might see here. 
JMN: So just to make things very clear, in the decimal world, we imagine that when we write 1.3, when we construct a decimal value from 1.3, those digits really will be 1.3 instead of an approximation thereof. To illustrate arithmetic and calculation, 0.1 plus 0.2 in this world really would be 0.3. Again, it’s not 'about the same'; they really are exactly the same thing. So that’s the decimal side of things.

[Slide: https://notes.igalia.com/p/tc39-2025-02-plenary-decimal-measure-unity#/2]

JMN: The Measure proposal is fairly new. I think the idea has been talked about for a long time and sort of exists in the Intl world, but the measure proposal, presented a couple of plenaries ago by BAN, is about the idea of tagging numbers with a unit. So think about the kind of units that we use in everyday life: grams, liters and so on. The idea is that we can tag these numbers with a precision as well. To cut to the meat of it, think about, let’s say, 30 centimetres. The idea is that we could convert these measurements, or measures, to other units and perhaps also specify some kind of additional precision there. So think about 30 centimeters versus 30.00 centimeters and so on. Another thing to show is additional kinds of calculations, or at least operations, on these kinds of measurements. For instance—sorry for the non-imperial friends—using feet and inches is also something that we would like to handle in this kind of proposal. So think about 5.5 feet: that’s actually 5 feet and 6 inches, and we can deconstruct these things into their components. So that’s the idea of Measure in a very simple form.

[Slide: https://notes.igalia.com/p/tc39-2025-02-plenary-decimal-measure-unity#/3]

JMN: What is interesting is that although the Decimal proposal is really about numbers per se and Measure is about something a bit different, with distinct needs and distinct use cases, they do share an interesting overlap. That’s the purpose of this presentation today: to draw our attention to this overlap, because these proposals are both helping us represent numbers the way that humans often use them. Usually when we talk about handling numbers in some kind of human-consumable way, we’re talking about base-ten numbers and the kind of arithmetic and rounding involved with that. There is also precision, and there are units. So these are common things that you see when we talk about numbers and human representations of numbers, and there’s a part of these two proposals that overlaps in the handling of all of these things.

[Slide: https://notes.igalia.com/p/tc39-2025-02-plenary-decimal-measure-unity#/4]

JMN: We can think about how Measure can use Decimal. There’s an interesting possibility there, because Measure needs some kind of underlying mathematical value, some kind of numeric value. Intl actually currently uses mathematical values to avoid some floating-point errors. Measure, for instance, could directly use decimals. So look at this code example where we take, say, 1.2 and construct a measurement of 1.2 grams with some kind of precision of 10^-1. Decimal objects could also be upgradeable to Measure objects. There’s the conceptual overlap between the two; that comes up in terms of code samples like this. All of this is still very much in discussion, so what I’m proposing here is not anything final.
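The slide's code sample is not reproduced in these notes; a hypothetical shape, with every class, method, and option name purely illustrative rather than a settled API, might look like:

```javascript
// Purely illustrative: a Measure backed by a Decimal, with a unit
// and a precision of 10^-1, as in the "1.2 grams" example above.
const weight = new Measure(new Decimal("1.2"), { unit: "gram", precision: -1 });

// A Decimal "upgraded" to a Measure by tagging it with a unit.
const tagged = new Decimal("1.2").with({ unit: "gram" }); // hypothetical method
```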
I’m just trying to get your creative juices flowing, thinking about how these two proposals interact and overlap with one another.

[Slide: https://notes.igalia.com/p/tc39-2025-02-plenary-decimal-measure-unity#/5]

JMN: What’s interesting here is that there are a few different kinds of data that we could be talking about. One of the proposals that has been presented here many times, and I know has a lot of fans, is the Temporal proposal, and we propose that Temporal can be a source of inspiration and learning for us. In Temporal we have a lot of different concepts that are strictly separated from one another: things like PlainTime, PlainDate, PlainDateTime, ZonedDateTime and so on. These are separate; you might say that the API is strongly typed. If there’s any kind of conversion that needs to happen, it needs to be explicit. That has a number of benefits for the developer.

[Slide: https://notes.igalia.com/p/tc39-2025-02-plenary-decimal-measure-unity#/7]

JMN: So the question for us here, thinking about Measure and Decimal and the overlap between them, is whether there’s perhaps some kind of unified system that can be identified sitting between these two proposals, so that we have different information with different types available to the developer. That’s the challenge for us.

[Slide: https://notes.igalia.com/p/tc39-2025-02-plenary-decimal-measure-unity#/8]

JMN: So there are a couple of different topics here; let’s be explicit about what we’re talking about. We are thinking about base-10 numbers, and there are two questions. Are we talking about something with a unit, like grams or feet? That’s one question. And another dimension is whether there’s some kind of precision there: does the number itself tell you, just by reading off the digits, how precise it is going to be? We have four possibilities:

* So, for instance, in the Decimal proposal, we talked many times about this concept of “normalized” [canonicalized] decimals, where we strip any trailing zeros. So, for instance, 1.20 just is 1.2. That would be something which doesn’t expose precision and has no unit, because it is just a number.
* In previous discussions about decimal, we also talked about the full IEEE-754 approach, which we have discussed many times in plenary. This is a representation of decimal numbers in which precision really is present on the number: the number contains not simply a mathematical value but also an indication of how precise this value is. Or, in other words, possibly some trailing zeros are present there.
* We also have things like numbers with a unit but with no precision. Some kind of exact measure, you might call it. Something like the speed of light would be an example of that kind of thing.
* Another kind of example comes up in everyday life: everyday numbers would be, say, our weight on a scale or the length of a stick, where the number that we read off the scale or the ruler indicates some kind of precision. So it has a unit, and it also indicates some degree of precision.

[Slide: https://notes.igalia.com/p/tc39-2025-02-plenary-decimal-measure-unity#/9]

JMN: So if we look at this, we might already be looking at four different classes of things, which is starting to be quite a lot. Actually, we can expand the conversation, take a couple of steps back, and find that there are even more possibilities here.
So think about—I mean, you don’t have to go through this entire thing. You can think about binary64, or float64, the numbers that we know and love already in JS, and you can think about integers, which also have an analogue in JS with BigInt. The base-10 bottom row of the table has the four possibilities we just saw. And for integers, do we want some kind of BigInt with a unit, or a BigInt with some kind of precision? Do we want float64 with units and precision? As we take a step back, we see that there are many possibilities here. And the developer might think this is interesting, but it seems like we have a kind of proliferation of possibilities.

[Slide: https://notes.igalia.com/p/tc39-2025-02-plenary-decimal-measure-unity#/10]

JMN: So do we really need all of that? I mean, the conversation is leading us down a path which suggests that there are lots of things to think about, but maybe everything could be expressed by a single class. Maybe we could have some kind of, I don’t know, unitless number or dimensionless number, which has a unit like 'one', as it's usually called, or u. For instance, 2.34 is 2.34 with unit 1. That would reduce the mental complexity here. And why not, say, express exactness by treating infinity as a valid precision? If we tag a number as being infinitely precise, that says the data is exactly correct.

[Slide: https://notes.igalia.com/p/tc39-2025-02-plenary-decimal-measure-unity#/11]

JMN: But there’s a bit of a challenge with trying to pack all of that into a single class. We know, again learning from Temporal, that having separate types has a lot of advantages. We don’t need to manually validate which information is present or absent, and possibly throw on some kind of incoherent combination of data. We have type-checking possibilities. And, just generally, adding information can limit capabilities. If we think about doing arithmetic with these numbers: the more information we have, the fewer operations we can do out of the box, without checks. If we think about just numbers as numbers, then we can add them. 1.23 plus 0.04 is 1.27. End of discussion. That’s fine.

[Slide: https://notes.igalia.com/p/tc39-2025-02-plenary-decimal-measure-unity#/12]

JMN: When we start to add precision, things start to get a little bit fuzzier. I wouldn’t say incoherent, but a little bit trickier. Now we have, for instance, 1.23, which has three significant digits, and 0.040, with the zero at the end there, two significant digits: what do we do with that? IEEE does give an answer to the question, but there are many possible answers that could be given. And then we have silly things like adding 1.23 metres and some watts, which is presumably an incoherent addition.

[Slide: https://notes.igalia.com/p/tc39-2025-02-plenary-decimal-measure-unity#/13]

JMN: So the question is what happens if we add this information to our data; it suggests to us that we need at least two classes. I don’t want to say that we have to have all of the data that we had in the previous table, but I am making the argument for having two classes here. The thinking at the moment is that decimal, at least in the normalized form, with no precision tracking, is a valid thing to think about. We have arithmetic there that’s quite well defined. Basically 'just math', for lack of a better word.
It would be based on IEEE-754 limits, which means that there’s a fixed bit width for the numbers: 128 bits. That’s quite a lot; you can do quite a lot in that space, but it is ultimately limited. Just to be clear, we’re not talking about tracking precision here. We’re really talking about values that are supposed to be just numbers.

JMN: And the other class that we would suggest is necessary is a kind of measure with precision, backed by a decimal, you might say. There’s no arithmetic going on there, at least not in the initial version of this thing. There could be conversion, say from feet to metres, and a static notion of precision. That’s another way of saying that the precision of a value is just the one that you supply at construction time, and that’s it. There’s no intelligence there.

[Slide: https://notes.igalia.com/p/tc39-2025-02-plenary-decimal-measure-unity#/14]

JMN: And how might we be able to convert between these things? We might have explicit conversion of a measure to a decimal. We might have some kind of static method for converting a decimal value to a measure, and we might be able to take a decimal value and tag it with some unit. All of this is just, again, to get intuition flowing; this is not any kind of final API.

[Slide: https://notes.igalia.com/p/tc39-2025-02-plenary-decimal-measure-unity#/15]

JMN: The discussion is pretty much ongoing. I hope that I have shown you that the measure and decimal proposals overlap, or intersect, in an interesting way. That suggests that we might be able to make some progress on both of them simultaneously, or maybe even in a staged way. I don’t mean that all of the questions are solved; there are some interesting open questions.

[Slide: https://notes.igalia.com/p/tc39-2025-02-plenary-decimal-measure-unity#/16]

JMN: So here is one question you might ask yourself: do we need separate classes for different kinds of data underlying the measure? Do we need some kind of BigIntMeasure, distinct from a DecimalMeasure, distinct from a NumberMeasure? That is one way to think about it. Or maybe you could have some kind of measure with a `.type` property, so you can say whether the underlying number is a BigInt or a decimal. I don’t know. This is very much open for discussion.

[Slide: https://notes.igalia.com/p/tc39-2025-02-plenary-decimal-measure-unity#/17]

JMN: Another interesting question is whether we need something like a decimal with precision. I made the case earlier that if we were to proceed with decimal, then we should probably have decimal without precision tracking, but that doesn’t mean that decimal with precision is a bad idea per se. That could still exist in this universe. So we could have some kind of decimal where we set the precision. We would have something like a FullDecimal, and that could be converted to a Measure by tagging it with a unit and so on. The suggestion would be that if we were to have this kind of decimal, it wouldn’t support arithmetic, because, as we have discussed here in plenary a couple of times, the IEEE-754 rules for propagating the precision, or what's also called the quantum of a number, are somewhat unusual. I mean, it is an approach that is defined and implemented, of course. There are other ways of propagating precision, and to avoid endorsing any particular one, you might say the full decimal, if we were to have it at all, would not support arithmetic, because we just don’t want to get into those discussions.
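A hypothetical sketch of the two-class split and the conversions mentioned above; all names are illustrative, and none of this is a settled API:

```javascript
// Class 1: normalized Decimal. "Just math": arithmetic, no precision, no unit.
const sum = new Decimal("0.1").add(new Decimal("0.2")); // exactly 0.3

// Class 2: Measure. Unit plus a static precision, no arithmetic.
const len = new Measure(new Decimal("30"), { unit: "centimeter", precision: 0 });
const inMeters = len.convertTo("meter");                        // unit conversion only
const asDecimal = len.toDecimal();                              // explicit Measure -> Decimal
const back = Measure.fromDecimal(asDecimal, { unit: "meter" }); // Decimal -> Measure
```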
[Slide: https://notes.igalia.com/p/tc39-2025-02-plenary-decimal-measure-unity#/19]

JMN: So for us, the next step, and this is something I want to hear from you about, is how to move this forward. I made the case that measure and decimal are distinct proposals, but they overlap and interlock in interesting ways. So one option would be to just keep them separate, but designed in a tight collaboration that is not exactly well defined; I think we have a sense of what that means. Another option would be to merge the two proposals, possibly now or at a later stage, if they advance. I've prepared a README to show what that could look like. It might sound a little bit preposterous, but it gives some sense of what we’re thinking about.

[Slide: https://notes.igalia.com/p/tc39-2025-02-plenary-decimal-measure-unity#/20]

JMN: Just to be a bit clearer about the details, EAO volunteered to talk about another approach. One thing we launched, by the way: in Matrix you can see there’s a new, or at least fairly new, channel for talking about the measure and decimal work and the kind of harmony that currently exists between them. You’re welcome to join that if you wish. We also just started a biweekly call to talk about these things, and in recent discussions we talked about what word we should use instead of "measure"; perhaps other words are more suggestive and fit better with what we’re talking about. One of the suggestions, I believe coming from EAO, was "amount". The thinking is that an amount is also a term that could make sense for a number plus a unit and possibly a precision. EAO, if you would like to take over, you can go ahead.

EAO: I would be happy to. So, yeah, in the conversation around this, my view of how we should split this whole mess of things we have got is maybe a little bit different from Jesse’s, but this is why we’re presenting it and sharing the discussion with you, so we can maybe get all of this advanced a little bit. As I’ve been looking at this, I see a lot of overlap in the use cases presented for the Measure proposal and for Decimal, but also a lot of divergences going in all sorts of directions. And then there’s also, in the background here, in particular for Measure, the smart unit preferences proposal, separate from this. In this context, I’ve been looking at what the actual use cases and goals are that we as a group have accepted for these Stage 1 proposals so far, and I am really coming to a conclusion that is somewhat shared, I think, with JMN: that the split we currently have is maybe not the best one. It’s close, but maybe not the best one. And maybe we would like to refactor a little bit how these proposals, and now also possibly the `Number.isSafeNumeric` proposal, interrelate with each other. When considering all of these, I think there are three, or maybe four, different proposals we ought to have. But maybe they all ultimately could work on a single thing.

EAO: And that single thing has, I think, a possible first step that unblocks and solves a number of the use cases and goals we have set out for these proposals, not everything by any means, but it’s also possible to work from there in different directions. That would be to have this relatively opaque class replacing Measure, called Amount.
That would not initially, in the first step at least, include anything about operations or conversions that you could do with it. But it would include, in addition to an opaque value, separate fields for the dimension and the precision that this value could have. If you could go to the next slide, please.

[Slide: https://notes.igalia.com/p/tc39-2025-02-plenary-decimal-measure-unity#/21]

EAO: Sorry for the weird gray background; that’s an artifact of the highlighter, I think. This is the idea of what we could have as a first step, building towards being able to do unit conversions, and decimal math, later. The idea here is to have an Amount with an opaque value that you can initially only get out as something like a toString, and then have this work, as intended for Measure, with `Intl.NumberFormat` formatting. It would include its own `toLocaleString`, and you could feed it to an `Intl.NumberFormat` format call and get a sensible thing out of it. One thing to note, by the way: it specifically has unit and currency as separate fields.

EAO: One of the biggest overlaps we have between the measure and decimal proposals is that both of them say we ought to have a good solution for how to represent money and monetary values in JavaScript. My biggest concern, driving towards maybe having only a single class, is that I find it would be confusing to a developer if, when they ask "I have got money, what should I be using?", there is no single answer. I believe that it would be simplest to be able to tell developers that if you have a monetary value, or something with a possible unit attached to it, then you want to work with an Amount. There might be operations on this later, as proposed for decimal: addition, subtraction and other operations. But there also could be the kind of conversion factors that you can have here.

[Slide: https://notes.igalia.com/p/tc39-2025-02-plenary-decimal-measure-unity#/13]

EAO: Effectively, if you could go back to slide 13, please: when I look at the issues here, the conclusion I come to is, first of all, that it is a positive feature if the result we come up with for doing math with real-world numbers gives an error when you try to add meters and watts together. And for the significant digits thing, I don’t think there’s any issue in how the operations work if we consider the significant-digits math and the actual-value math as separate operations. So the one place where I don’t quite agree with Jesse is that I don’t see that the current setup requires us to end up with at least two classes. I think we would be fine with just one.

[Slide: https://notes.igalia.com/p/tc39-2025-02-plenary-decimal-measure-unity#/21]

EAO: But furthermore, I think the initial step—now you can go back to this slide—will provide support for going in all of the directions we can imagine for the Measure proposal and for the Decimal proposal, and it would also solve the use cases put forward for measure. With a note, by the way: as I was reviewing this, I realized that I don’t think we have actually presented a really good use case for why unit conversions ought to be on Measure.
The smart unit preferences proposal does that relatively well, but for the measure proposal I think we have kind of just asserted that unit conversions ought to be there. We ought to do a better job of explaining why it is important for those to be available in ECMA-262 rather than just 402. But that’s it for my part of this.

JMN: Thank you, EAO; that’s about all we have today. I hope that we have sparked your interest in thinking about these proposals as being part of a shared space. The question is what to do for next steps, in terms of keeping them together or not. EAO mentioned, by the way, that there’s yet another proposal, smart unit preferences (https://github.com/tc39/proposal-smart-unit-preferences), that is also, I would say, part of this overall harmony discussion, as it were. And that’s it. We are happy to open it up for discussion. If I might suggest: we do have a TC39 chat channel for these topics, and there’s a regular call. I already know that there are kind of super-fans of this topic, and we already chat about these things. So if there are any comments from outside of the super-fan club, we would love to hear them.

KG: Thanks for the presentation. Just a quick note that part of this feels pretty weird: having decimal measures for imperial units. You might encounter a third of a cup; you don’t encounter 0.33 cups. That perhaps suggests this is not sufficient to represent common units, and we might further need a rational type, which maybe is a good idea. It does add a little bit to the space here. Something to think about for the future. That’s all.

EAO: A question to KG, and anyone actually, not necessarily for right now: it would be really interesting to hear of an actual place where the data for something like imperial units would be stored as something like fractions, rather than being stored as something like a decimal or a number that is then converted to fractions for display purposes.

KG: I’m not aware of any. But certainly, if I were building a recipe website, I would reach for a rational representation for cups, because if you quadruple a recipe that has a third of a cup, you shouldn’t get 1.333 cups as the output. That’s just weird. So if you want to actually manipulate imperial units, and preserve them in the way that humans are going to expect to encounter them, you do actually need to use a rational representation.

EAO: I agree. This is why I’m asking for a demonstrated real-world use case of "here, we have the data in actual fractions somehow, somewhere, and it is now clumsy and would be better with a real representation", because my suspicion is that the actual data for stuff like this is still going to be numeric decimal. But let’s go on.

JMN: Just as a quick response, I think one source that I’m familiar with that comes from this imperial world is cookbooks. They often use fractions to represent quantities. But surely there are others as well.

JHD: Often—I would say always, at least in the U.S., without exception.

SFC: It's a really good question. It’s also something that I think we should investigate more: does it fit in the display layer? That is, given the correct choices for the precision of the number, whether you could take the number and display it with fractions, even if it’s represented as a decimal inside the computer, is an interesting question to investigate. I agree that rationals would be a nice representation in this specific case.
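A minimal sketch of KG's recipe point, using a plain numerator/denominator pair (illustrative only; no such type is being proposed here):

```javascript
// Scaling 1/3 cup by 4: a rational pair preserves 4/3 (i.e. "1 1/3 cups"),
// while float math produces an awkward decimal.
const third = { num: 1, den: 3 };

function scale({ num, den }, k) {
  return { num: num * k, den };
}

scale(third, 4); // { num: 4, den: 3 } -> "1 1/3 cups"
(1 / 3) * 4;     // 1.3333333333333333 -> "1.33 cups" at best
```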
DE: Just agreeing with SFC here: CLDR currently lacks data and algorithms for number formatting with fractions. This is a contribution that the Unicode folks would be happy to have upstream. If we want to work on this cookbook problem, the natural place would be to start there; you kind of need it to work end to end. Until then, it’s reasonable for us to start with decimals, which have been prioritized for data collection because they come up in a lot of different cases. So we could consider making something like measure future-proof by being generic over units. But overall, the shape of what that will look like is pretty far from the present. Do you agree with that, Shane?

GCL: I think this topic about measures is well motivated enough that it could advance separately from decimals, and I think it would be useful for it not to be tied to decimals specifically, because there are reasons you might want to use other numeric types anyway. I wanted to put out there that I think this is a pretty useful thing, especially for durations and sizes of bytes, that would, I think, be very valuable in the language.

JSL: I also think, and this is what MF has raised, that how the units are defined is a clarifying question: is it a fixed set coming from somewhere, or extensible in some way? And the same goes for conversions: how are the conversion ratios fixed?

JMN: That’s also something that is up for discussion. Initially, CLDR is something that we would like to support; they also define conversions as a separate thing within the data that they provide. Currency is something that is a strong motivating use case for us. Even more generally, arbitrary units are also something that could be conceivable: pick anything you want and say what it means, and say what it means to convert from one thing to the other, or block conversion, I don’t know. I would say this is still an evolving topic.

EAO: Just noting that currently, for `Intl.NumberFormat` with unit formatting, there’s an explicit list of supported units. This is a subset of the units in CLDR, which is also effectively the source for any transforms between these units and their convertibility. But this does not necessarily need to limit what goes into a Measure or an Amount that supports conversions. I would say, though, that for the very initial part of Measure or Amount, I don’t think we’ve actually presented the use case for why that ought to be in 262, and it might make sense for that part of the whole work to be considered as part of an evolution of the smart unit preferences proposal rather than the Measure proposal.

SFC: This is one link to the topic of discussion here: https://github.com/tc39/proposal-measure/issues/10. If you have any background or thoughts on this, you’re more than welcome to chime in. It's issue 10 on the measure proposal repository.

MLS: I think this has been somewhat discussed. CLDR does have units, but I don’t know whether it does conversions between imperial and metric and so on and so forth. And since I have some time here: if you’re going to do fractions, I think you need to keep both the numerator and the denominator as values, because actually converting between decimals and fractions is troublesome, given loss of precision and things like that. So it seems like this is going to require a dependency on some other database, and a database that may not exist in standards form.

SFC: CLDR does have a specification and a whole table of conversion factors.
And presumably those are the ones we would use, although there are other databases as well, with different rules for handling things like rationals as part of the operation. The CLDR rules basically retain the rational throughout the whole conversion process and then flatten it after all conversion is applied, and things like this. This is a space that the CLDR people have thought about. But your input is very much welcome, again on that same issue, issue 10.

DE: This was a great presentation that laid out the problem space cleanly. Overall, having two classes, one for measure with precision and a unit, and one for decimal without precision, seems like the cleanest approach. We have seen clear use cases for arithmetic in previous presentations by Jesse, but arithmetic would be quite difficult and fraught when precision and units are included, even when it’s useful in some cases. We can go either way on whether measure is specific to decimal or generic over numeric types. I think this makes sense as already proposed, as two proposals. Maybe we want them to go to Stage 2 together, but as long as we’re developing them both with the other in mind, I think it makes sense to keep them that way. Whether we put something in 262 or 402 is an interesting thing to consider, but ultimately it’s practically editorial, and it shouldn’t really affect much about the way that the APIs look. So I’m happy with the diligent work done here on all three proposals. I hope we can advance them.

WH: I agree with Daniel here. The presentation today mostly neglected arithmetic, and arithmetic on measures would be very complicated. There are plenty of use cases where we just want decimal numbers and want to do clean IEEE arithmetic on them. It’s hard to define how square root works on measures so that you can do basic geometry.

SYG: I guess this was somewhat covered, but one of my questions is what we would have for the units. Are physical units what front-end JS web apps need? I'm not sure about the server side, versus things like CSS units or, you know, computer storage sizes and stuff like that. What are your thoughts on how we decide the set of units that ought to be included in the language?

DE: I think one of the first things you want to do with CSS unit values is calculations on them, which involve mixed units and fractions. I don’t think that’s something that we can cover in scope here. Lots of front-end code involves communicating with people about human-intelligible quantities, that is, decimal and unit quantities. Although CSS is an important thing to consider, I think it would be really difficult to do a good job with those. The way that the measure proposal is framed right now is in terms of arbitrary strings for the unit, so people could use it to represent CSS units. But I’m not sure it would solve most of the problems that people want it to solve.

KG: I mean, isn’t it also the case that the first thing you want to do with most measurements is calculations?

DE: I don’t know if that’s true in the same way. CSS calculations are relative to the window or the context where they occur, whereas a calculation like converting feet to meters is not relative to something. With CSS calculations you’re doing symbolic manipulation rather than actual calculation.

KG: I misunderstood your point, then.

DE: When I said calc, I meant the particular CSS operator.

KG: A lot of CSS units are not particularly relative. Like, vh is, but pixel is not.

DE: Pixel is kind of complicated also.

KG: Yes. It’s complicated.
A lot of units are complicated.

EAO: So I think we have multiple overlapping discussions here, spanning: how do we do arithmetic in general? What units are supported, and what unit conversions are supported between them? And I think this is all, to me, pointing out that we ought to be handling this whole stack as at least three different proposals: an initial proposal that introduces something like a Measure or Amount that solves the use cases we have presented for Measure so far; a second proposal, decimal, allowing for operations on the real-world values and other values that we want to allow for; and a third proposal that introduces possibly new units and unit conversions between them. The smart unit preferences proposal doesn’t quite do this at the moment, because nominally all it is doing is introducing a usage parameter for Intl NumberFormat. But its effects are what is leading us to want unit conversions to happen in a different way than just as a hack that you can get out of Intl NumberFormat. But I do think we need to refactor these proposals: for example, take account of all of the use cases and goals that we have for all of these proposals, and then decide which sets of those use cases can be solved in one clump, a second clump, and possibly a third clump, rather than requiring us to have all of this in this one intermingled conversation, as we have had on this topic in previous cases and this one as well.

DE: The proposals are factored. What do you see as the goals needed for refactoring? What do you see as wrong with the existing factoring?

EAO: The unit conversion stuff. If you actually look at what was discussed for Measure—I think at the October meeting—we did not actually agree then that unit conversions ought to be a thing that is supported. It was just asserted there that, given these other needs, therefore unit conversions must be included. And separately from this, for the smart unit preferences proposal that was introduced some years ago, there was also no discussion about whether unit conversions ought to be supported at all. So we ought to have a proposal that actually proposes that we have unit conversion support, rather than us just asserting that that ought to be the case. And I’m particularly calling this out because unit conversions on top of Measure bring in the question: if you convert a Measure value to a different unit, then what is the value expressed in the result? For that, we need some answer. Without unit conversions we can say the value is opaque. With conversions we need to have some representation thereof. That brings in the possible dependency on decimal, and that brings all of this into one complicated stack, which is why I’m saying we ought to have three proposals here: one for an opaque measure or amount, a second one for decimal operations, and a third one for unit conversions.

DE: Okay. That sounds consistent with the Decimal class and Measure class as proposed, while ensuring that we design unit conversion as part of 402, that we do a good job of that design, and that it aligns with the other two. Is that an accurate understanding of what you’re saying?

EAO: Somewhat yes.
There’s a strong desire—and this overlaps with the next presentation I will be giving, on stable formatting—that the unit conversion work ought not to be only a 402 thing, so that we do not end up with JavaScript developers, who will use any tool they’re given, reaching for the 402 tooling to do the unit conversions they want to do otherwise.

DE: Okay. I look forward to understanding that argument.

SYG: Are there applications that want something like the Measure class today, and if so, what are they doing today? That is the question for the champions.

SFC: EAO can address that.

DE: You’re funding the Measure proposal development. What made you fund it?

SFC: I'll prepare an answer, but in the meantime, EAO can shed some light.

EAO: So my most direct and somewhat possibly selfish interest here is that something like Measure or Amount unblocks a lot of the issues for Intl MessageFormat. It does this by kind of fixing a bug in Intl NumberFormat: right now, where we have a need to format a currency value or a unit value, we’re in this situation where we need to give the unit identifier or the currency identifier in the Intl NumberFormat constructor, and then give the numerical value that we’re formatting completely separately from this, in the `.format` method on the object. For localization systems that allow for a representation of NumberFormat options—for example MessageFormat 2, but not limited to it—this leads to a situation where it is possibly far too easy to introduce a localization bug into an existing implementation. I presented on this somewhat at the December plenary, but I could go into more detail if desired. I would also be interested to hear more from Shane and others who are interested in this work.

EAO: But overall, the gist of what we’re looking for is that we provide a way of representing, as a single thing, the value together with the unit or currency identifier that says what this thing is, rather than requiring these things to live separate lives.

CDA: We have about five minutes left.

GCL: Just a reply to SYG: I think probably most JavaScript programmers on earth have dealt with durations or timers and such, and probably a significant fraction deal with things like bytes and sizes of memory. Currency also seems pretty motivating, but probably has slightly different usage patterns. But I think all of these are very, very common things that existing programs use, just by, you know, bringing their own conversions, and it would be useful to have that in the language.

SYG: Isn’t duration solved by Temporal?

NRO: Yes. We should probably not have time units in the measure proposal, given that the Temporal API does a very good job of that already.

SFC: I didn’t expect to re-iterate the use cases of the measure proposal here, but I think BAN gave an excellent presentation at the Tokyo plenary in October where he set out all of the use cases, and I can reiterate those, if you like.

SYG: Just to be clear, I’m not asking for a distillation of how you would like it to be used. I’m asking to be pointed, if possible, to an example of what applications do today—in the way that Temporal was very strongly motivated, and it was very motivating, I think, to replace things like—what is the library called? Moment? Like, there was very clear demand in a bunch of userland solutions to solve this hard problem; therefore, it was a good idea to do Temporal. I would like to get a better grasp of what the ecosystem does today for its uses of other kinds of measures.
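As one data point on the userland status quo SYG is asking about, this is the kind of hand-rolled conversion-and-formatting code—here for bytes, one of GCL’s examples—that many applications carry today (a sketch of common practice, not any proposed API):

```js
// Ad hoc unit handling as commonly written today: the conversion
// table, the base, and the rounding policy are all baked in by hand.
function formatBytes(n) {
  const units = ['B', 'KB', 'MB', 'GB', 'TB'];
  let i = 0;
  while (n >= 1024 && i < units.length - 1) {
    n /= 1024;
    i++;
  }
  return `${n.toFixed(1)} ${units[i]}`;
}

console.log(formatBytes(123456789)); // "117.7 MB"
```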
CDA: We have a couple minutes left.

NRO: I already spoke to this by saying how we would like the proposals to be structured. We didn’t hear other opinions, but actually among the champions of the proposals there is significant disagreement on the direction to proceed, with some people preferring a single proposal and others preferring various different ways of splitting. We have had, in the past, different levels of success with merging and splitting proposals—think of class fields, or modules—sometimes it worked well and sometimes it was not the best idea. I wonder if people that were involved in championing those proposals have opinions on how they would prefer this extended champions group to proceed: whether as a single proposal or by keeping the various pieces separate.

DLM: I would like to say that we had some significant internal discussions about these proposals, and we’re definitely skeptical about having the decimal and measure proposals merged. I think that there is, as SYG was alluding to, a varying amount of evidence as to the utility of and demand for the different proposals in the ecosystem. And in particular—I would say personally, and I would have to confirm with my team—I think we would be very skeptical about unit conversions, and, as EAO was saying, a separate proposal for them would be a good idea. In general, yes, I would recommend that the champions not merge these proposals. Thanks.

MF: I think that the proposals cover different enough space that they need to be individually justified, and I don’t want a potentially strong part of a proposal to carry the weaker part. I want each to stand on its own. I would rather have them separate.

CDA: Okay. We have less than a minute left. Shane, can you be brief, please.

SFC: Yeah, I would like to have more time to discuss this; I think it’s a very important point. I have definitely been one of the strongest advocates for, you know, keeping these proposals together, and I think the reason is that the whole is greater than the parts. I think the vision of having a unified solution to how you deal with numerics in JavaScript—including things like decimal values of arbitrary precision, units of measurement, and so forth—is very strong as a whole. It gives a very easy on-ramp to localization of values, and a very easy on-ramp to being able to represent, store, and transmit units of currency and measurement, similar to how Temporal gave us a string format to do this and talk to each other easily. I think that separating these proposals puts them into boxes that do not deliver the same value that we could get from a single proposal. The champions agreed before this presentation that we wanted to hear feedback from the committee, but given that no one has said this yet, I thought it was important to bring this point up: some might feel that measure by itself might not be motivated, and decimal by itself might not be motivated, but if you put them together, I think the union of the proposals is quite strongly motivated.

CDA: We are past time.

WH: I don’t see a single unified proposal working for this. If you want to do arithmetic on decimal numbers, you shouldn’t have to worry about unit conversions. The proposals are distinct enough that they should stay separate proposals. Now, it is very useful for the proposals to coordinate with each other.
But they should not be one unified proposal, because arithmetic is so different from some of the other things we are talking about here.

### Speaker's Summary of Key Points

* We broached the possibility of merging the two proposals, given their conceptual overlap.
* We also argued that at least two classes (decimal and measure) are needed, and possibly a third.
* We asked for guidance from the committee about how to deal with these proposals, procedurally, given that they are, on the one hand, clearly distinct, while also having a strong overlap.
* EAO presented a concrete suggestion for a next step, arguing for an opaque “amount” (to be understood as a synonym of “measure”).

### Conclusion

There is little to no support for outright merging of the proposals from outside of the champion group. There was some uncertainty about use cases for Measure. Adding conversion between units (measures) is regarded as a secondary/separate concern. There is apparent support for having at least two proposals, possibly three. There was concern that keeping the proposals separate might cause us to fail to see the value of the sum of the proposals.

## Stable Formatting update

Presenter: Eemeli Aro (EAO)

* [proposal](https://github.com/tc39/proposal-stable-formatting)
* [slides](https://docs.google.com/presentation/d/14KQA1Gyy0reIyouHtzp5ofYRrcwRjkY6GajeknLWhg0/edit?usp=sharing)

EAO: Stable formatting is a Stage 1 proposal that was advanced to that stage in 2023. The motivation, to start with, is that we have places, in particular under Intl, where the available APIs offer capabilities that are useful for non-localization use cases—for locale-independent things—and also, to some extent, for testing, or that could be useful for testing. Because right now, as it is defined, the output of any of the Intl formatters can be anything: any string, or any array of formatted parts. We have no way of validating that any of these things work as they are. This has led us to a situation where we offer capabilities that developers do abuse, which means they are kind of living dangerously, because we might change the formatting at any time. But on the other hand, because developers are doing this, it becomes very difficult to change any of the details about how, in particular, en-US formatting happens, because the parsed output there is used for things.

[Slide: https://docs.google.com/presentation/d/14KQA1Gyy0reIyouHtzp5ofYRrcwRjkY6GajeknLWhg0/edit#slide=id.g32d75ca01ae_0_101]

EAO: So these are the sorts of things we’re talking about. For example, for now—before Temporal is available everywhere—if you want to format a date using year-month-day, you can do it in JavaScript directly by doing something silly: using a locale like Swedish, which happens to use year-month-day as its date format. Right now it’s also possible to use the u-ca tag, e.g. `en-u-ca-iso8601`, to get that formatting; it’s not clear whether that stays stable as well, or whether it ends up with different separators being used. Another example: if you want to format a compact number using SI metric prefixes, you can almost get it to work using English, which happens to give you a capital K for a thousand—but for a billion, it uses the capital B rather than the SI capital G.
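For illustration, the hacks just described look like this in code; the outputs shown are typical of current ICU-based engines, but none of them are guaranteed—which is exactly the problem:

```js
// Year-month-day via a locale that happens to use that pattern:
new Intl.DateTimeFormat('sv-SE').format(new Date(2025, 1, 18));
// "2025-02-18" — today, in most engines

// Compact notation via English, which happens to use "K" for thousands:
new Intl.NumberFormat('en', { notation: 'compact' }).format(1200);
// "1.2K" — but 1.2e9 gives "1.2B", not the SI "1.2G"
```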
EAO: And then also, not just the formatters: the Collator and Segmenter on Intl have capabilities that are only available via the root locale. Right now you can get at those capabilities if you happen to use a locale like English, which does not override the collation with any customization, but that will not be the case forever. We have the locale-dependent APIs being used for locale-independent reasons. This is not really that great.

[Slide: https://docs.google.com/presentation/d/14KQA1Gyy0reIyouHtzp5ofYRrcwRjkY6GajeknLWhg0/edit#slide=id.g32d75ca01ae_0_106]

EAO: The Segmenter is another example: there is a note defining the general default algorithm, but it is still recommended that tailorings of it be used. I think the ICU4X implementation also uses this locale-independent algorithm when segmenting. So how do we fix this? How do we make the situation better?

[Slide: https://docs.google.com/presentation/d/14KQA1Gyy0reIyouHtzp5ofYRrcwRjkY6GajeknLWhg0/edit#slide=id.g32d75ca01ae_0_111]

EAO: When this was accepted for Stage 1, I presented two different solutions that we could approach. The first solution would be to identify all of the ways in which the ECMA-402 Intl APIs can be and are being abused—where we are providing capabilities that are not available in 262—and then find ways to make those available directly in 262. For dates and date formatting we have Temporal, of course. But for most of the other cases that we can think of, there is no clear answer: how do you work with durations? How do you get number formatting to be customized? How do you do segmentation and collation? What if you need formatting to parts, for instance? Formatted parts are something that exists only on the Intl side. So this is a direction we could go in: look into these solutions and fine-tune things for each of them. The benefit of doing this would be that it would not introduce into ECMA-402 any new non-localization use cases beyond those it currently has.

[Slide: https://docs.google.com/presentation/d/14KQA1Gyy0reIyouHtzp5ofYRrcwRjkY6GajeknLWhg0/edit#slide=id.g32d75ca01ae_0_116]

EAO: But the second possibility is to add a null locale to ECMA-402. This would not add any new APIs, but it would allow for the use of the value null specifically as a locale identifier—that’s currently an error—and it would be canonicalized to the code ‘zxx’, which is used for "no linguistic content; not applicable". It would be nice if we could use ‘und’, but that is effectively an overloaded term: CLDR has a clear behavior for ‘und’, and its behavior is relatively well defined in a number of different environments. ‘zxx’, by contrast, is not defined pretty much anywhere, but it is a valid locale code, so defining behavior for ‘zxx’ would not conflict with any existing definition. Then what we need to do is define explicitly what happens when you use the null locale in the Intl APIs, in order to make those APIs provide this utility and to resolve the abuses of those APIs.

[Slide: https://docs.google.com/presentation/d/14KQA1Gyy0reIyouHtzp5ofYRrcwRjkY6GajeknLWhg0/edit#slide=id.g32d75ca01ae_0_166]

EAO: That’s what I’m here to ask you to accept: the second solution, adding a null locale to 402, and explicitly defining what it means when you use a null locale—for this to be the direction in which to start working on what Stage 2 of this proposal would look like.
Now, for the rest of this presentation I’m going to run through a draft of what the Intl APIs would look like with a null locale. This has been worked through with TG2, and it is a bare-bones sketch of ideas, but at least a starting point for what would be useful for users, what would not add data size requirements, and what would be—or should be—implementable relatively easily.

[Slide: https://docs.google.com/presentation/d/14KQA1Gyy0reIyouHtzp5ofYRrcwRjkY6GajeknLWhg0/edit#slide=id.g32d75ca01ae_0_171]

EAO: So, as I just said, this is what follows. The Collator with a null locale would use the CLDR root collation. There’s a little bit of variance here because of exactly how the browsers currently behave. And this is another thing I should note: yes, this proposal is called stable formatting, but when talking about APIs that consume localized content—the Collator, the Segmenter, and upper- and lower-casing—these APIs are not necessarily completely stable. What is presented here is effectively the stablest possible thing for them that is also useful, allowing, at least for now, the same sort of behavior to happen in the different environments where this code is run.

[Slide: https://docs.google.com/presentation/d/14KQA1Gyy0reIyouHtzp5ofYRrcwRjkY6GajeknLWhg0/edit#slide=id.g32d75ca01ae_0_179]

EAO: These are in alphabetical order. For DateTimeFormat the idea is to match as closely as possible whatever Temporal does. Because `Intl.DateTimeFormat` goes a little bit beyond the formatting you can get out of Temporal, we do need to define exactly how that works in each of those cases and what the output is. Details, details, details.

[Slide: https://docs.google.com/presentation/d/14KQA1Gyy0reIyouHtzp5ofYRrcwRjkY6GajeknLWhg0/edit#slide=id.g32d75ca01ae_0_201]

EAO: For DisplayNames—which, as a refresher, gives you a localized display name of, for example, languages and regions—there is already a behavior for falling back to the requested code or to `undefined`, depending on whether the `fallback` option is set in its constructor. For `Intl.DisplayNames` with the null locale, we would always fall back.

[Slide: https://docs.google.com/presentation/d/14KQA1Gyy0reIyouHtzp5ofYRrcwRjkY6GajeknLWhg0/edit#slide=id.g32d75ca01ae_0_207]

EAO: For DurationFormat the output would be an ISO 8601-2 duration. This is a string that starts with the capital letter P, and then there’s a specific format for the rest of the output. This, for instance, is used in the HTML time element, and possibly elsewhere as well.

[Slide: https://docs.google.com/presentation/d/14KQA1Gyy0reIyouHtzp5ofYRrcwRjkY6GajeknLWhg0/edit#slide=id.g32d75ca01ae_0_220]

EAO: For lists, we would ignore the type option—the one defining whether the list is formatted as an “and” or an “or” type of list—and the list items would be separated either by a comma followed by a space, or by just a space.

EAO: For `Intl.Locale`, which gives information about the locale, I haven’t sketched out what this would look like. That would need to be worked out.
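A sketch of what the draft behaviors above might look like in practice. Note that passing null as the locale is an error today (it throws a TypeError), so these outputs are hypothetical, per the slides:

```js
// List items joined with a comma and a space, ignoring the `type` option:
new Intl.ListFormat(null).format(['a', 'b', 'c']); // "a, b, c" (hypothetical)

// Durations as ISO 8601-2 strings:
new Intl.DurationFormat(null).format({ hours: 1, minutes: 30 });
// "PT1H30M" (hypothetical)
```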
[Slide: https://docs.google.com/presentation/d/14KQA1Gyy0reIyouHtzp5ofYRrcwRjkY6GajeknLWhg0/edit#slide=id.g32d75ca01ae_0_233]

EAO: For NumberFormat—NumberFormat does a whole lot, because we have kind of overloaded it. The whole idea would be that the numeric part of the output would also satisfy the StrNumericLiteral grammar. But then, because you can do, for example, currency or percent formatting, these need definitions. For currency, the output would be the numeric value, followed by a space, followed by the ISO currency code. Note that this is different from what English usually does, but most locales put the currency code after the value, and all of the other things proposed here put the code or other identifier after the value, so this matches that. Likewise with percent formatting, it would put the percent sign after the value.

[Slide: https://docs.google.com/presentation/d/14KQA1Gyy0reIyouHtzp5ofYRrcwRjkY6GajeknLWhg0/edit#slide=id.g32d75ca01ae_0_240]

EAO: Unit formatting needs a little bit more definition, specifically for the short form of unit formatting. We can define a table of the identifiers that would be printed; we can derive it from the SI units and units close to those—using, for example, l for liters and a capital TB for terabytes. The short unit identifiers will need a separate table. Also note that compound units—for example, meters per second—would work with a slash between them.

[Slide: https://docs.google.com/presentation/d/14KQA1Gyy0reIyouHtzp5ofYRrcwRjkY6GajeknLWhg0/edit#slide=id.g32d75ca01ae_0_246]

EAO: We also have `notation: ‘compact’` as a thing. This would use the SI metric prefixes for the values that it affects.

[Slide: https://docs.google.com/presentation/d/14KQA1Gyy0reIyouHtzp5ofYRrcwRjkY6GajeknLWhg0/edit#slide=id.g32d75ca01ae_0_261]

EAO: PluralRules would always return the other category, no matter what options you give it and what input you give to select or selectRange.

[Slide: https://docs.google.com/presentation/d/14KQA1Gyy0reIyouHtzp5ofYRrcwRjkY6GajeknLWhg0/edit#slide=id.g32d75ca01ae_0_268]

EAO: For `Intl.RelativeTimeFormat`, the result would also be an ISO 8601-2 duration, but prefixed with a plus sign or a minus sign to indicate the relative direction of the formatted time. This is valid in ISO 8601-2 specifically.

EAO: And `Intl.Segmenter` would use UAX #29 segmentation with extended grapheme clusters. On some details of the Segmenter and Collator, there’s an issue open on the repo for defining more exactly how that goes.

EAO: Then we also have a couple of places where we need to define behavior for `Array.prototype.toLocaleString`: there, the definition would be that we use the comma as a separator.

EAO: And for the toLocaleLowerCase and toLocaleUpperCase string methods, we would use the Unicode Default Case Conversion.

EAO: And that’s it. So, a whole lot of somewhat Intl-specific implementation details here that we would need to polish up and put together into a Stage 2 proposal. But the key thing that I’m here to ask is: would it be okay to start proceeding with this proposal in the direction that I’ve sketched out here, or is there a need to either not proceed, or to try to proceed in the other direction identified for this proposal? When I raised this in TG2—it was last week or two weeks ago—that group gave, I think, quite good support overall for “please let us proceed with the null locale direction on this one”. But that’s it for me.
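Continuing the hypothetical sketch for NumberFormat under the proposed null locale (exact digit handling is among the details still to be defined):

```js
// Currency: value, then a space, then the ISO 4217 code after the value:
new Intl.NumberFormat(null, { style: 'currency', currency: 'USD' })
  .format(1234.5); // e.g. "1234.5 USD" (hypothetical; fraction digits TBD)

// Units: short SI-derived identifiers, compounds joined with a slash:
new Intl.NumberFormat(null, { style: 'unit', unit: 'meter-per-second' })
  .format(3); // "3 m/s" (hypothetical)
```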
DE: So this is really interesting. This is a lot of stuff for us to define. I was imagining that such definitions would come from CLDR? Have you discussed it with CLDR upstream—are they interested in defining data for this?

EAO: Not directly. The intent would be to explicitly define this behavior in 402, in order to ensure that an upstream change in CLDR would not effect a change in our behavior. But it ought to be possible, indeed, to have the CLDR data for ‘zxx’ provide the behavior as described here, so that the same pathways that we use for other locales right now could also be used for ‘zxx’, the null locale.

DE: As you know, Unicode has many kinds of stability guarantees. I would prefer this be defined as something at the Unicode Consortium level with stability guarantees, with us using that downstream. If we have notes or normative text in the ECMA-402 specification that indicates or repeats this information, that doesn’t seem bad, but I would prefer that the data be driven from the Unicode Consortium, unless they tell us that they don’t want to be responsible for it and would prefer that we be responsible for it.

SYG: I have a clarifying question. Possibility number 2 does not require CLDR currently. Yet the null locale stuff is still mostly accessed via Intl. For an implementation that does not ship Intl, would it still not have access to the null locale?

EAO: That, I think, depends on the implementation and how it decides to handle the requirements that we put on supporting 402. I believe it is currently technically valid for an implementation to support 402 but only for a very, very limited subset of locales. For example, an implementation supporting only the null locale would, technically, support 402.

SYG: I see. Okay, thanks.

RGN: That was a great lead-in, because the kind of environments that Agoric cares about would basically patch in only support for stability when it comes to formatting. This would allow introducing Intl specifically for the deterministic behavior that we’re talking about here. I also appreciate that you drew a distinction between locale-independent versus stable, and I have a strong preference for the latter. It’s not clear to me that we get a lot of the benefits of this proposal if the behavior is locale-independent but can change over time, because then we’re right back at not having reliable consumption of the output. So, strong support for it. I appreciate the distinction, and I specifically want stable, not just locale-independent, behavior.

KG: So this seems like a good thing in general. And thank you for giving the presentation about what the behavior would be for each of these things, or most of them. It seems like for perhaps half of them there is some obvious canonical choice. `Array.prototype.toLocaleString` can do the same thing as `toString`; that’s fine. But for some of these—putting the currency code after a space after the quantity of currency—to what extent is that the canonical answer for how to do stable currency formatting? I would feel uncomfortable making arbitrary choices for any of these and assigning them to the sole canonical locale. If we’re going to be making arbitrary choices, I would be happier to have some other way of asking for the particular behavior that isn’t locale-sensitive, rather than just ascribing canonical status to a particular region-dependent choice. So my question is: are all of these in fact canonical already, or are they arbitrary choices that we’re making?

EAO: Many of these are effectively arbitrary.
Some are canonical: for instance, the duration formatting using ISO 8601 duration strings is effectively canonical. But for the specific thing you mentioned, currency formatting, there are common practices; and when you look at the common practices across all locales, the common thing is to have the value, followed by a space, followed by the indicator. Noting specifically, though, that because these APIs do support formatted-parts output, it would be relatively easy to consume that output—in particular if the parts come in a known and well-defined order—and rearrange the parts for presentation if that is desired.

KG: Thanks. I guess I’m fine with that. I do feel a little weird about declaring particular things to be canonical, but I see the value in it as well.

SFC: There we go. I want to talk a little bit about the use cases here and how the use cases overlap but also diverge. There are three reasons why I think this type of proposal is motivated. The first is, of course, stable behavior—that’s the title of the proposal: we have seen a lot of issues previously where developers expect Intl APIs to behave a certain way, and then, when that behavior changes because of language and locale data changes, their code breaks. So obviously that’s a use case. A second use case is a certain anti-pattern that we always discourage but that I see people do all the time: I have an application, I take screenshots of that application and check that the screenshots are consistent—and then you upgrade Chrome and they break. I call this the testing use case, with screenshots as an example. If you have an application that is fully plugged in with Intl and you then just switch the locale to the null locale, you get a certain invariance that you can rely on for testing purposes. And a third use case that has been raised in the TG2 meetings is the idea of wanting access to the root collation and root segmentation—rules that come directly from the Unicode standard and are not locale-dependent—which is currently not possible, because to use these APIs you must specify a locale, and any locale is subject to tailorings. The ability to access root collation and segmentation is not currently available, and this proposal could make it available.

SFC: Now, one issue is that all three of these use cases are somewhat solved by this proposal, but all three could also be solved in other ways. I personally think this proposal, given that it addresses all three, is a fairly narrow solution, and the fact that it’s a narrow solution is why it’s a decent solution. But it also means we have interesting questions about which of these use cases we prioritize. For example, take the earlier point about “number space unit”; or, for durations, the question of whether we use the ISO 8601-2 duration format. That format is really useful, and it serves the stable-behavior value proposition really well. Does it serve the testing use case well? Maybe not, because I don’t know of any locale that would display durations in quite this form. It certainly wouldn’t be appropriate for testing things like whether you have enough space to display your duration, especially in a long form. Pseudo-localization is a better solution for that problem.
If you have this in the language, people will use it for testing even if it’s not the right solution. We can make it closer to the right solution for that use case by doing “number space unit”, for example. I guess my conclusion to this comment is that this is a proposal that solves a lot of different problems, and it might be good to have a guiding principle about which problem we consider to be the main one we’re trying to solve, and then use that to guide the specific behavior we implement for each of these cases, since we do have to look at each specific case.

RGN: Speaking to your final question, I support approach number 2, the null locale/pseudo-locale type of representation. It makes a lot of sense, and it’s something I can see using. Thanks.

JGT: So first of all, I think this proposal is great. I’m really happy to see it. To follow up on what SFC was saying, it addresses a lot of challenging cases today. The only concern I have is that it is pretty common to use undefined as a locale today when creating, say, an `Intl.DateTimeFormat`—you put undefined there only because later parameters like options need to be passed. For me at least, it is a little weird that undefined and null would have very different behavior. Maybe that makes sense to the folks in this room, because we’re really familiar with those differences, but I would worry that less experienced engineers would get tripped up by it. I was wondering if you considered alternate names that are strings—an actual locale name instead of null.

EAO: So, specifically, the string that is proposed to also work as an alternative to null is ‘zxx’. The reason why I’m proposing to also support null here is that ‘zxx’ is really hard to remember, and it is completely opaque as to what it means, whereas to a reader, an explicit null would probably more clearly indicate that “no locale” is the message being sent. But in a situation where there is a potential or a perception that confusion could occur, ‘zxx’ could be used to explicitly differentiate this from undefined.

JGT: Are we prohibited by the syntax rules there from using a string like ‘stable’ or ‘unknown’—something that doesn’t look like a locale and is more discoverable for people who have never seen it before?

EAO: There are possibly some issues—in particular, I’ve heard from the ICU4X team—with introducing something here other than what looks like a locale, because that would end up impacting a lot of what they can do in terms of optimizations around locales.

JGT: Makes sense. Thanks.

NRO: We talked about this internally at Igalia, and we have different positions; we do not share a single position. Personally, I find it weird for null to have different behavior here: null and undefined are treated the same by nullish coalescing and differently by parameter defaults, and we should really try to avoid adding more cases where they differ. But on the other hand, I understand EAO’s point that ‘zxx’ looks like a random string—you are only going to know that the string exists if you know how the various ISO 639 codes work.

LCA: (via queue) +1

SFC: Just to note, we definitely discussed in the TG2 meetings how the undefined value currently has behavior that is basically equivalent to the string ‘und’. That is definitely not something that I think anyone actually intended, but it’s currently the web reality in all major browsers.
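A sketch of the status quo SFC describes, alongside the proposed addition; the first two results reflect current browser behavior as reported here, and the last is hypothetical:

```js
// Today, in shipping browsers, both of these fall back to the host locale:
new Intl.NumberFormat(undefined).resolvedOptions().locale; // e.g. "en-US"
new Intl.NumberFormat('und').resolvedOptions().locale;     // e.g. "en-US"

// Proposed: null, canonicalized to 'zxx', selects the stable behavior instead.
new Intl.NumberFormat(null).resolvedOptions().locale;      // "zxx" (hypothetical)
```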
KG: Does that string do anything? Is there a locale corresponding to the string?

SFC: Well, there is, but no browser ships it. So the ‘und’ locale falls back to the host locale, which is also what the value undefined does. There’s a correspondence there. And then the null locale would correspond to the other special string. The reason undefined is special is that it maps to a specific value that also starts with the same three letters, whereas null does not map to anything like that. I don’t know if that changes anyone’s position—I’m just making an observation to add to the puzzle, and I’ll throw it out there.

NRO: This is a random idea, but if we don’t want to do null because of the confusion with undefined, and we worry about the string, can we use something else—a well-known value on Intl, like a well-known symbol, say `Intl.StableFormat`, that we pass as this first argument? An example worth considering.

SFC: My comment here is that the proposal is to add the ‘zxx’ locale, right? And then null is basically an alias for the ‘zxx’ locale. If you look at it from that perspective, I don’t see there necessarily being a problem with just adding an alias for it.

KG: I didn’t understand the answer about why we wouldn’t use longer strings. Something about optimization in ICU4X or whatever, but that seems like it’s lower down the stack. Like, surely in the JS part of this, before calling into the library, you can translate that string to whatever other string you want. It seems like we should consider that space still open. If we think that the string ‘stable’ is more clear and discoverable than anything previously discussed, and it just requires a translation at the boundary to some other thing that the underlying Intl library understands, that seems like it might be the best solution. I would like to consider that space open. I don’t think we are necessarily deciding on the exact way of requesting stable formatting right now, but I would like one of the possibilities to include particular non-locale-looking strings, unless there’s some other reason not to do it, or I misunderstood what the reason not to do it was.

EAO: So, on that one: right now we are in a world where locale identifiers are becoming more and more regularized, which means that the language subtag of a locale identifier almost universally uses two or three characters, with subtags after that that by now fit a very well-defined mold. This does, yes, support grandfathered tags where the language identifier is either two or three characters or five to eight characters—‘stable’, for example, would happen to fit in there—but it would be really great if we did not effectively introduce a requirement to support that sort of locale identifier into the world. Also noting that there is a need—as noted here for the Intl Collator at the bottom, which is why I moved to this slide—to possibly add subtags even on the ‘zxx’ or null locale, and having a longer string identifier in addition to ‘zxx’ would certainly create an expectation for the subtags to work on the longer-form string as well. If we do not want to go with something like null as an alias, something much more discoverable would be a well-known symbol—was it `Intl.Stable` that was previously suggested? But I would want to push back against a longer string identifier here as an alias for ‘zxx’.
KG: I’m not suggesting that you would introduce a new tag. I’m suggesting that one of the valid inputs for this would be a string which is not a locale tag, and which at the precise boundary of the API is treated specially—treated differently from any other string, and perhaps translated to a particular sentinel understood by the underlying library, translated to ‘zxx’ or whatever. I’m not suggesting the introduction of a new language tag ‘stable’; I’m suggesting that one of the inputs to this API be the string ‘stable’. It’s a different thing.

RGN: Responding to a point that SFC made earlier, I disagree that the ‘und’ locale is equivalent to the undefined input for locale, because ECMA-402 privileges and looks for the value undefined—it’s what you get, for instance, if you provide an empty list of locales. Whereas with ‘und’, at any point in time an implementation could start shipping and supporting it with behavior that, as of that point, would be different from undefined. What undefined does in ECMA-402 is defer to the current default locale; ‘und’ is not guaranteed to have that behavior.

PFC: I’d like to build on what DE said a while back about recommending that this null or ‘zxx’ locale be defined by CLDR as part of that dataset. I think there’s a really good reason to require that it is defined as part of the locale dataset. The reason is that in Test262 we have been very interested in how to write locale-sensitive tests for functions like toLocaleString and the classes that live on the Intl object. Locale data can be updated as the understanding of best practices changes, so it’s difficult to find a balance between writing your tests to compare to a certain output while also anticipating that the desired output might change over time as the data gets updated. This null locale would be very helpful for writing tests like that. But if we defined it in the spec as a special case, so that the formatting was defined outside of the CLDR data tables or whatever, then there wouldn’t be much point in using it for testing, because it would be testing a separate codepath in implementations. So I personally think it would be better to require that this be defined in the data source, outside of the spec.

EAO: As a reply to that, just noting that I do believe that the current sketch of a proposal for these APIs—the formatting behavior presented here—should all be representable and implementable in CLDR. The intent of the direction presented here is for us to define what makes sense for JavaScript—how JavaScript should work in each of these cases—and, for the implementation side of that, either to, yes, go to CLDR and get agreement from them about those behaviors, or else ensure that it’s possible to overlay custom data on top of CLDR so that this exact behavior comes out of it, if it’s not possible to get that directly within CLDR with sufficient guarantees about stability. I do not believe that CLDR currently guarantees, for example, that the patterns and the formatting and so on for any locale are as stable as we want them to be for ‘zxx’. Hence my initial desire to have the behavior be defined in 402, but to have the implementation, yes, come through the same pathways that other formatting uses.

SFC: This is my comment about a stable Collator.
A comment that I think RGN made that I just want to be clear about: for `Intl.Collator` and `Intl.Segmenter`, one of the use cases is getting at the root collation and root segmentation, and it is worth noting that these are not necessarily one hundred percent stable. When Unicode adds new code points or emoji or scripts, the behavior here will change: text in a newly added script will sort differently than it did before, because previously those were unassigned code points. It can also be the case that Unicode—this is not CLDR, but Unicode—will discover something new about a script that previously existed; I know there have been a lot of changes going on with the Mongolian script, for instance, and the collation rules and segmentation rules might also change. I just wanted to ask whether that’s a concern. You know, one path you can take for collation is for the ‘zxx’ locale `Intl.Collator` to do lexicographic sorting on the UTF-16 code units. That is stable. But it’s also not the root collation that we would like people to be able to access. So I just wanted to probe whether that’s a thing we should be considering.

RGN: I think yes, and yes, it is a concern. What is valuable here, I think, is not access to the root locale but access to deterministic stability. Access to the root locale is itself valuable, but shouldn’t be mixed together with the concept of stability. So for collation, for instance, I would expect it to be strictly based on code point value and therefore not change when a code point shifts from unassigned to being associated with a character.

SFC: To follow up on that: do you feel the same about segmentation?

RGN: Yes.

SFC: UAX #29 segmentation will change, and the grapheme clusters will change.

RGN: So there are two different kinds of change there. One is that, because UAX #29 segmentation is dependent upon the classification of characters—what category they fall into—a new version of Unicode can change that, and that would have an impact on segmentation. That, to me, is just part of the progression of Unicode as a simple collection of characters and is not concerning, because there’s a whole lot of other things that come along with that, and you already have such a dependence in the form of regular expression property escapes. The second kind of change would be a change to the rules themselves, a revision of UAX #29. And for that, I would hope that, no, we would stick with stability: we would actually snapshot a particular revision of UAX #29 and commit to that for all time in this stable behavior.

SFC: It would be difficult to implement a forever-stable UAX #29 segmentation rule, but I think this is something that we can discuss later. We have time in the time box, and we can continue to probe this in the TG2 meeting.
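For concreteness, this is the UAX #29 grapheme segmentation under discussion, shown with today’s API—which requires picking some locale and trusting that it carries no tailorings:

```js
// Extended grapheme clusters keep multi-code-point sequences together;
// the results can shift as Unicode revises UAX #29 or adds characters.
const seg = new Intl.Segmenter('en', { granularity: 'grapheme' });
[...seg.segment('👩🏽‍🚀!')].map(s => s.segment); // ["👩🏽‍🚀", "!"]
```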
One thing that could be explored here possibly is introducing a way specifically for these Collator, Segmenter and toLocale{Lower,Upper}case would be the way for these to access the root locale explicitly using either the string und and what CLDR and Unicode use for it or some other method that we probably ought to discuss in TG2 further. + +RGN: Agreed. I strongly support that. And in particular support it in a way that is distinct from requesting stability. + +EAO: And if implementing something like that, in particular for something like `Intl.Segmenter` it is—there is utility I think, high utility in being able to access UAX#29 segmentation, but introducing a requirement at the spec level of always supporting a very specific version thereof, seems like it’s introducing a cost that is maybe not worthwhile. So for something like that, the `Intl.Segmenter` with a ‘zxx’ locale should be doing something different, which is a topic we ought to be discussing later in more detail. But none of the proposed things for the Intl APIs here is meant to be the final word, just the best guess so far at what ought to work and what would be useful for developers and users. + +RGN: To be clear, if it is not fully stable but partitioned in this way, then what you’ll see is that an environment which needs determinism would just exclude support for the unstable APIs. You would have for instance `Intl.NumberFormat` but not `Intl.Segmenter`. In a strict technical sense that is not actually supporting ECMA-402, but in practice that’s just what you get. + +USA: If that was it, then I think EAO you could dictate a summary. + +### Speaker's Summary of Key Points + +The alternatives were presented, and support was given for introducing a ‘zxx’ locale for stable formatting. The proposed alias null for ‘zxx’ was discussed, and some concerns were raised about its closeness to undefined, which has different behavior in this API. Alternatives to null as an alias that were proposed included a well-known symbol (`Intl.Stable` I believe it was), or a longer string identifier. The non-stable root locale behaviors on `Intl.Collator`, `Intl.Segmenter` and string toLocale{Upper,Lower}case were discussed as distinct from having stable behavior, and further discussion will be required for determining how to make those APIs be able to access the root locale rather than exhibit stable behavior. + +### Conclusion + +* Stable Formatting [PR #18](https://github.com/tc39/proposal-stable-formatting/pull/18) can be merged. +* The alias for ‘zxx’ will need further consideration +* If ‘zxx’ explicitly means “stable”, we may need another special locale identifier for the root locale. + +## `Error.captureStackTrace()` for Stage 1 + +Presenter: Matthew Gaudet (MAG) + +* [proposal](https://github.com/mgaudet/proposal-error-capturestacktrace) +* [slides](https://docs.google.com/presentation/d/1SFdS9n5JR7Jqz29s7ApvkqDOqOfPW-IaBR2orK828As/edit?usp=sharing) + +MAG: As the title says this is the proposal stage 0 to 1. `Error.captureStackTrace()`. So Chrome shipped `Error.captureStackTrace()` a long time ago. I don’t actually have an original date. But can I find reference of it as early as 2015. It’s been around for a long time. It was a Chrome only API and didn’t pose much in the way of web capability issue because if somebody tested in Safari let’s say, they would catch this problem. However, in August of 2023, JSC/WebKit shipped the method. Now in order to avoid web interoperability issues, we will ship it. 
## `Error.captureStackTrace()` for Stage 1

Presenter: Matthew Gaudet (MAG)

* [proposal](https://github.com/mgaudet/proposal-error-capturestacktrace)
* [slides](https://docs.google.com/presentation/d/1SFdS9n5JR7Jqz29s7ApvkqDOqOfPW-IaBR2orK828As/edit?usp=sharing)

MAG: As the title says, this is a Stage 0 to 1 proposal for `Error.captureStackTrace()`. Chrome shipped `Error.captureStackTrace()` a long time ago. I don’t actually have an original date, but I can find references to it as early as 2015. It’s been around for a long time. It was a Chrome-only API, and it didn’t pose much in the way of web compatibility issues, because if somebody tested in, let’s say, Safari, they would catch the problem. However, in August of 2023, JSC/WebKit shipped the method. Now, in order to avoid web interoperability issues, we will ship it too. I have an implementation and just need the time to unflag it. If three engines are going to ship it, maybe we should spec it. That’s why I’m here. The documentation of what this thing is and what it does is largely contained in the V8 stack trace documentation.

MAG: You give `Error.captureStackTrace()` some object—this can be any object—and it will apply a `stack` property to it, in some manner, that will give you the current stack. So you can just give it an empty object, and calling `Error.captureStackTrace()` will leave that object with a `stack` property holding the current stack. There’s an optional second argument, a constructor, that allows you to elide frames: when capturing the stack, no frames are included until this constructor has been seen. If you give it a function that hasn’t been called, you get an empty stack. There is some divergence in the implementations: for example, V8 installs a getter property, while JSC defines a string-valued data property. We are following JSC right now, but this is a point of discussion.

MAG: Should we treat objects which have an existing `[[ErrorData]]` internal slot any differently? Right now the answer is no. So if, for example, you have an error object and you delete the `stack` property—whether it’s an own property or on the prototype, you hide it—and then you apply `Error.captureStackTrace()`, you now have an own property named `stack`; you could then use a “maybe captured stack” getter to check what the original stack trace was, and whether it has been censored. This is an implementation decision that we could spec, if we decide to spec it. And that’s really it. I mean, this is a Stage 0 to 1 proposal. There is this thing; there’s not a lot of design space here. There exists an implementation that’s been around the web, Chrome-only, for the past decade, so we probably don’t want to change too much. As for how we should spec it, we have two different choices. Really, right now, the ask is: should we do Stage 1? With that, I open it up for discussion and questions.
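A minimal sketch of the API as just described, following the V8 documentation; the property semantics (data property versus getter) are exactly the open question here:

```js
// Attach a stack to an arbitrary object:
const obj = {};
Error.captureStackTrace(obj);
console.log(typeof obj.stack); // "string"

// The optional second argument hides frames: every frame above and
// including `helper` is omitted from the captured stack.
function helper(target) {
  Error.captureStackTrace(target, helper);
  return target;
}
const err = helper(new Error('boom'));
```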
I mean – + +MAG: Don’t really know the difference. + +JHD: If that’s the case, that also should mean we’re free to specify either choice and V8 and JSC should be able to match it. If people can’t tell the difference, it’s not a web reality issue. + +MLS: Depends on what level of difference developers are willing to tolerate. But now he wants to do some standardization of this and then we have some discussion of what implementations that are already shipping it would be willing to – + +JHD: I guess as a reasonable reply of why JSC would choose not to ship it because it’s just creating bugs, right? I get that. But that still tells me that there is some design space in fact for what it does and exactly how it works in the sense that there are three different sets of behavior in the three browsers right now and separately the fact that JSC shipped it for no reason except to motivate a benchmark is what it seems and it means it wasn’t users asking for it and then I don’t know if it was—how it was announced it was shipped or if anyone even noticed because it was news to all of us in this room. + +MLS: We announced it on our preview version all the things that we add. But we didn’t broadly announce it. It wasn’t a standard, right? + +JHD: That’s what I mean. I wonder if anybody even noticed. And so – + +MLS: I think they would notice—we would notice now because we get bugs because now you don’t have this anymore. + +JHD: Okay. + +DE: For this conversation between JHD and Michael there are multiple ways of defining and analyzing web compatibility. There’s the theoretical way used a lot in EA-6 and talk about intersection semantics and something is not supported by all the browsers or most of the browsers, it doesn’t really count for web compatibility. That was used to have Annex B 3.3 for sloppy mode function hoisting and something that was web incompatible and the browsers had to fix it. The way that browsers analyze web compatibility is more empirical and more what might actually be going on than this abstract intersection between all the browsers. One empirical thing that happens is a lot of websites target the mobile web that is WebKit plus Chromium, unfortunately. And if a function is there for a couple years in the mobile web and people can depend on it, it’s quite likely. So, sure, that’s theoretically unmotivated that the mobile web should be a thing to maintain compatibility around but it’s practical. + +DE: Overall the burden of proof when thinking about the web compatibility things is kind of I’m the one that wants to change something that is already shipping. Because it ends up being a lot of work for browsers to either investigate further whether something would be compatible or to ship it and see if something goes wrong. So our standard—I mean, our default position should be just not actually changing those kinds of things rather than the default position being you haven’t prove it’s really necessary or something like that. So I think it’s a little bit round about. That’s all. + +JHD: Regardless of the outcome of this proposal and this item, it would—I’d like to request that all the browser implementers in the room, if they’re going to ship something that’s not in like HTML, 262 or 402, the sets of standards that we would consider to be standards, please bring it to this body first just so we’re aware of it. + +MAG: That’s why I’m here. + +JHD: That would give us—thank you MAG. 
JHD: Thank you, MAG. That would give us the chance, before two browsers have shipped something, to figure out if there’s a thing we need to standardize, and to specify it in a way that avoids compatibility problems—make sure all the browsers ship it the same way, and make sure all the others are aware if it’s already in one. I guess I’m just asking that if you’re going to ship something that is non-standard, you kind of give us a heads-up. Not asking for permission—that’s not the dynamic—but just letting us know.

SYG: Which item is this—I just entered that one. I meant to go after MLS.

CDA: It was abrupt. I thought it might have been a response to what was being said at the time. So, if you want to talk about the benchmarks—

SYG: I hope to impress on the room, JHD, that I think it’s important to realize that benchmarks are not some flimsy reason things get done. They are one of the deepest, most fundamental reasons that anything gets done in JS VMs, and if you frame it as “it was just for a benchmark, so you can undo it”, that is almost never the case.

MLS: I’m next. What drives our development is benchmarks and features and security mitigations or security features. Then a somewhat rhetorical question: suppose we’re going to ship something for which there is a de facto standard, or we’re going to ship something that we came up with ourselves—something that is not standardized. Who do we communicate it to? If it’s JSC-only or Safari-only, which we do ship, and we then think it should be standardized and made accessible on the web, I think we’d be champions of that. So are you saying that at every plenary we should say, okay, in the last two months we shipped these features in JSC that are not part of 262 or 402 or anything else—is that what you want to know?

JHD: Yes, if it is a JS feature. If it is CSS, then the CSS group may be a better place to bring it. Yeah, if you—being any one browser—think it’s worth shipping a thing that’s not part of a standard, there’s a motivating use case behind it. It may not matter to the rest of the group, and maybe you’re trying things out and not aiming for standardization; that’s still fine. But the whole point of this sort of collaboration is that we can get input from perspectives that we may not have considered.

MLS: Should access and battle done the same thing?

JHD: Ideally, yeah.

JHD: I mean, I’m not asking for a requirement. I’m not saying everyone must do this. I’m requesting, in the spirit of collaboration, that there be maximum notification, especially when it’s early enough in the process that things can be caught or changes could be made. The deviation on the previous—or one of two slides ago—could have been—

MLS: In this case, you had Chrome shipping it for like ten years.

JHD: Right. So the existence of it in Chrome is not a surprise. And everyone knows that.

MLS: And in this case, it seems unwarranted that we would need to say, by the way, we shipped something eight years after Chrome shipped it—

JHD: I mean, there are three browsers. When two ship a thing, that’s a meaningful ship, and it would be helpful if we all had the opportunity to be aware of the possibility before it is too late.

CDA: I’m just going to interject real quick. I think that’s a great topic, the conversation about keeping folks in the loop. I see DE’s comment on the queue. It does strike me as a little bit orthogonal to this topic, so maybe it might be best to move on.

DE: Do you want to decide who is the next speaker, then?
CDA: I don’t want to completely stifle the discussion. But SYG, please go ahead.

SYG: I think if you think this whole discussion is not productive for now, I would rather we go on to the actual—the next new topic.

CDA: Just to be clear, I think it’s a really interesting topic. I just feel like it maybe deserves its own item—we’re here talking about `Error.captureStackTrace()`, and not the meta-problem of, you know, browsers shipping things that might be of interest to the committee.

MM: I just want to make a distinction that I think JHD and I are aligned on, which is that the thing that distinguishes whether something is, so to speak, a JavaScript thing versus a host thing is: “Is it a property or behavior on a JavaScript intrinsic?” In this case it’s obviously a method on the Error constructor, a JavaScript intrinsic, and I agree with JHD: there’s no requirement here, but it would be very helpful. And I just want to point out, regarding the thing that’s been in V8 forever, that supposedly is no surprise, and where therefore nobody would have particularly benefited by being informed or been hurt by not being informed: it turned out that V8 recently changed the stack property created by their prepareStackTrace machinery from an own data property to an own accessor, and that caused us, and companies that collaborate with us, to do a mad scramble around an introduced insecurity that took us by surprise, because nobody thought it was interesting enough news to inform people about. Turning it into an own accessor was a real disaster, and it is still a security problem for us that we cannot fix pleasantly outside of the language. So yes, please: if it’s on an intrinsic, then it is potentially interesting to many people here.

DE: We can skip. This is an interesting meta topic for later.

MM: So, just closing the loop on the same point in another way: with regard to `Error.captureStackTrace()`, I like the direction this is going, with it creating an own data property. I just want to say that our position is that we would not accept this proposal if it were creating an own accessor property. We like it as an own data property.

SYG: One of my greatest regrets is having to deal with `Error.prepareStackTrace()`. Do you want to standardize that as well, or just `Error.captureStackTrace()`, which magically makes a stack trace property?

MAG: Yup, I have zero interest in trying to pursue `Error.prepareStackTrace()`. In the absence of it becoming a web compat problem, I don’t plan to look at it.

SYG: Sounds good. Okay.

CDA: That’s it for the queue.

MAG: The implicit ask being for Stage 1: I would be willing to push this forward in the data property direction. Any objections, or support?

JHD: Sorry. I wanted to talk and didn’t put it on the queue. But can you go one slide further?

MAG: I will attempt to.

JHD: So the `Error.prototype.stack` getter is the next topic on the agenda, and I can talk more about that during that item. But I would say that yes is the correct answer here: captureStackTrace isn’t monkeying with internal slots, it’s just installing a data property.

MAG: That’s what our implementation does today, and it makes perfect sense. It is a design that people may have different opinions on, but I agree.

JHD: Yeah, with that on the record, Stage 1 is fine—unless web compat turns out to be a problem.

DE: I think this is a really good proposal.
There’s a lot going on with errors that is hard to unify between browsers and JavaScript engines in general, and the things that we can specify, we really should specify. It’s great to make capturing the stack trace faster. We have use cases inside of Bloomberg where we want to capture stack traces for errors thrown lower down. So I support Stage 1.

MM: So, yes, I support Stage 1, with the data property.

MAG: With that, I can cede my time, and I can jump ahead if we want.

CDA: Any objections to Stage 1 for captureStackTrace? You have Stage 1. All right.

### Speaker's Summary of Key Points

It sounds like most people would strongly prefer that (1) this produces a data property, and (2) this does not interfere with or touch the `[[ErrorData]]` internal slot, should it exist. I will pursue those choices.

### Conclusion

Stage 1 advanced

## Discussion about shipping non-standard features

CDA: We have like eight minutes left, and now I realize I sort of stifled a little bit of the discussion on the topic of vendors and JavaScript implementers shipping things and notifying the committee; if that’s a topic folks would like to return to, we do have a few minutes.

JHD: I will say I’m not proposing a process change. It’s just a polite request. There are people here who care and would like to hear about stuff—non-standard stuff—before it gets shipped. It’s fine if there are people here who don’t care and don’t think it’s valuable. If you don’t want to take up plenary time, then throwing the issue over the wall on the Reflector, dropping something in Matrix, or finding some other way to give a heads up would be a courtesy that is highly appreciated. I don’t know if there’s more to discuss beyond that.

DE: I think DRR has had a good model in TypeScript of sometimes explaining to the committee what kinds of features are coming. For various JavaScript supersets—JavaScript with extra APIs or extra syntax—it’s useful for us to know what’s going on, whether it’s before or after shipping. Obviously earlier is kind of nicer, but sometimes it feels too early; of course that’s up to whoever is doing the presentation. It’s really important that this is not understood to be a time to object. Otherwise, we’ll just scare away presenters. But even though traditionally this feels a little bit off topic, because we’re always discussing proposals advancing stages, just having presentations about what’s going on will be really helpful for the committee.

CDA: Any other comments before we move to the next topic?

MAG: I am curious, for JSC and V8: are there non-standard things that you’re shipping or planning to ship? At least for us, for JavaScript, the only non-standard stuff we’ve got is very internal-facing, so it’s not exposed to the web; it’s exposed only to developers within Mozilla. Do you have plans to ship non-standard stuff, whether openly or behind trials and other boundaries that stop it from escaping containment?

MLS: We typically don’t ship non-standard features. We tend not to do that. Most of the work we do is standard features, security mitigations, and performance tuning. It’s rare that we ship something that is not standard. SYG had a comment that if you want to see what is going on, there are email lists for both Chrome and Mozilla, and there are the Safari Technology Preview (STP) release notes—we make sure this change and that change go into the release notes, and every two weeks we get to add things to them: “I understand what this is; is this okay? Yes.
Okay, can you describe this in a way that makes sense?”

SYG: There’s nothing new on the pure JS side that we’re planning to ship that is nonstandard. At this point I think anything new that has observable behavior poses too great an interop risk. That said, V8 shipped stuff in the distant past that we continue to live with, like captureStackTrace, and there’s also, I think—what is it called—`v8.BreakIterator` or something, which was superseded by `Intl.Collator`, that we would love to remove, but unfortunately people still use it. So there are examples of things in the distant past. But we’re not planning to ship anything new that may have any observable behavior that would pose any interop risk. An interesting thing that we may ship, and that is in an origin trial right now, is this compile hints thing that is purely for hinting when to parse something to improve startup speed. There is nothing observable going on there. This is a thing that my colleague Marja (MHA) presented in Tokyo, if you were there.

RGN: `Intl.v8BreakIterator` was superseded by `Intl.Segmenter`, but still exists.

SYG: We would love to unship it, but we have to wait for the use counters to go down.

DE: So I’m glad you presented on parse hints. Even when something like parse hints is expected not to have interop risk, it can still be interesting and helpful for everyone to present it to committee. I hope that as this or other features evolve, you can bring them back to committee for future discussion. I also note that parse hints very much need tooling adoption to be effective, and TC39 is a great way to be in touch with tools and get visibility.

JHD: Yeah, to completely echo everything DE just said: the whole point of open source is that the more eyes see a thing, the higher the chance that problems will be caught and things will move in a better direction. There’s obviously a “too many cooks” check and balance there, but, yeah, I would love to see more early collaboration, even about things that have no interop risk or are not expected to be used in other engines.

### Summary

The committee discussed preferences for notification when implementers are shipping features that don't currently exist in the language.

## Error Stack Accessor

Presenter: Jordan Harband (JHD)

* [proposal](https://github.com/ljharb/proposal-error-stack-accessor)
* no slides

JHD: So, the error stack accessor. I have had the larger error stack proposal going for nearly a decade now. Some of the feedback I got the last time I brought it—I believe that was the last in-person plenary, or perhaps the previous one—was to try and split it up, so that each piece (the “standardize existing stuff” piece and the “add new capability” piece) could be discussed, implemented, and advanced separately. I’ve done that. This proposal is attempting to standardize effectively only the stuff that’s already there. The spec here is hopefully pretty straightforward. Basically it’s an accessor property on the prototype; it doesn’t live on individual errors. The getter throws if it’s called on a non-object. For web compatibility, if it is called on a non-error object, it returns undefined. The contents of the stack string are implementation-defined, “implementation-defined” being the magic spell by which browsers can keep doing exactly the same thing they’re already doing, without trying to step into that minefield.
The setter, for the same web compat reason, throws on a non-object receiver, and, if the receiver is an error object, sets an own data property on the error instance with whatever value you pass into it. So it shadows the getter. The getter will continue to work: if you borrow it and `.call()` it on the error object, you still get the original stack. That is how all accessor properties on prototypes that reveal internal slot information work in the language: when there’s a shadowing own property on the instance, the borrowed accessor still reveals the underlying value, and that is important for the language. That interacts with captureStackTrace: if you use captureStackTrace to provide an alternative stack by eliding frames, that is still just a shadowing own property over the getter, and the getter can pierce through that and return the slot value—or, you know, it’s not actually a slot value, because it’s not stored in the `[[ErrorData]]` slot. But that’s also a bit of a hand-wavy thing, because it’s very complicated to try and figure out how one would store that thing in the slot without also having to describe how one constructs it and what its contents are, and that is something that separate, future proposals should be focusing on. So I’m keeping that out of this one, to try to meet the feedback I got about splitting up the proposals.

JHD: There are still some open questions that will need resolution before advancing beyond Stage 2. The answers to them will be some combination of “do the research” and “what is the union of what browsers already do?” What would be the ideal behavior? Is it possible—like, web compatible—to change to the ideal behavior if it’s different, and are browsers willing to make that change in that case? Those are a lot of ifs, which will likely result in it just more or less matching what the majority of browsers already do. But these are perfectly acceptable and expected open questions that can be resolved within Stage 2.

JHD: So I am hoping to advance to either Stage 1 or 2. And I would love to hear any thoughts on the queue before I ask for that.

DLM: So we started collecting some telemetry on what was proposed a few weeks ago, and the initial results are positive: it looks like everything that JHD mentioned would be web compatible. These are results from nightly builds, whose users aren’t typical of the user base. But I think this is a good idea and definitely support it for Stage 1 or 2.

SYG: I have a question for DLM. I thought SpiderMonkey already had a getter-setter pair on `Error.prototype`. What is the telemetry data for?

DLM: Specifically checking for the `[[ErrorData]]` internal slot, as well as making the setter require a string; those changes seem web compatible with the data I have so far.

SYG: What was the second one?

DLM: The setter, I believe, is specified to do nothing unless the argument is a string. It seems like that would be web compatible as well. That is the part JHD was not sure about.

JHD: The current specification on the screen does not check the type of the assigned value (the set argument). But there’s an open issue discussing that. What the current spec does with the setter is require that the receiver be an error object. Personally I would love to restrict as much as possible, so I’m glad to hear that being a no-op when the assigned value is a non-string would be web compatible, and I can update the spec text in that event.
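For illustration, here is how the described semantics would behave in an engine where `Error.prototype.stack` is an accessor pair (SpiderMonkey today, and all engines under this proposal); the behaviors noted in comments follow the proposed direction described above, not any single current engine:

```js
const err = new Error("boom");
const original = err.stack; // via the inherited getter; contents implementation-defined

// Assignment goes through the inherited setter, which installs a shadowing
// *own data property* on the instance; no internal state is mutated.
err.stack = "censored";
console.log(err.stack); // "censored"

// The borrowed getter pierces the shadowing own property and still
// reveals the original stack (per the proposed semantics).
const { get } = Object.getOwnPropertyDescriptor(Error.prototype, "stack");
console.log(get.call(err) === original); // true

// On a non-error object, the getter returns undefined for web
// compatibility rather than throwing.
console.log(get.call({})); // undefined
```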
OMT: I just wondered if the steps there allow for not having stack traces, because I don’t think the spec explicitly requires them—

JHD: Yes, that’s correct. It says “a string that represents the stack trace of E”; that is hopefully threading the needle of saying: if you have stacks, put them here, and if you don’t have stacks, or have security reasons why you don’t want to provide one, that’s fine. “Implementation-defined” means an empty string qualifies.

OMT: I support that for Stage 2.

JHD: Eventually it would be great to be in a world where stack traces are fully defined in the spec. But that is lots of work and many proposals away, I suspect.

OMT: I agree on fully defining them eventually.

JHD: I see the queue is empty. I guess first, can I have consensus for Stage 1? The problem statement being: specifying the currently non-standard stack accessor and mutator on `Error.prototype`.

CDA: There’s a +1 for Stage 1 and 2 from OMT, and SYG has a question.

SYG: I have a question about this `[[ErrorData]]` internal slot check. Given that V8’s nonstandard implementation is able to manifest stacks on non-error instances—on non-native-error instances—if we standardize this, it will still be the case that V8 has those stack accessors, or own properties, or whatever they are (I think they’re own accessors right now) that don’t live on error instances.

JHD: That’s correct. Setting aside that there are folks who really want to see the own accessors go away: that accessor is a completely distinct function from this one, and the accessor captureStackTrace stamps on the object is unaffected by this, essentially.

SYG: That ties into MAG’s proposal. I guess there needs to be enough leeway built into captureStackTrace that it isn’t prohibited from stamping a stack onto non-error instances.

JHD: I’m trying to think about that. `Error.captureStackTrace()`, I assume for web compatibility reasons and V8 compatibility reasons, must be able to put a stack string onto any arbitrary object, and that must not be prohibited; anything else sort of defeats the point of that proposal. Does that align?

SYG: I’m just confirming that it basically needs to have that allowance for web compat. But if the thing you’re putting the stack onto is an error instance with `[[ErrorData]]`, then this getter kicks in and has these semantics. I want to confirm that that is the intention.

JHD: I think that would be an open question for `Error.captureStackTrace()`: if you do `Error.captureStackTrace()` on an error object and alter the stack trace, should that alter the internal value on the error object, such that the accessor reads it, or should it be completely unrelated? The one slide in the `Error.captureStackTrace()` proposal that I commented on should say it’s completely unrelated—that you cannot use captureStackTrace to censor the actual stack, as long as you have this getter available. An alternative implementation of `Error.captureStackTrace()` could insert into the slot on the error, such that the getter reveals the censored stack; but that’s a cross-cutting concern, not a specific item in this proposal. It’s a choice within captureStackTrace, which will be determined without any change required in this proposal.

SYG: Right, okay. Thanks.

MAH: I have a clarification question for SYG. You mentioned that there may be own accessor properties left by some implementations. What cases do you have in mind?

SYG: Nothing concrete.
I think if this gets standardized, likely one way to go here is that the own accessors disappear, because we standardize that on error instances there is a prototype accessor—but that’s only for errors. For the non-error cases, we would have a choice to manifest those stacks as own accessors or as own data properties that are magical somehow. And since I know that you, Agoric, really want to get away from own accessors, that would be a time to try to present those as own data properties instead. There’s no concrete use case we have for them to be own data versus own accessors. But that would be a natural place to try to get away from them.

MAH: For captureStackTrace, we were in favor of just defining a data property, as we mentioned. There may be ways of making it an accessor if you’re really interested in lazy evaluation of the stack (computing it when it gets accessed, not when it gets defined), but that’s a topic for discussion on the captureStackTrace proposal.

SYG: We would prefer it not be lazy. To give more context on the behavior change for the stack accessor that caused the bug for Agoric: the reason it was changed is that V8 had a bug. Prior to the change, instead of an actual getter-setter pair, it looked like an own data property. But it was lazy under the hood, and because we have the hateful thing of calling prepareStackTrace, which is user code, you had the case of an own data property with arbitrary side effects, because reading it could end up calling a possibly user-set prepareStackTrace. So, to recover the invariant that data properties ought not to cause arbitrary side effects, we made the smallest-delta change, which was to make it into an own accessor pair. That’s how we got to an own accessor pair. I would not be in favor of having a magical data property that is lazy. That is something we need to discuss going forward: what is the compatible way to do this? And if we don’t want to go back to that world, and you really want a data property, which precludes some kinds of laziness—does that matter for the non-error instances? Probably not. But we should talk it through.

MAH: That was going to be my question: does laziness even matter in that case? That is another topic for—

CDA: WH is asking what the Agoric bug is.

MM: I can clarify that. The Agoric bug is this: because syntax can cause the virtual machine to throw an error, and because the accessors were own accessors, it was not possible to virtualize the environment—to prepare the environment to virtualize stack access by replacing, for example, what would otherwise have been inherited accessors on the prototype, which we can replace in the prelude. Now, what is worse—that by itself was not fatal. What was fatal is that all of the own accessors share the same getter-setter pair, which obviously means that on error objects, since they’re the same getter-setter, they have to reach for the internal data anyway, so they would have had the same behavior had they been on the prototype. But because they were the same getter-setter, that getter-setter pair was undeniable, because it could be reached by syntax. You then had a global communication channel through objects that had the internal `[[ErrorData]]` slot, where one compartment could use the getter and another compartment could use the setter. If they had common access to an object that otherwise should not have enabled them to communicate, they could communicate.

MM: It’s worse than just that they could communicate.
If the setter had restricted values to strings, as one might have expected, then it would only be an information leak. But the getter-setter pair did not restrict it to strings at all: you could pass arbitrary values through the undeniable getter-setter pair. The whole thing is a mess. What we’re doing to be relatively safe in the face of it is unpleasant and does not restore our safety guarantees. We have the burden of explaining the lowered safety, the possibility of capability leaks between compartments. It’s just a mess. Does that answer your question?

WH: Kind of. How is this a global communication channel?

MAH: It effectively acts as a single global WeakMap instance: anyone with the getter and setter can access the information for any object.

MM: So remember that the presumption is that objects that are obviously stateless and frozen—if the object obviously has no hidden state—then sharing that object should not enable communication.

WH: Okay. So this thing lets you attach an arbitrary field to any object, whether it’s frozen or not?

MAH: Yeah. It’s the same kind of issue as private field stamping via the return override. You get to add information to an object that otherwise looks like it doesn’t carry any.

WH: Okay, thank you.

MM: Thanks for prompting us to be explicit about that, because it took us a good long time to understand. It is kind of subtle.

MM: SYG, I wanted to examine some cases, to understand what kind of compatibility burden it might be for V8 to switch to the pair of behaviors we’re proposing to standardize here, with this presentation and the previous one. First of all, for everyone interested in captureStackTrace, including JSC and the proposal we just saw: I wanted to understand what the use case is that motivated captureStackTrace, and whether we believe that the vast majority of actual usage stays within that motivating use case. The particular motivating use case I have in mind is objects that actually do inherit from `Error.prototype` but are not primitive errors—they’re just plain objects. This basically dates from before ES6 classes, when, if you wanted to create what is effectively a new category of error, a new error type, you would emulate that by having a plain object inherit from `Error.prototype` and stamp the stack on it with captureStackTrace. Everyone interested in `Error.captureStackTrace()`: is that everyone’s understanding of what the motivating use case is? Does anyone have data that actual usage deviates from that pattern? I’ll take the lack of response to mean that nobody knows, unless somebody wants to say they know something. Okay, thank you.

MM: So the other thing is that, for Error objects—because the own accessors do have the same getter-setter, and therefore must behave by accessing what is effectively an internal property of the Error object specifically, leaving aside non-errors—I would think that moving the accessors up to `Error.prototype` should not affect the getting behavior. The setting behavior is more subtle, of course, because now, rather than modifying the internal property, it would be shadowing it on the instance with a data property. But it’s hard to imagine that much actual code, even V8-specific code, would break from that. I’m wondering, specifically SYG or V8 people, if you have any intuition about that, or even better, any data?

SYG: Sorry, I think I lost the question. The question is: is there a compat worry with the setter behavior described here?

MM: Yes.
The question is both, but I’m separating it into two questions. The first question is: just moving the getter up to the prototype—since it’s the same getter on all of the accessors anyway—so that it is inherited rather than own, for error objects specifically and for getting specifically, do you expect there to be any compat problems?

SYG: We both expect and hope there to be no compat problems for the getter.

MM: Right, okay. So now for the setter, moving it up to the prototype: the natural behavior for the setter, which is what JHD is proposing, is that it create an own data property on the instance. That is clearly different from what V8 currently does for error objects. Do you have an expectation about what kind of incompatibility that would cause for V8 users?

SYG: Unfortunately, not at this time; I just don’t know. I agree there is more risk there. But I just don’t know who is doing this.

MM: Okay. And then finally, for non-error objects: there might be an incompatibility caused by one aspect of JHD’s spec that I would propose to relax if it actually causes pain for V8 to adopt the proposal. The setter, if given a non-error object—even though the setter only creates an own data property, which it could have done on any object—checks for the existence of the internal slot and rejects. I would want to keep the rejection on the getter, but the setter could waive the type check and add the data property to any object. Once the data property is on the object, a normal get on the object would find the data property, so the fact that the error check remains on the getter would not affect what I would expect to be normal usage.

JHD: You’re talking about removing step 5 from the setter?

MM: That’s correct. And since `Error.captureStackTrace` also adds the data property to any arbitrary object, there is kind of a thematic fit there. I would prefer to keep step 5, but I just wanted to offer explicitly: if all of the compatibility pain for V8 comes down to the presence or absence of step 5, I would be perfectly happy to toss step 5.

SYG: Sure. I think if we find that the thing is not compatible, then we can’t do it. To the current spec text as written, V8 has no objections on the intent or on what it is trying to do. If it turns out something is not compatible, we go from there. Maybe it’s as simple as removing step 5, maybe it’s something else. Unfortunately, I’m not sure how to figure it out without trying it and seeing.

MM: That’s wonderful. I’m very happy. Thank you.

CDA: We’re almost out of time. I’d like to get to KG’s topic, if possible.

KG: Yeah. This is just pointing out that DOMExceptions only sometimes have stacks in some browsers. Because DOMExceptions are errors—they have the `[[ErrorData]]` internal slot—this would be a change such that if you do `new DOMException()` in Chrome, it would have a stack. I’m sure that’s fine; it already has a stack in Firefox, and who is manually constructing DOMExceptions anyway? But I’m pointing out that this change would take place and would need tests and such.

JHD: And we discussed this in Matrix. I’m happy to write the web platform tests during Stage 2.7, ensuring it returns a string without checking its contents at all. If a browser that is currently returning undefined wants to return an empty string, go nuts.
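To make KG’s point concrete, a small browser-only example of the divergence (today’s results vary by browser; the stated behaviors come from the discussion above):

```js
// DOMException instances have [[ErrorData]], so under this proposal reading
// `.stack` would go through the inherited accessor like any other error.
const ex = new DOMException("The request is not allowed.", "NotAllowedError");

// Today: Firefox populates a stack for this, while Chrome attaches stacks
// to DOMExceptions at creation time in C++ only on some paths, so a
// manually constructed one may have none. Under the proposal, this would
// be a string (possibly empty), with implementation-defined contents.
console.log(typeof ex.stack); // "string" or "undefined", depending on the browser today
```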
SYG: The way stacks are attached to DOMExceptions in Chrome, it’s kind of done at creation time of the DOMException, in C++, and depends on how it’s created. I have even less of a handle on how DOMExceptions would interact with this. Like, do all DOMExceptions then get a stack trace, or does it sometimes return an empty string to begin with, and then—yeah, I don’t know. We should work that out.

JHD: And that sounds like something that, clearly, as part of 2.7, I need to make an HTML PR for, and that’s where I think we would work this stuff out; yes?

SYG: I would, yeah, for 2.7, because I think that is a big part of this—because DOMExceptions are real errors now. Yeah, it is important to figure out what the other browsers do in this case as well, and then, on the HTML side, get reviews and get agreement on what we do for DOMExceptions.

JHD: Okay. So I’ll note that as a requirement before advancing to 2.7.

???: All right.

JHD: And I repeat my request for Stage 1. Seeing no objections, I will now request Stage 2.

CDA: You had earlier support for Stage 1 and 2, and a +1 from Michael for Stage 2, as well as Chip for 1 or 2. That’s a good amount of support. Are there any objections to Stage 2? Sounds like you have Stage 2. Reviewers: looking for two TC39 heroes.

JHD: I see Nicolò—does anyone else want to review the stack accessor proposal?

MM: I’m a champion and thus not a candidate.

JHD: Michael, okay. I hear MF and NRO.

### Conclusion

* Consensus for Stage 2
* MF and NRO are spec reviewers
* HTML integration PR must be directionally approved, and possibly merged, prior to Stage 2.7 (and certainly prior to Stage 3)

## Intl Locale Info API Update in Stage 3

Presenter: Shane Carr (SFC)

* [proposal](https://github.com/tc39/proposal-intl-locale-info)
* [slides](https://docs.google.com/presentation/d/14ColNEWDFlAnPGW6GSPSk6gbcdTmSy4pYuYXOwDlZX8)

SFC: I’m presenting this on behalf of my colleague FYT, who is feeling under the weather this week and asked me if I could present these slides on his behalf, so I’m happy to do that. So I’ll go ahead and walk through.

SFC: I’m first going to give you a refresher on what this proposal is, and then we’re going to seek consensus on normative PR number 83. First of all, what is the expose-locale-information proposal? It’s a proposal to add additional information to the Locale object, based purely on CLDR, to present information derived from CLDR regarding locale-specific preferences. This is not the same as the user preferences proposal that I know has had concerns raised about it previously. This is purely deriving information from CLDR based on a locale that has been offered into the API. It includes week data, which is one of the main motivators for this: it allows users and developers to do things like create a calendar widget, including appropriate information for the first day of the week and the weekend days.

SFC: So here is the history of the proposal. It’s been coming for a while. It started at Stage 1 in 2020, and advanced to Stage 2 and then Stage 3 in 2021 fairly quickly. It’s been at Stage 3 for a while. Unfortunately, it’s gotten held up a couple of times by some fundamental issues that ideally we would have found earlier in the process, but we found them when we found them. The biggest change was from getters to functions.
I think previously we had, like, a getter on the locale that was, like, `.firstDayOfWeek`, and that would actually run code. Then a decision that was made in Temporal, which we adopted in this proposal, was: no, we would actually have functions whose names start with the word “get”. That was a pretty big change we made to the proposal at Stage 3. And then we had questions about, well, how does it interact with numbers and strings and things like that. So it’s not necessarily a great example of what should be happening at Stage 3, but it is what it is.

SFC: And here is the latest change we want to make to this proposal at Stage 3. So I’ve been doing some research about week numbers. I think some people in this room have been subjects of this research. I’ve also done research amongst other researchers, trying my best to get non-techy users to tell me their expectations about week numbers. Basically, what I found pretty consistently is that I have yet to find any regular person who has a specific expectation for how week numbering should happen in any system other than the ISO week numbering system. Currently, the proposal forwards data from CLDR about an algorithm used to determine week numbers based on the first day of the week, and that would result in different week numbers in America versus Europe versus Asia versus the Middle East. The only thing I have found evidence for is that there are users—people in Europe, including in this room—who talk about week numbers, like “week 15 of the year”, and that number is derived from a formula that ISO 8601 specifies. When such a user switches their locale, say from a European locale to en-US, all of a sudden the week numbers are off by one, and that is quite confusing and sometimes misleading to users, and actually causes real bugs. That seems more compelling than, you know, the lack of any user that I’ve been able to find who has a different expectation for what the week numbering should be.

SFC: So the proposal that has been adopted by CLDR, and which we’d like to forward to the ECMA-402 proposal, is that week numbers will always be derived using the ISO 8601 algorithm, which is: you look at the first week of the year that has a Thursday in it, and that is week 1 of the year. That will be the same algorithm regardless of the first day of the week—regardless of the locale, it would always be Thursday that determines the first week of the year. As a result, that involves removing the minimalDays getter, because that was the thing used to differentiate week numbering from locale to locale. So we no longer need minimal days, and that’s what issue 86 is about. FYT listed some remaining open issues, including some things that ABL filed—Anba is the champion who implements these things in Firefox—and those are not totally resolved. This is not ready for Stage 4 yet, but this is the last major, like, actual shape change that I expect to see. There will probably be more; I say that now, and I’ll probably come back next meeting requesting yet another one. We’re getting closer and closer each time we do this, each time we reduce scope, to finishing this proposal.

SFC: Implementation status: it’s shipped, which means that as part of adopting this change, browsers that already shipped the minimalDays getter will have to stop shipping it.
I guess they could keep shipping it, but it’s no longer part of the standard, and probably they should not be shipping it anymore. And there’s also a polyfill. So the request is for consensus on PR 99, which I’m opening up now. We discussed this at TG2 twice, at both the January meeting and the February meeting, and there was pretty strong consensus amongst the members of TG2 that we wanted to move forward with this. You can see the change here: you’re deleting a bunch of lines and text; everything is deletions. So that’s the pull request. Happy to entertain anyone who is in the queue.

DLM: First of all, I support the normative change here. I just wanted to reiterate the importance of the lack of fallback behavior, which is issue 76 that you mentioned. This is blocking our implementation. We see it as pretty important for interoperability between implementations, and we’d really like to see this issue resolved, as well as the other issues, before this comes back for stage advancement.

SFC: Definitely noted.

CDA: There is nothing else on the queue.

SFC: Is there no one here who wants to nitpick about week numbers, or can I just ask for consensus to move forward with PR 99? I have a hand. Can I go ahead and put MM on the queue?

MM: Would the relevant part of the CLDR tables be called “WeekMap”?

SFC: I love it. I love it. I love it. I think it’s called week data, but I love WeekMap. That’s a much better name. Thank you very much for that.

CDA: Still nothing on the queue.

SFC: Okay. I’ll ask for consensus one more time for PR 99. I’m going to say that we have consensus for PR 99.

CDA: Okay. Do we have any objections? Sounds like you’re good.

SFC: Thank you. I think that’s all I had for today, so we get some time back on the timebox.

### Conclusion

* Reached consensus on PR 99
* Need to resolve remaining open issues, such as issue 76, before the proposal can advance

## Stabilize integrity traits status update

Presenter: Mark Miller (MM)

* [proposal](https://github.com/tc39/proposal-stabilize)
* [slides](https://github.com/tc39/proposal-stabilize/blob/main/stabilize-talks/stabilize-stage1-status-update.pdf)

MM: So this is a status update for stabilize, and I added the subtitle “hopes and dreams” because the nature of the status update is where we hope we can take this proposal and the set of issues it is about—but we don’t know yet if it’s possible. So I just wanted to explain where we’d like to go and hopefully get feedback from people here, both about whether it is possible and, if it is, whether this direction is attractive and how people feel about it.

MM: Integrity traits are not something that everyone knows well, so a little bit of recap. Right now we’ve got three integrity traits in the language, usually referred to as integrity levels because they form a linear hierarchy: frozen, sealed, and non-extensible. On the left we have the verbs, in the middle we have the states, and on the right we have the predicates. The thing that I’m taking as the defining characteristic of an integrity trait is that it’s a monotonic one-way switch: once frozen, always frozen. It gives stronger object invariants: when an object is frozen, you have more guarantees, which enable you to do higher-integrity programming with less effort. And it implicitly punches through proxies—or rather, integrity trait status is transparent through proxies.
If the target has integrity trait X, then the proxy does, and vice versa. If the target is frozen, the proxy is frozen. If you try to freeze a proxy and the proxy allows the operation, then both the proxy and the target become frozen.

MM: In addition, there’s the crucial distinction between explicit versus emergent. There have to be two proxy traps per explicit integrity trait, so there’s a `preventExtensions` and an `isExtensible` proxy trap. There is no freeze or seal proxy trap, because those are simply a pattern of other guarantees that either hold or do not. They’re implied.

MM: So, without going through all the detail I went through last time: I presented this taxonomy of all of the separate, atomic, unbundled integrity traits, each of which addresses some particular problem that can be addressed by integrity traits and that we believe is motivated. The important thing about this taxonomy, compared to the bundled alternative, is that it allows us to go through them and see what we’re talking about—what useful guarantees each of these provides.

MM: So, fixed mitigates the return-override mistake: the use of that mechanism in classes to stamp objects with private properties. If an object has been made fixed—if it has the fixed integrity trait—then the idea is that using a subclass constructor to stamp a private property onto it would instead be rejected with an error. This is, in particular, motivated at Agoric for virtualization purposes, and there’s a lot I can say about that if people are interested. It also came up in the shared structs working group, because they want a fixed-shape implementation of structs, and that conflicts with the way V8 implements the stamping of private fields; fixed would address that. They would make all structs fixed, so they could all benefit from a fixed-shape implementation.

MM: After the last presentation, we got this issue filed by Shu: V8 prefers normatively changing non-extensible to imply fixed, and all the champions are overjoyed with this idea. This would separate it out from this proposal’s new integrity traits and simply bundle it into non-extensible. V8, to my understanding, is already doing measurements to find out if it’s feasible, and is already getting some small number of negative results. We’re hoping the judgment is that we can still do it. Shu, let me just break process and ask you: do you have any updates from the V8 measurements, about whether you still hold out hope that we can do this?

SYG: I pasted the link to the use counter in Matrix. I can’t believe it is not zero; it is at `e-7` or something. So it is still more than I would like, but we don’t really have a hard rule for, like, how small something has to be. I think this is few enough accesses that it would still be worth trying, pending, you know, further data. Unfortunately, if you look at the slope of the graph, it has not flattened out yet. But this might be an artifact of just how the visualization works and how we’re getting data, because it’s been a little bit less than a month since this hit stable, so we’re still getting more data as it hits a bigger population.

MM: Okay, thank you. That’s very clarifying, and it means that there’s still hope, which is the most I was hoping for at this stage.

MM: Okay.
Next is overridable, to mitigate the return override mistake—I’m sorry, to mitigate the assignment override mistake—which is well illustrated by this sample code. If some prior piece of code freezes `Object.prototype`, then there are many, many old libraries, especially those written before classes, that do things like use a function to create what is effectively a class, say `Point`, and then assign to the `toString` property on its prototype in order to override the toString method. If the prototype has been naively frozen, then this assignment will throw. This has turned out to be the biggest deterrent to high-integrity programming in JavaScript—the biggest deterrent to freezing all of the intrinsics. If there were an overridable integrity trait, then by making the prototype objects in particular, but all of the primordial intrinsics and others, overridable, the deterrent would go away and these assignments would work. There’s been controversy about whether to call this a mistake, and I just want to point out that in all the years we’ve been going around on this, we have found code that breaks if this is fixed globally for the language—which I’ll get into in a moment—but that breaks for an accidental reason. We have never encountered code that makes use of this aspect of the language on purpose. Let that sink in.

MM: After the last presentation, we got this issue filed by Justin Ridgewell, clarifying the history of the prior attempt. There was only one breakage observed, and it was very narrow: it was in an old version of the lodash library, and even though it’s already fixed in modern lodash, we can never erase old versions from the web. It had to do with `toString` and `Symbol.toStringTag`, specifically on TypedArrays, even though it could in theory apply to other toString behaviors among the intrinsics that depend on toStringTag. And, JRL, if you’re within earshot, please correct anything I’m getting wrong here. What JRL proposes is that it’s still feasible to fix this globally for the language, by having the global fix make a carveout specifically for the toString and toStringTag properties that cause the old version of lodash to go astray. The carveout that JRL proposes, and the similar but somewhat different carveout that RGN has proposed, both have all the safety properties we need. It’s just a little bit of ugliness, but it would let us fix this globally in the language. We would love that. We would be overjoyed if that could happen, rather than addressing it through an integrity trait.

MM: Then there’s non-trapping, which addresses another re-entrancy hazard problem: re-entrancy through proxies. You can have a proxy that looks like a plain data object and survives all the tests you might think to apply to it as to whether it is a plain data object—it might still be a proxy that does things synchronously during the handler traps. You would like to be able to write code like this. And I want to recall records and tuples, on which there is a presentation coming up later this meeting. With records and tuples, you could test whether something was a record or a tuple.
If something was a record or a tuple, you knew it had no behavior; it was a plain data object; it could not be a proxy. So it would be very nice to have been able to test that in an early validation check, such that once input validation has passed, you can use the objects validated to be plain data inside your function while invariants are suspended, knowing that you’re not going to be turning over control synchronously to any foreign code. We don’t have records and tuples, so we’d like to be able to create a predicate that we call “record-like”. But because you can’t write a predicate today that will verify something is not a proxy, you can’t actually write a predicate that protects you from re-entrancy.

MM: The idea is that by applying the new integrity trait—stabilize, or non-trapping—and then having the record-like predicate check that something is stable, you verify that even if it is a proxy, that proxy will never trap to its handler. Thereby, even if it’s a proxy, you are safe against re-entrancy hazards. And we’ve made it safe without making it observable whether it’s a proxy or not, so this approach to the re-entrancy hazard of handler traps does not violate proxy transparency. It just makes the existence of proxies that claim to be stable harmless, because we now have a guarantee that they cannot re-enter.

MM: At Agoric, we actually have a shim—I believe a fairly complete shim—for the non-trapping integrity trait itself. It was kind of a surprise, once we thought about it, that it was possible to shim this faithfully and safely within the language, but it has one of these big have-to-run-first burdens: it works by replacing the global Proxy constructor, and it only has the safety properties it claims if it has replaced it globally and the adversary cannot recover a normal Proxy constructor by other means, such as by creating another realm. So it’s quite burdensome to maintain the safety. But it is possible to shim it, and not only have we shimmed it, we now have a bunch of code that makes use of the shim as it was intended—for the safety that we intend—and it’s been an interesting learning experience to see how to use it for the safety we intend, and how much disruption there is to other code that’s concerned with these properties. For code that is not concerned with this safety property, there should be no burden at all, because otherwise there is no compatibility break.

MM: Okay. Finally, there’s the unbundling of non-extensible into “permanent inheritance”—which both the browser WindowProxy and `Object.prototype` have, through magic, without being non-extensible: they refuse to have their prototype changed. With that taken care of by one side of the unbundling, the remaining side of non-extensible would be “no new properties”. So you can imagine two separate explicit integrity traits, such that prevent-extensions, or non-extensible, becomes emergent from those new explicit ones.
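A runnable sketch of the gap MM describes above: the best record-like predicate writable today cannot exclude proxies, so it cannot actually protect against re-entrancy. The proposed stable/non-trapping trait would close the gap by letting the predicate additionally require stability (via some hypothetical predicate along the lines of `Object.isStable`, a name used here purely for illustration):

```js
// What you can check today: frozen, with only string-keyed data properties.
function looksRecordLike(obj) {
  if (obj === null || typeof obj !== "object") return false;
  if (!Object.isFrozen(obj)) return false;
  for (const key of Reflect.ownKeys(obj)) {
    if (typeof key !== "string") return false;
    const desc = Object.getOwnPropertyDescriptor(obj, key);
    if (!("value" in desc)) return false; // reject accessor properties
  }
  return true;
}

// The gap: a proxy can pass every one of those checks and still run
// arbitrary handler code on each later property access.
const sneaky = new Proxy(Object.freeze({ amount: 1 }), {
  get(target, key, receiver) {
    console.log("re-entered!"); // foreign code runs while invariants are suspended
    return Reflect.get(target, key, receiver);
  },
});

console.log(looksRecordLike(sneaky)); // true
sneaky.amount;                        // logs "re-entered!" anyway
```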
MM: Okay. So, having recapped all of that, here is what we’re hoping for. First, all of the champions of stabilize, and SYG—and SYG, I know you’re on the call, so please correct me if I’m mischaracterizing anything—are all of the opinion that even though there is a nice orthogonality, from a purist point of view, in unbundling non-extensible, and it allows us to retroactively rationalize the behavior of WindowProxy and `Object.prototype`, it’s just not worth it practically. So we hope not to unbundle non-extensible: leave those two properties bundled into non-extensible, and non-extensible goes back to being explicit. The result is that we cannot faithfully emulate the browser global object or `Object.prototype`, and we’re willing to sacrifice that faithful emulation, even though it’s a compromise of virtualization, because practically nobody will care.

MM: So first of all, Shu, did I characterize what we agree on there correctly?

SYG: Yes. We also prefer that non-extensibility remain bundled.

MM: Right. Thank you. The next one is the one we’ve already mentioned, that SYG expressed in that filed issue, and that, with the current usage counters, we still hope to do: also bundle fixed into non-extensible, so there is no separate fixed integrity trait. Then overridable: we’re hoping that the strategy with either JRL’s carveout or RGN’s alternate carveout will enable us to go back and fix the override mistake globally for the language, without breaking the one case we know about in lodash, so that trait goes away too. Now all we’re left with is non-trapping, so non-trapping would end up just getting bundled into stable, and stable becomes explicit. So this is the picture we’re hoping for, having taken care of the others either by choosing not to unbundle, or by dealing with them by other means.

MM: And I want to acknowledge a political reality that is just there, which is somewhat unpleasant for us to realize as the champions of the proposal: if our hoped-for resolution happens, then stable, addressing re-entrancy, is addressing something much narrower than the original overall stabilize proposal, and therefore there’s less wind in its sails. We understand that, and we understand it’s more of an uphill climb to advance and get consensus through the stages, but it’s the right thing to do, so we’re taking that hit and hoping the others can move forward by themselves.

MM: I want to take a moment for a little bit of historical context. In 2010, 15 years ago, BE did this presentation, “Proxies are Awesome”, with input from myself and Paul, in which he presented our plans at the time for a non-trapping behavior of proxies. Just flipping through the slides to the final transition: in this final transition, going from trapping to fixed, notice that in the fixed state the handler, in the blue circle above, gets dropped—because the proxy will no longer trap to the handler, there’s not even any reason to continue to hold on to it. So the current non-trapping is very much in line with our original intention here, although certainly many, many of the details have changed over the 15 years since we first talked about non-trapping. One reason I brought that up is that if we get our hoped-for picture, then we could also go back to the original name and call the non-trapping integrity trait “fixed”, because it’s nice and short, and something that’s fixed is not broken.

MM: And at this point, I will take questions.
And, I’m sorry, at this point I will stop recording, and RPR, please stop recording as well.

CDA: Okay, Justin.

JRL: Can you go back to the slide that has my comments about the override mistake?

MM: Yes.

JRL: Yes, this one. So, to clarify something you said during your presentation: you mentioned both toStringTag and toString.

MM: Yeah.

JRL: So the carveout I’m trying to highlight here doesn’t require any changes to toStringTag. It only requires a change to toString, and it requires the change to create new data properties when overriding. The change to toString here is specifically to support an old version of lodash, which checks these specific classes, and, if the toString method returns the appropriate result, will not use a bad implementation of toString that is directly written into that old code base. If we patch `toString`, that means it will continue to use a good version of `toStringTag`, and that will hopefully fix everything. So the two changes we need here are: one, if you trip the override mistake, it creates a brand-new data property, with configurable true, on your own object; and two, when lodash specifically tries to do this, it will check whether it should use the good `toStringTag` or the bad `toStringTag`, and it does this by checking `Object.prototype.toString`. If we can trick lodash into doing the good thing, hopefully everything is fixed.

MM: Great, thank you very, very much for that clarification. I’m very glad you were present for all this and that we were able to clarify. Are the classes in question that lodash actually trips over—the classes mentioned on this slide—only the TypedArrays?

JRL: Sorry—what about TypedArrays?

MM: My impression was that lodash was actually only tripping on this issue with regard to TypedArrays, even though it applies in theory, as an observable behavioral change, to any intrinsic class for which the toString behavior is sensitive to `toStringTag`.

JRL: Yes, this is the other fun part. There are lots of methods that use the broken toStringTag implementation, but there are only two that are broken: `isTypedArray` and `isArrayBuffer`. The thing here is that if we trick lodash into using the correct toStringTag implementation, then that fixes `isTypedArray` and `isArrayBuffer`. It’s not actually anything to do with the data values returned by toString when it’s called against a TypedArray or an ArrayBuffer as such. But for the classes that I highlight here, lodash checks each one of these to make sure that it returns the correct result—e.g., DataView or ArrayBuffer—when called against a DataView or ArrayBuffer class instance.

MM: Great. All right, let me just get your opinion: are you in favor of us doing this exactly as you lay it out, globally for the language?

JRL: Yes. I think the second carveout here is totally appropriate as a backwards-compat thing we can just do, and the first change here is the fix for the override mistake itself; I think those are both appropriate.

CDA: All right. I just want to note we technically have less than one minute on this topic, and there’s a big queue. First up, DE.

DE: Do we want to extend the timebox by 20 minutes?
MM: I’d appreciate extending the timebox. On the other hand, I’m not asking for stage advancement, and I don’t want to crowd out things that are asking for stage advancement. We could continue this later if that’s more appropriate.

DE: How is scheduling going?

CDA: So the issue is—it’s not really an issue—we have time on paper. The issue is that we are now full through the end of today. And we were full—actually, no, we have 45 minutes available tomorrow before lunch. So, yes, never mind. 20 minutes?

MM: That would be great.

CDA: Okay, let’s just go to the top of the hour, and then we’ll do the mid-afternoon break. Does that sound good?

MM: Sounds good to me.

CDA: All right, DE.

DE: Okay. How do we want to, you know, check whether this is web compatible and roll it out in browsers? I think we’ve had enough kind-of-failed experiments where we ask browsers to just ship something that I don’t know if we have that kind of appetite for this one. But I would be interested in hearing from browsers. I’m not really sure how to phrase this as a use counter or something.

MM: Yeah, that’s a great question, because the browser pays the cost—and I want to acknowledge that the costs of doing one of these use counter exercises are substantial. If no browser is willing to pay that cost and try the experiment, the rest of us are helpless to advance this, because we simply can’t advance it without that data.

DE: Do browsers have any thoughts here?

SYG: I’m staring at the thing. I’m not sure how to write a use counter to test whether the change would be compatible or not. What would you test?

NRO: A counter in `Object.prototype.toString` that checks for the types of classes listed here, when a custom `Symbol.toStringTag` is installed.

JRL: Sorry, I’m going to butt in here, because I remember the details from the lodash thing. If we—oh my God, now I don’t even remember. I think Mozilla implemented an error tracker that tried to see if there was a change.

DE: Oh, Mozilla did some work in this area?

JRL: Yeah, for the original issue, but this was six years ago. Someone implemented a use counter tracker, and we found the broken pages because of this—the override threw an error in the page that was broken—and I don’t know how Mozilla did that.

SYG: If we can count it with telemetry—if it’s not a use counter, but something in `Object.prototype.toString`—that’s probably fine.

MM: SYG, I want to express my deep appreciation for your willingness to do that. Thank you.

KG: I just wanted to express support for fixing the override mistake if we possibly can. That would make a lot of things much better. And I wanted to hear from browsers whether they had any interest in this, because it sounds like they’re at least willing to explore some of these changes. I don’t know—the toString change would be a separate change from, like, outright fixing the override mistake.

SYG: Wait, so my understanding is: number 2 there, the only reason that exists is to work around the biggest known user of—the biggest known dependency on—the override mistake, so that we can do number 1, which is fixing the override mistake, and which should be compatible because it changes a throwing behavior to a non-throwing behavior.

KG: That’s right.

SYG: Is that correct? Okay.

KG: Yeah. So if we have some appetite for that, I would be very excited. There are lots of things that would be better if we can do that.
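For reference, a minimal runnable illustration of the assignment override mistake under discussion, along with the behavior that “fixing it globally” would give plain assignment (today that behavior is only available via `Object.defineProperty`):

```js
"use strict";

// Some first-run hardening code freezes the primordials:
Object.freeze(Object.prototype);

// A pre-classes-style library defines a "class" and tries to override toString:
function Point(x, y) { this.x = x; this.y = y; }

try {
  // Throws today in strict mode (and silently no-ops in sloppy mode):
  // `toString` is inherited as non-writable from the frozen Object.prototype,
  // even though Point.prototype itself is fully mutable. This is the
  // assignment override mistake.
  Point.prototype.toString = function () { return `<${this.x},${this.y}>`; };
} catch (e) {
  console.log(e instanceof TypeError); // true
}

// The workaround today, and the behavior the discussed fix would give plain
// assignment: create an own, configurable data property instead.
Object.defineProperty(Point.prototype, "toString", {
  value() { return `<${this.x},${this.y}>`; },
  writable: true,
  enumerable: false,
  configurable: true,
});
console.log(String(new Point(1, 2))); // "<1,2>"
```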
NRO: Yes, I’m really pleased to see how the proposal is becoming smaller and smaller, so excellent work, Mark and—

MM: I’m sorry, I couldn’t quite hear.

NRO: I’m really happy with how the proposal is becoming simpler and simpler. The first time you presented it, it was difficult to keep track of all the parts, so this is a very welcome change.

MM: Thank you, I appreciate it.

KG: This is with regards to your point about the sort of political reality of it being less motivated because we’re doing less stuff. I think the non-trapping check is sufficiently motivated on its own. Lots of code needs to care about whether there is any possibility of an operation triggering user code; that’s one of the main things you need to care about, and being able to actually ensure that is, I think, a sufficient goal on its own for the proposal.

MM: Thank you.

WH: “Fixed” prevents you from attaching extra properties to objects, but if an object is constructible, then you can subclass it and do it that way. Have you looked at anything which makes objects non-subclassable?

MM: The existing precedent for this is that the browser WindowProxy is effectively fixed as a special case, and it does not do it by preventing subclassing. It does it by rejecting the addition of private fields. So our inclination, even though we can’t retroactively rationalize the browser WindowProxy because of the unbundling of preventExtensions, is still to follow the precedent of WindowProxy, just for uniformity and the fact that it’s adequate for what we need. The first thing I thought of was actually along those lines: to prevent the return override from returning a fixed object. But I think the precedent is actually the better place to put the error check anyway.

WH: I’m not talking about the return override. I’m talking about just defining regular subclasses.

MM: Oh. No, that’s not something I’ve ever thought about. Please talk that through.

WH: I’m just curious if you’ve explored ways of creating objects with a fixed shape, which cannot be subclassed.

MM: No, I have not.

SYG: If I can interject here, Waldemar: the way the structs proposal deals with that is something that we’re calling “one-shot initialization”, where you can declare a struct to be a subclass of another struct, and when the instance is created, it immediately gets all the declared fields before it ever escapes back to user code. There is no way to observe the intermediate state. So it’s unclear whether that is an integrity trait or level that can be on an object; it feels more like a property of the class or struct declaration than of instances.

WH: Yeah, clearly subclassing is less harmful than the return override because, when you construct it, you know that you’re constructing the derived class. But I’m just wondering if there was any exploration of making constructible objects final so that they cannot be subclassed at all.

MM: Yeah, I’ve never thought about that.

WH: Okay.

DE: I’m just trying to understand what this feature is concretely. You’re saying this is stronger than frozen, and in particular, a frozen object might be a proxy that, although everything is there in its target, still has some side-effecting code when traps are hit.

MM: That’s right.
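A minimal sketch of the hazard DE is describing, using only proxy behavior that ships today:

```js
// A proxy over a frozen target reports as frozen, yet every property
// read still runs handler code, so "frozen" alone does not rule out
// reentrancy into user code.
const target = Object.freeze({ x: 1 });
const proxy = new Proxy(target, {
  get(t, key, receiver) {
    console.log('side effect on read of', String(key)); // arbitrary user code runs here
    return Reflect.get(t, key, receiver);
  },
});

console.log(Object.isFrozen(proxy)); // true
proxy.x; // logs, then returns 1
```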
DE: And is this fixed thing just like a plain old data object? Do you have an API of taking an object and getting out a fixed version of it, or modifying it in place such that it is in this fixed mode? What is the actual API?

MM: So fixed would imply frozen; it has to be at least frozen for it to be fixed. The addition of it being fixed is that essentially a proxy on a fixed object is itself fixed, and the behavior of the proxy is identical to its target in all ways except that it has a distinct object identity.

DE: Okay, so when you have a frozen plain old object, it already is fixed, or it isn’t?

MM: No, it’s too late to change the behavior of frozen.

DE: How do you create a fixed object?

MM: It would be by adding a new verb, just like the existing verbs `Object.freeze`, `Object.seal`, and `Object.preventExtensions`: there would be an `Object.fix`, or whatever the word is (stabilize?), that would be a new verb. Because it’s a new explicit integrity trait, there would also be a `Reflect.fix`, just like there is currently a `Reflect.preventExtensions`, and the result would be that it would cause the object to be frozen. In other words, because it implies freezing, it would first try to do all the freezing; if that succeeds, then it would additionally tag the object as being fixed. And then the proxy implementation, when it sees that its target is fixed, bypasses the handler and simply does the default behavior, as if that handler trap had been omitted, directly applied to the target.

DE: Okay, so `Object.fix` on a proxy will then just forward to the target and there will be no proxy trap?

MM: If you do the verb on a proxy that has a target that is not yet fixed, then the way to understand it is by analogy to what happens if you do a `Reflect.preventExtensions` on a proxy whose target is not yet non-extensible. It will trap to the handler’s `preventExtensions` trap, and that trap can throw, refusing to make the object non-extensible.

MM: Likewise, if you do a fix operation on a proxy whose target is not yet fixed, then it traps to the fix trap on the handler, which would be a new trap, and there’s a subtlety there; I forgot to mention the subtlety. The subtlety is that if you omit the `preventExtensions` trap from a handler, the default behavior is to do the preventExtensions, not to refuse to do it. Because we’re introducing this into a language that has a large installed base prior to this feature, the way to do this (which was anticipated in discussion and then turned out to be a big deal in actual use) is that if you omit the fix trap from the handler, the default behavior is to refuse rather than to proceed.

DE: Sorry, you’re talking about the fix trap or the preventExtensions trap?

MM: I’m talking about the fix trap. The fix trap would have the opposite default sense from preventExtensions, but by providing the trap explicitly, you can get it to either accept or refuse explicitly.

DE: Do you have a use case in mind where a proxy would want to refuse to be fixed?

MM: Yes, yes. The big one is the legacy case, which, like I said, is something we immediately encountered when we started to use this in practice, even on our own code, in places we didn’t anticipate. There turned out to be a lot of use of proxies where the proxy was simply implementing trap behavior for purposes of doing essentially a little behavioral DSL, and it didn’t actually care what the target was at all. For those proxies, the target was just an empty plain object; it could be frozen or not, doesn’t matter. So somebody might freeze the proxy or freeze the target, because the handler doesn’t care; the handler only has specialized traps, usually for property lookup or method invocation, like I said, for a little behavioral DSL. But if you expose that proxy and then somebody fixes it, and the default behavior for old code is that the proxy gets fixed when you try to fix it, then the handler behavior that implements the DSL is turned off. So you could not share, among mutually suspicious parties, a proxy that implements that DSL behavior without doing something weird to protect it, basically without adding an explicit trap handler to refuse to be fixed. And old code doesn’t know to refuse.
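A sketch of the kind of legacy "behavioral DSL" proxy MM describes: the target is an empty plain object and only the handler matters, so silently fixing the proxy would switch the DSL off. The `fix` trap named in the comments is the hypothetical new trap from this discussion, not an existing feature.

```js
// The handler implements all the interesting behavior; the target is
// an inert placeholder.
const dsl = new Proxy(Object.freeze({}), {
  get(_target, prop) {
    // e.g. interpret any property access as a query
    return `you asked for ${String(prop)}`;
  },
});

dsl.anything; // 'you asked for anything'

// Under the proposal, attempting to fix this proxy would invoke a new
// `fix` trap. Old handlers like this one omit it, so the default would
// be to refuse, keeping the DSL behavior intact.
```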
DE: So I’m wondering, what if instead of a trap, it were just a Boolean? Like, if you put `fixable: true` in the options bag, then it adopts the fixable behavior, where it fixes it recursively; if you don’t, it will refuse. Do we need more behavior aside from refusing and not refusing?

MM: With regard to what we actually need, I don’t know. We certainly have not written traps to do anything other than proceed or refuse, so I don’t know what we need. But given the asymmetry and non-uniformity relative to the preventExtensions trap when you do provide an explicit trap, I’d rather follow as much of the precedent as we can, while still acknowledging that having the opposite defaulting behavior to deal with legacy is, I think, the minimal non-uniformity that takes care of the issue.

DE: Okay, so, yeah, overall, this proposal makes sense. It seems reasonable. I like the idea of separating out the override mistake fix part, and separating out the frozen-objects-always-refusing-private-fields part; if we find a way to do those, that’s great. And then this can serve the very small remaining case where it’s really a plain-old-data, fixed, frozen object that is permitted, which is—yeah.

SYG: I want to recap my understanding of next steps here. My understanding is that the linchpin, partially, on this slide is the web compat question of whether we can fix the override mistake globally in the language. But I want to recount the three open compat questions, and please correct me if there are more. Number one: can we change non-extensible to include no private field stamping? This is in progress; we have a use counter pointing to yes. Number two: can we change `toString` to work around that old lodash version? This is unclear. I hope somebody takes an action item to try to craft a use counter and communicate that to me, and we can maybe check. That’s number 2 here on this slide. Number three is something we’ve been discussing in Matrix: actually changing the behavior of the override mistake. It’s changing from a throwing to a non-throwing behavior in strict mode only; in sloppy mode, it would be changing a silent no-op to a different behavior. That is much more risky, and I’m not sure how to craft a use counter for it at all. Assignment to a property that has a non-writable same-named thing up its prototype chain doesn’t tell me whether the application is broken or not.
SYG: It’s a silent no-op today. I have no idea, if you change it to respect that assignment, whether the page breaks. That seems extremely hard, if not impossible, to figure out without shipping. Even if number 2 pans out, how do we hope to change the sloppy mode behavior, and if we can’t, is that a deal breaker for the current taxonomy that MM has, which is nice and simple?

SYG: Is it a deal breaker?

DE: I have a clarification first. You were saying for number 3 that you wouldn’t be able to figure it out, but my expectation would be that you’re going to get hits on it.

SYG: That’s what I am saying. I will get hits, but not whether the hits mean the page breaks. That’s what you want.

DE: Is there a way of recording which property name is involved, and in this case, whether we trigger this case on `Symbol.toStringTag` or on some other property?

SYG: They would have to be hard-coded. And even then, I am not sure, because that’s an assignment path, which is usually hot. But the short, easy answer is basically no. Like, you can imagine use counters to be a single bit. We don’t track any other information; the URL information you see on the public site is cross-referenced with HTTP Archive to see which pages hit the use counter. It does not include any additional information about anything. You can’t attach any other information; it’s just a bit.

MM: So with regard to your question about whether it is a deal breaker: for everyone invested in HardenedJS, Agoric and all the other companies using it, one of the restrictions of HardenedJS is strict code only. One of the things we do in the shim on initialization, and that XS does by other means, is simply completely throw out sloppy mode. Doing something very bad and weird for sloppy mode is, well, extremely weird. It’s hard to imagine what the corresponding `Reflect.set` behavior would be, because `Reflect.set` doesn’t know whether it’s called by strict or sloppy code. I would find it very, very bizarre, but if that’s the price of fixing the assignment override mistake globally for the language, for strict code, I would pay the price. I can’t speak for everybody else.

SYG: Yeah. You’re not asking for stage advancement, but this is something to get consensus on, because I think it’s a pretty tall task. It’s a bigger ask for the browsers to check if it’s compatible for sloppy mode, because I don’t know a way to build assurances ahead of time. Perhaps one of the folks from the other browsers has an idea. That’s my biggest worry right now.

MM: Justin, if you are still on the call, do you have any thoughts with regard to Shu’s question?

JRL: I do not know how to craft this so we can automatically detect it.

MM: Okay.

JHD: Yeah. The answers to these questions should probably be given offline, but here are things that have occurred to me: if you stabilize a promise that is pending, can it ever resolve? If not, when it finally attempts to resolve or reject, where does the error go?

MM: Fixed does not by itself mean that it is safe from re-entrancy, or that it does not have mutable internal slots. The key thing is that in the code we protected from re-entrancy, the object was stabilized or fixed, and then you apply an `isRecordLike` predicate to it. The `isRecordLike` predicate iterates the properties and makes sure all of them are data properties, and checks that the object inherits from `Object.prototype`. And it checks, of course, that the object is fixed, which implies frozen. So if all of those are true, at that point the object seems to be a plain data object that is safe from re-entrancy; but those additional checks are needed.
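A minimal sketch of the `isRecordLike` predicate MM describes. `Object.isFixed` is a hypothetical brand check for the proposed "fixed" trait, by analogy with `Object.isFrozen`; it does not exist today.

```js
function isRecordLike(obj) {
  // Must inherit directly from Object.prototype.
  if (Object.getPrototypeOf(obj) !== Object.prototype) return false;
  // Must be fixed (hypothetical check), which implies frozen.
  if (!Object.isFixed(obj)) return false;
  // Every own property must be a plain data property.
  return Object.values(Object.getOwnPropertyDescriptors(obj))
    .every((desc) => 'value' in desc);
}
```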
JHD: Okay. And then the last thing I had on there: regular expressions, which I believe will throw if you try to do anything, because everything tries to set `lastIndex`. And possibly a lot of array operations, which try to set `length`. This may already be a problem for freezing, but it’s probably a nice thing to audit all the built-ins and confirm that the results of stabilizing or fixing them, or whatever, are as expected.

MM: Okay. I will take that under advisement. I don’t have an immediate reaction.

JHD: Thank you.

### Speaker's Summary of Key Points

* We agree that unbundling non-extensible into more primitive integrity traits is not worth the cost.
* We hope that the “fixed” integrity trait can be bundled into non-extensible. Google has usage counters going which should help us decide if we can.
* We hope that the “overridable” integrity trait is not needed and we can still fix the override mistake globally for the language, with safe narrow carve-outs for the legacy lodash case. Would need a major browser to measure to decide. Google may do so (yay!)
* These usage counters may only be able to measure strict mode behavior. If so, we agree we could make this fix only to strict mode, leaving sloppy unrepaired.
* If all this turns out as we hope, we’d only have the “non-trapping” integrity trait left, to be bundled into the root trait, currently called “stabilize”.
* With “fixed” no longer taken, we could rename “stabilize” to “fixed”, which was its original name circa 2010. Though we should not spend energy bikeshedding until we know if this is even possible.

### Conclusion

* We do not unbundle non-extensible, even though that means a loss of virtualizability (in a corner case no one will care about).
* Only a major browser can measure whether we can bundle “fixed” into “non-extensible”. Google already has usage counters going to help us decide (yay!).
* Only a major browser can measure whether we can fix the override mistake globally for the language with carve-outs for the narrow exceptions we find (only legacy lodash so far). Google may do so (yay!).
* If all this works out, we only have “non-trapping”, which becomes the root trait whose name is TBD (“stabilize”? “fixed”?).
* “non-trapping” would address a major source of reentrancy hazards via proxies, without threatening proxy transparency!

## Records and Tuples future directions

Presenter: Ashley Claymore (ACE)

* [proposal](https://github.com/tc39/proposal-record-tuple)
* [slides](https://docs.google.com/presentation/d/1uONn7T91lfZDV4frCsxpwd1QB_pU3P7F6V2j9jEPnA8/)

ACE: Hi, everyone. It’s been a while since we talked about Records and Tuples, almost a year, so I thought it would be good to chat about it again. It’s technically a Stage 2 proposal, but let’s not worry about that too much. NRO noticed that whenever a TC39 proposal makes it onto Hacker News, it’s almost inevitable that someone in the comments will be asking about the status of Records and Tuples. And also, on the actual repo itself, people keep asking "what is the status of this?". And I am not sure what that status is exactly. So I am going to present some ideas today; I am not proposing this for Stage 3 today. I am more trying to encourage discussion.
ACE: Especially from everyone, of course, but I would love to encourage new voices in this area as well, because we have been talking about this for four or five years now, and new voices are always very, very welcome.

[Slide: https://docs.google.com/presentation/d/1uONn7T91lfZDV4frCsxpwd1QB_pU3P7F6V2j9jEPnA8/edit#slide=id.g334e668a325_0_0]

ACE: So to catch people up who haven’t been following all the things over the years: for many years, the proposal was all about adding new primitives. They had `===` semantics and their own `typeof`, with special handling for negative zero in the comparisons. And then we thought we were getting ready for Stage 3; we were only changing little bits of the proposal, and these fundamental things had not changed for a long time. And it turned out there was no appetite to do those fundamentals, for various reasons. Since then we have been back to the drawing board on "what can we do here?". Because I am convinced we can do something. I think this is something that is lacking in the language, and I am sure there is something we can do; I just don’t know what it is yet.

[Slide: https://docs.google.com/presentation/d/1uONn7T91lfZDV4frCsxpwd1QB_pU3P7F6V2j9jEPnA8/edit#slide=id.g2af82517ce6_0_26]

ACE: So, a thing I like. I went off out into the British desert and meditated on: if it was up to me, like a special birthday present ("ACE, you get one free design and it goes straight to Stage 4.7"), what would I choose?

ACE: I would choose that there is syntax. I will comment in more detail; I like the syntax. Maybe it doesn’t have to be this sigil, though I really think we are running out of ASCII characters for it to be many other things. Initially it was a bit of a blow when we were told we can’t have new primitives, but I have come around to that. PHE said that, putting the implementation complexity aside, as a JavaScript user it would be confusing to have values that look like objects and arrays but are not. I have come around to that: yes, these things should be objects. I think they should be general containers that can contain anything. And I think it’s a great opportunity that they also work as a composite key.

[Slide: https://docs.google.com/presentation/d/1uONn7T91lfZDV4frCsxpwd1QB_pU3P7F6V2j9jEPnA8/edit#slide=id.g2af82517ce6_0_39]

ACE: So, syntax. There could be no syntax; say the syntax was just about freezing objects, and there is a separate proposal for syntax that is just about freezing and sealing. But I think there are advantages for lots of different actors here. Reading it, at least (I don’t know how people’s brains work, but when I am looking at this), without the noise of parentheses around the data it is much more readable. It’s fewer characters to type, but I think it’s mostly beneficial for the reader; there’s more weight on the reader than the writer. And as a tooling team, I like this from a tooling perspective: it’s much easier to analyze this code and see that this is a frozen object, it’s not going to change, as opposed to tracking whether `Object.freeze` is being used. Tools (I think Rollup) special-case calls to `Object.freeze`, but if there’s enough indirection, that breaks down: create a `deepFreeze` utility and static analysis tools can’t pierce through it to realize what is happening. Syntax gives you the guarantee that no monkey patching can happen. There’s another advantage, which only makes sense later on: I think there is potentially some kind of runtime advantage, but that only makes sense once I get to the later slides.
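For illustration, the kind of contrast ACE is drawing, using the old proposal's `#` literals purely as placeholder syntax:

```js
// Literal form: the data reads directly, and tooling can see that the
// value is born deeply frozen without tracing any calls.
const positionA = #{ x: 1, y: 2, axes: #['x', 'y'] };

// Call form: the same data buried in freeze calls. Analysis breaks
// down as soon as the freezing moves behind a helper function.
const positionB = Object.freeze({
  x: 1,
  y: 2,
  axes: Object.freeze(['x', 'y']),
});
```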
[Slide: https://docs.google.com/presentation/d/1uONn7T91lfZDV4frCsxpwd1QB_pU3P7F6V2j9jEPnA8/edit#slide=id.g336fbd25823_0_0]

ACE: An example of what I was just saying: say you are going to use this syntax for some global config exported from a module. What this does for me, as a human and from a tooling perspective, is that I know what those values are without having to see every other module in the universe and check that every module importing this value doesn’t mutate it. That’s a really nice property: I can be sure immediately that this thing is frozen. Again, I think that’s good for both of us reading this code. I would comment on the PR: you are exporting this config; it could be weird if someone somewhere else mutates it; it’s nice to freeze it. The convenience of this encourages these kinds of patterns. It gives humans convenience.

[Slide: https://docs.google.com/presentation/d/1uONn7T91lfZDV4frCsxpwd1QB_pU3P7F6V2j9jEPnA8/edit#slide=id.g337bd48536b_3_0]

ACE: But syntax isn’t crucial for this. If there’s really no appetite for syntax, then we don’t have to have it; it could be APIs.

[Slide: https://docs.google.com/presentation/d/1uONn7T91lfZDV4frCsxpwd1QB_pU3P7F6V2j9jEPnA8/edit#slide=id.g2af82517ce6_0_21]

ACE: So, as I said, I have really come around to these being objects, because again the point that PHE made was: if you want people to adopt these things, there’s already a lot of code out there that is sniffing the properties of values in the language. You might have some utility that is overloaded; maybe you can pass it a number or an array of numbers, and the way it works out the overload is using `Array.isArray`. And now, if we have these tuple things that are like arrays but don’t pass `Array.isArray`, then that utility no longer works, so you would have to update it or not use this new thing. If these things are objects, and they are arrays, and they inherit from the prototypes that others might expect, there’s a larger chance that you could just adopt them. I think there’s also a benefit for people learning the language: `typeof` tends to be something that is immediately, like, Chapter 1 when you open the book; these are the core parts of the language. I think immutable data structures are really, really useful and something you want to learn about sooner rather than later, but I don’t think they’re necessarily Chapter 1, page 1 concepts. So overall, I like them being objects. The reason they weren’t objects in the original proposal is that it helped explain, from a modelling aspect, how the language collects some of the other behaviors, but I think the weighting in that tradeoff actually goes the other way.

[Slide: https://docs.google.com/presentation/d/1uONn7T91lfZDV4frCsxpwd1QB_pU3P7F6V2j9jEPnA8/edit#slide=id.g2c6eebea946_0_5]

ACE: The other part of the proposal (we found the original commits, from 2019) is that back then these things were deeply immutable: the only things you could put in them were other Records and Tuples and primitives. And the feedback we got on that was: that’s great, but it really cuts off a lot of the language. Maybe if we were designing the language from scratch we could do that, but the ship has already sailed.
ACE: Pretty much everything in the language is mutable unless you do work to stop it being that way. And I just felt really sorry for the people that are going to use the new things we are building, like Temporal, which is fantastic and has an immutable data model. If someone thinks "I am using immutable data": these aren’t like old-school Dates, which are internally mutable and you can change what time they point to. Everything in Temporal’s data model is immutable, but they are still actually mutable objects; you can add new properties to them. So if, from a Records and Tuples perspective, the thing has to be frozen or stable or fixed, in MM’s sense, then it would cut off large parts of the language, even things that from a user’s perspective do follow the rule of being immutable. And maybe there are still ways that a linting tool might spot little mistakes where people lose immutability. We talked about having a Box-like object that lets you opt out, but you have to encode it into the data model, and it makes things much harder to adopt.

[Slide: https://docs.google.com/presentation/d/1uONn7T91lfZDV4frCsxpwd1QB_pU3P7F6V2j9jEPnA8/edit#slide=id.g337bd48536b_3_6]

ACE: So my preference is actually to weaken this. I was trying to think: we could maybe say these things are deeply immutable, and what else in the language would that correspond to? In some ways, that could correspond to the shared structs model; shared structs can contain structs and shared arrays and most primitives. If we go in that direction, we model things on shared structs, so there are cohesive ideas. I don’t want to do that if the committee feels like that and I am the only one thinking we shouldn’t do it.

[Slide: https://docs.google.com/presentation/d/1uONn7T91lfZDV4frCsxpwd1QB_pU3P7F6V2j9jEPnA8/edit#slide=id.g3315cd10b42_0_0]

ACE: So yeah, there’s a tradeoff, and I think overall the flexibility is worth them being only shallowly immutable.

[Slide: https://docs.google.com/presentation/d/1uONn7T91lfZDV4frCsxpwd1QB_pU3P7F6V2j9jEPnA8/edit#slide=id.g336fbd25823_0_19]

ACE: Part of me was wondering how much this fills up TCQ, and whether I should stop here and drain it down. Moving on to the next bit: I think these are great for giving developers really ergonomic access to immutable data structures. But I see Waldemar in the queue; I would like to hear his point right now.

WH: Regarding the previous Venn diagram you had: as far as I can tell, the records and tuples are ad hoc. It would be useful to be able to create immutable structs. How would that work?

ACE: Could you say—I think yes, I agree it would be useful. Could you rephrase the question?

WH: The syntaxes you have shown all create objects with arbitrary shapes. The thing about a struct is, if you know the type of a struct, you know its shape.

ACE: Yeah. So I think –

WH: How would you create immutable ones which are actually structs?

ACE: One option is to think about these things as anonymous structs. You could create structs on the fly, like evalling; that I think already works in the shared structs proposal. You could imagine it’s similar to that: when you create the record, that defines a struct that has those fields and is fixed. Or there could be other syntax.

WH: As a user I have a dilemma. Let’s say I want to create immutable Points. I could either write `#{x: 4, y: -17}`, which gives me immutability but doesn’t provide a shaped type. Or I could define a struct `Point` and create instances of it, which gives me a consistent shape for all Points but does not give me immutability. Some people will choose the first one. Some people will choose the second one. We’ll end up creating a stylistic schism with religious wars on the boundary between them.
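Sketching WH's dilemma concretely. Both forms are hypothetical syntax: record literals from the old proposal, struct declarations from the structs proposal.

```js
// Option 1: immutable, but the shape is ad hoc; two point literals
// share no declared type.
const p1 = #{ x: 4, y: -17 };

// Option 2: a declared, consistent shape, but instances remain
// mutable after construction.
struct Point { x; y; }
const p2 = new Point();
p2.x = 4;
p2.y = -17;
```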
DE: I put myself on the queue. ACE’s thought is that all records and tuples are structs already. As for shared structs: at the time of creation of one of these records and tuples, it will already be stable, whether or not each of the things inside of it is shared-structs compatible. Your record or tuple is shared if all of the things inside of it are shared; it becomes a shared struct or a non-shared struct. At that point, we still do have the false coupling between whether it’s nominal and whether the things inside are immutable; you need initializer lists and stuff like that. But at least in terms of being immutable and shareable, I think that could be handled transparently.

WH: I wasn’t talking about shared structs. Simple structs.

DE: Okay. Yeah. I don’t have a solution to that yet. This is falsely coupling mutability with being nominal and having methods. But I think that’s okay because, similar to how we coupled privacy with classes, when you have something immutable it’s often data. False coupling.

WH: Okay.

SYG: Yeah. DE mentioned initializer lists. I agree that it’s pretty cool to have immutable structs, shared structs notwithstanding for now. The problem is basically user constructors: the way it works today in the proposal is that you get this one-shot initialization, so by the time any of these values escapes to user code, it has the declared own properties already on it. And then, you know, you can mutate it. But if you want something to be born immutable, that won’t work. So then you have the problem of, well, how do you limit access to this thing while it’s in this initialization phase, such that after that is over, it is then immutable?

ACE: Yeah.

SYG: I don’t know the solution to that. If we find one, that would be nice.

ACE: Yeah. A similar thing exists in Java’s Valhalla; for them it’s easier because the syntax isn’t `this.field =`. You can say `field =` and it understands that. I have ideas in mind; I can share them in Matrix later. I’m pleased the possibility of an immutable struct is around.

SYG: And to finish the thought: if we bring shared into the picture, there it’s actually a hard requirement that the shared struct instance *must* not escape until it is basically fully done. If you want that shared struct to be immutable, you basically can’t run user code until it’s fully baked and can escape to user code, which means it can escape the local thread, because if you let it escape when it’s half done, you could get into badness, basically.

DE: SYG, does my story about records and tuples, where they become shared themselves if they contain all shareable things, seem plausible to you? From WH’s question.

SYG: I think so. I am not sure; it seems possible to me, and I need to think it through. I imagine it’s something like: if it’s immutable, and all the things that are used in the literal syntax are primitives or shared structs, that automatically marks the record or tuple that comes out of it as a shared thing.
SYG: It’s a derived property of how you construct it. Is that how you had it in mind?

DE: Yeah.

SYG: That seems possible. But I don’t want to say whether that’s desirable yet, because there is some cost to allocating a thing in shared space if you don’t ever intend it to be shared. Right? So maybe you want to let the programmer control that intent, instead of inferring shareable and therefore putting it in the shared space.

[Slide: https://docs.google.com/presentation/d/1uONn7T91lfZDV4frCsxpwd1QB_pU3P7F6V2j9jEPnA8/edit#slide=id.g336fbd25823_0_19]

ACE: Thanks. I am going to move on. So, a problem case centres around Maps. We have had Maps in the language for a while now. They are great, and I’m pleased we have them. But it really does feel like the only things I can use as a key are strings or numbers.

[Slide: https://docs.google.com/presentation/d/1uONn7T91lfZDV4frCsxpwd1QB_pU3P7F6V2j9jEPnA8/edit#slide=id.g336fbd25823_0_40]

ACE: In terms of general programming, yes, I could put `true` and `false` as keys, or objects as keys, and it happens. But when I am using Maps, it seems like in the majority of cases the key is a string or a number, because that’s the thing that works best. `Map.groupBy` works fantastically when I am grouping by a single numeric or string value. But I very quickly hit complexity when I am grouping by two values. If I thought I could return a pair, that’s fundamentally wrong: I am creating a new object every time, so I am not really grouping by anything. I have no data on this, but anecdotally, people use strings because they are right there in the language and it works: you can construct a string that represents the multiple bits of data, and now you will group by those values. But this has a bunch of flaws, or annoyances. Now I have a map filled with string keys, and when I iterate over the map, it’s a nuisance to extract those values back out. I also see people typically use things like `JSON.stringify` here, which may work until the key order changes between different objects, and then it breaks and they don’t notice. So I don’t feel like we have a great answer today for what people should do here.

[Slide: https://docs.google.com/presentation/d/1uONn7T91lfZDV4frCsxpwd1QB_pU3P7F6V2j9jEPnA8/edit#slide=id.g33754a471b1_0_0]

ACE: The thing you technically can do, and I don’t see this happening a lot, is construct composite keys using objects. You use object identity with Maps to get the keying behavior you need, but the key is still a descriptive object with the separate bits of data inside, rather than being compressed down into a string.

[Slide: https://docs.google.com/presentation/d/1uONn7T91lfZDV4frCsxpwd1QB_pU3P7F6V2j9jEPnA8/edit#slide=id.g336fbd25823_0_76]

ACE: And you can do that today, in userland, without needing any proposals, and there are a bunch of npm libraries that do this. The way they fundamentally work is that they take the vector of things you give them, and then use a series of maps to walk through that vector to find, in the infinite space of all possible JavaScript values, the point where that value lives; they create an object there, and it becomes the object that represents this key. If you just used Maps for that, this would leak like crazy.
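A minimal sketch of that nested-map technique as a hypothetical helper. It deliberately uses plain Maps, so it has exactly the leak ACE mentions: every key vector ever requested is retained forever.

```js
const root = new Map();

function compositeKey(...parts) {
  // Walk one map level per key part.
  let node = root;
  for (const part of parts) {
    if (!node.has(part)) node.set(part, new Map());
    node = node.get(part);
  }
  // Lazily create the identity object at the leaf (the function itself
  // serves as a private sentinel key).
  if (!node.has(compositeKey)) node.set(compositeKey, {});
  return node.get(compositeKey);
}

compositeKey(1, 'a') === compositeKey(1, 'a'); // true: usable as a Map key
```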
ACE: So to reduce this leaking, you use WeakMaps for the object parts, and then you can also use a FinalizationRegistry to clean things up as well. Most libraries I’ve seen use the WeakMap part but don’t use the FinalizationRegistry trick, so they leak a bit. But this is doable in userland today.

[Slide: https://docs.google.com/presentation/d/1uONn7T91lfZDV4frCsxpwd1QB_pU3P7F6V2j9jEPnA8/edit#slide=id.g3315cd10b42_0_10]

ACE: The pros: you can do this today, in plain JavaScript, and it just works because it’s using `===` equality. Everywhere in the language you have equality, it works; all the different equality operations we have agree on what happens with two objects. The con, which is evident from this, is a lot of overhead. For a key of n things you are creating n maps, potentially; even if a key only varies a little bit, once you diverge in one direction, everything after that point is all new maps. You are creating a lot of objects. Another thing I think happens here (I don’t have data on this, and I am not a browser implementor) is that the garbage collector hates this a little bit. I think what ends up happening is that you get a lot of references from old space to new space, so you don’t get the generational benefits if you really, really stressed this pattern. I would be interested to hear if that’s not the case, or if there are some interesting papers on how to mitigate it. But I do think there’s a lot of overhead to this approach. An approach built into the language using a similar technique could maybe do slightly better than userland, but I don’t think it would be magically faster than userland: it would have similar overheads, and it does sound like there are complexities with GC.

WH: WeakMaps work for keys which are objects. But look at the last two key parts on the slide, which are primitives and can’t be put into a WeakMap. Don’t these leak?

ACE: Yes. The trick here is that lots of npm libraries leak if you keep creating keys that all share the same object part but have, say, a different BigInt, because they rely on the objects going away. The way to mitigate that, which I have done in my implementation, is to have a FinalizationRegistry on the composite key, and if that is finalized, you clean up in the reverse direction: when a map becomes empty, you tell the map above you to remove its entry. It cleans up in the reverse direction. So that helps these things not leak, though it’s very easy to make them leak if you hold them wrong.

ACE: The other downside of these keys: let’s say records and tuples were shared structs, similar to what SYG was saying, there’s a cost to allocating shared structs. People creating the immutable data structures just to get the immutability, and not because they are planning on doing `===`, would pay a creation-time cost that they never claim back. You have to do this work eagerly to do this at all. That’s a bit of a shame; people might say “please don’t use these here because they’re too slow to create”.

ACE: I’m pretty sure this causes GC complexity, which we could see as an opportunity for some really good GC research, or we could say that’s the reason we wouldn’t go in this direction. And I think there are people in committee that want negative zero, and this approach in turn means you don’t get negative zero. It’s another thing the npm libraries always get wrong: they always forget about negative zero.
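A rough sketch of the reverse-direction cleanup ACE describes, under the assumption that leaf key objects are held via WeakRef; a full implementation would keep walking up the map chain as maps become empty.

```js
// When a composite-key leaf is collected, delete the map entry that
// led to it, so paths with primitive parts do not accumulate forever.
const registry = new FinalizationRegistry(({ parent, key }) => {
  parent.delete(key);
});

function leafFor(parent, key) {
  const existing = parent.get(key)?.deref();
  if (existing !== undefined) return existing;
  const leaf = {};
  parent.set(key, new WeakRef(leaf)); // held weakly, so it can be finalized
  registry.register(leaf, { parent, key });
  return leaf;
}
```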
[Slide: https://docs.google.com/presentation/d/1uONn7T91lfZDV4frCsxpwd1QB_pU3P7F6V2j9jEPnA8/edit#slide=id.g336fbd25823_0_47]

ACE: So, going back to my meditating-in-the-desert image: I was thinking you don’t necessarily have to intern these things. What we have is an opportunity. If we added these to the language, we get this one-time opportunity to decide what the semantics are. We could say that these new things added to the language have new behavior; they introduce a new capability, to steal a word from DE’s slides that haven’t been presented yet. We can say that in particular APIs in the language, even if two of these things are not triple-equal to each other, they could still be treated as equal.

ACE: So maybe we could actually say that when you use these things in a Map, the Map checks them, sees they’re Records and Tuples, applies the new semantics, and this line here would work.

[Slide: https://docs.google.com/presentation/d/1uONn7T91lfZDV4frCsxpwd1QB_pU3P7F6V2j9jEPnA8/edit#slide=id.g2af82517ce6_0_31]

ACE: Why could we possibly do that? I think we could, if we wanted to, because they wouldn’t violate any of the requirements we put on Maps and Sets today. These things would be stable (fixed, in MM’s world). The equality they could offer would meet all of the existing equality rules that we need for a Map and a Set. So they wouldn’t violate any of the things that we would want a Map and Set key to do today. And it would be backwards compatible, because these things don’t exist yet.

KG: It has not been clear to me throughout this presentation whether these were triple-equals or not.

ACE: It’s not clear to me either. I think they can’t be, because (I’m putting words in implementers’ mouths) I think implementers will say that would be too slow, unless maybe there’s a way that interning costs could be reduced. From everything I’ve been told, and everything I understand about the engines, I feel like the same veto we got on these being primitives applies here: them being triple-equal would be nice from a semantic point of view, but the fact that these things actually have to run and be performant means they wouldn’t be.

DE: I’ve been assuming they’re not triple-equals as well. This is because the previous time that Records and Tuples were proposed for Stage 3, we got extremely explicit feedback from implementers that neither strategy would work: not interning, because the cost is too high, and not doing the deep comparison in place, because it’s too important that triple-equals on objects is just a pointer comparison. So the proposal that Ashley is talking about tries to work within that.

JGT: When I hear about Records and Tuples I think about React and the way React deals with its property updates. For folks who aren’t familiar, React will re-render the component if any of the properties that you pass it are different, and "different" is `Object.is`. So what is interesting about this case is: is there something that could work with React, that would be backwards compatible enough to make it into React, that would solve that problem? What is nice about it is that it’s not the user doing triple-equals or calling `Object.is`; it’s the framework.
JGT: This problem is actually a huge problem for React development: if it’s a string, you just pass in the prop; if it’s not a string, you have to bend over backwards and do all this crazy stuff to make it work. There are many libraries whose whole job is to make this easier, and you still screw it up all the time. I wonder if the way to approach this problem might be to look at various use cases, not unlike the previous discussion we had earlier today, and even if you can’t solve all of them, can you find particular use cases, like Maps, like React, or whatever, that might have at least partial solutions that could add value? So my main recommendation would be to try to carve out the use-case space: I’m imagining some grid of, here are six use cases for this kind of problem, here are the various approaches, and here is which one is better. That might be a good way to visualize this.

DE: We’ve done that exercise very extensively over the past six years on this question in particular. We have gotten feedback from the React team that they don’t want deep equality done in these cases in particular: if you have the tree of state passed down through props and it’s iterated and stepped down through, doing a deep comparison would be too much work. This proposal suits their requirements by providing the identity comparison that they would continue to use by default. In certain cases, maybe you want to opt into structural comparison. The good thing is that we got negative feedback from the React team about the previous version of the proposal, which only offers structural comparison; they said, we want the fast path where we just do identity comparison. So in that sense, this meets the –

JGT: Can you clarify: if you don’t have triple-equal support, how do you get identity comparison?

DE: Let’s go through the rest of the slides, because it answers this question. And then we can do the queue afterwards.

[Slide: https://docs.google.com/presentation/d/1uONn7T91lfZDV4frCsxpwd1QB_pU3P7F6V2j9jEPnA8/edit#slide=id.g336fbd25823_0_23]

ACE: The way I was imagining this would work is that when you create these things, they’re tagged with an internal slot, and there could be a brand check for that. The equality would still be recursive, even if these things are only shallowly immutable: if the values inside them are other things with this tag, then the equality is deep for as long as you stay within Records and Tuples, and as soon as you hit a mutable object, you fall back to referential equality at that point. Implementation details like these wouldn’t be part of the spec, but they would apply the same way to Maps and Sets: implementations can still use hash codes to help when you’re putting these in Maps and Sets, rather than having to compute equality every single time; you can cache the hash code, much like you can with strings. Crucially, a big part of why this would work in Maps and Sets without changing them is that these things have this tag from birth. You can’t put an object in a Map or a Set where it’s compared by reference and then later install this slot, changing its equality: once you put it into the Map or the Set, its equality semantics never change.
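A sketch of the equality ACE is describing, not spec text. `isComposite` stands in for the internal-slot brand check; values are compared recursively only while both sides carry the tag.

```js
function compositeEquals(a, b, isComposite) {
  if (isComposite(a) && isComposite(b)) {
    const aKeys = Reflect.ownKeys(a);
    const bKeys = Reflect.ownKeys(b);
    if (aKeys.length !== bKeys.length) return false;
    return aKeys.every(
      (k) => Object.hasOwn(b, k) && compositeEquals(a[k], b[k], isComposite),
    );
  }
  // At the first untagged (mutable) value, fall back to SameValueZero:
  // like ===, except NaN equals NaN.
  return a === b || (Number.isNaN(a) && Number.isNaN(b));
}
```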
ACE: And then, going back to the earlier point about why I think syntax is nice here: because these things have to have the slot from birth, an API of "give me an object and I will turn it into a record" means you have to double-allocate everything, as opposed to syntax, where you can immediately jump to creating the final result.

[Slide: https://docs.google.com/presentation/d/1uONn7T91lfZDV4frCsxpwd1QB_pU3P7F6V2j9jEPnA8/edit#slide=id.g3350f6676e7_0_14]

ACE: So the advantages of this approach would be: we don’t pay the complexity of interning, and we have the choice, if we wanted, to make these things work with the APIs we already have in the language. Not all APIs (they wouldn’t work with `===`), but they could work with Maps and Sets, and with other upcoming proposals, like `uniqBy`, where composite keys could be useful.

[Slide: https://docs.google.com/presentation/d/1uONn7T91lfZDV4frCsxpwd1QB_pU3P7F6V2j9jEPnA8/edit#slide=id.g3350f6676e7_0_9]

ACE: The downsides, why we might not want to do this: JavaScript already has four different forms of equality, and adding a fifth is a hard pill to swallow. Interning wouldn’t cause us to do that. We could completely replace SameValueZero everywhere we use it with this; that’s probably a terrifying prospect. Modifying Maps and Sets to have new semantics is also really annoying for polyfills. Maybe that’s why we wouldn’t want to do this. But coming at this purely from the perspective of the end JavaScript user, I think they are really the ones that would benefit if we did do this. Because if we instead had new APIs and created a new type of Map, CompositeMap, and then a JavaScript developer asked "when would I choose a regular Map versus a CompositeMap?", the answer 99% of the time would be: you can use a CompositeMap, because if you’re putting strings and numbers in it, it keeps working the same, and if you are putting records and tuples in it, you always want the record-and-tuple equality. It would feel like a shame to have a Map and then a second type of Map, where we’re almost presenting it as if there’s a choice, but there really isn’t a choice. The reason we’d be doing it is so these things layer, similar to a polyfill, and maybe similar for engines to implement as well at this point. Purely from the JavaScript-user perspective, I think it makes sense that these don’t introduce new container types. But I can see the argument for why we wouldn’t go that route. Let’s discuss; I can come back to the React thing. I can see there’s stuff on the queue.

DE: Just a clarifying question about the last slide: what are you suggesting it be? Should it be replacing SameValueZero, or are you suggesting something else?

ACE: If it were up to me, I would replace SameValueZero. But I can just sense from meeting people that that won’t be palatable. I would love to be able to convince the committee that it was. I can just see it being an uphill struggle to convince people. I’m prepared to fight for it.

ACE: On the React thing: we can’t really do this in a backwards-compatible way; these things wouldn’t be `Object.is`-equal if they’re not literally the same object. If React wanted to switch, they would need to switch to the new equality predicate. When we talked with the React team years ago about whether they would recommend Records and Tuples, they said probably not with the old semantics, because they actually prefer the React compiler approach and more granular local updates.
ACE: The thing with the React compiler is that it creates very precise comparisons, unique per call site. Say you’re creating a record with ten properties and only one of them can change: the compiler can say, I only need to check that one property, and I don’t have to check the rest, because I can see at the React compiler stage that they don’t change. So that’s the way React is now solving the problem. Where that doesn’t help is when it isn’t a local thing: multiple data sources creating data that all flows into one React component, with that React component still wanting to normalize. That’s the case that I’m still hearing about from React developers. They’re saying, even with React compiler, we still want `===` for that case. But I think the React use case is really reduced now that there is the React compiler.

JRL: Specifically between DE and JGT: JGT suggested using Records and Tuples for props, and DE responded from a different point of view, about trees. If we had it for props: the discussion we just had about the React compiler makes it less necessary, but we would still need it for the individual sub-props inside, so it could be used there. But for the React VDOM tree, it’s absolutely horrible: it turns what is currently a linear algorithm quadratic. That was Daniel’s point; there were two different discussions being discussed.

DE: I was talking about the state tree, and it’s horrible for –

ACE: A lot of people said Records and Tuples would be great for the VDOM, but it’s the exact opposite: counterproductive for the VDOM.

WH: Maybe I missed it, but what does `equalRecords` do when it gets to a non-record? Is it SameValueZero or something else?

ACE: Yeah, SameValueZero. We would have to do a modified version of SameValueZero; it would be the new SameValueZero that takes Records and Tuples into account.

KG: Since this is a temperature check, I am positive. That’s the main thing I want out of Records and Tuples: composite keys that work with existing APIs like groupBy and uniqBy and Map and Set would be great. There’s a handful of details. It also doesn’t necessarily require syntax to work; there was an old proposal for doing this just as a `compositeKey` built-in.

ACE: Yeah, absolutely.

KG: Anyway, this is the main thing that I personally want out of Records and Tuples. I would be happy with this direction.

MF [via queue]: +1 to everything that KG said

LCA: I’d phrase this as: it’s unfortunate if we cannot make this work. It would be very, very cool if the built-ins that are effectively deeply immutable, for example all the new Temporal types, could work inside of a composite type—sorry, composite key. I don’t know exactly how that could work. If this is an isTuple or isRecord internal slot, would it be possible for us, down the line, once this ships (if this ever ships), to add that to the Temporal types, for example? There’s obviously the backwards-compatibility concern: if you have a Set that consists of multiple different Temporal-type objects, whether that would keep working. And there’s the other question that if there’s no way of doing this in userland, you could never polyfill Temporal correctly, which would suck. But it would be really cool if we could investigate this as part of that.
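The pain point LCA is pointing at, for concreteness. This assumes a Temporal implementation is available; Temporal objects compare structurally via their own methods, but have ordinary object identity in Maps.

```js
// Two structurally identical PlainDates are distinct Map keys today.
const seen = new Map();
seen.set(Temporal.PlainDate.from('2025-02-18'), 'meeting day');

console.log(seen.get(Temporal.PlainDate.from('2025-02-18'))); // undefined
// Callers fall back to string keys instead, e.g. date.toString().
```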
ACE: The way I think you could do it—if we could go back in time… Many years ago, we purposely chose not to tie Temporal to Records and Tuples, and I think that was the right choice, because Temporal would be even further away from Stage 4, and it’s so great that Temporal is close. I think the way we could do it is that we could have a `toImmutable` or `toRecord` or `toFixed` or something, so that developers could upgrade a value to the point where it participates. I don’t think we would be able to be compatible and just have them automatically work this way; I think it’s too late for that. But I don’t think we’re completely cut off from doing anything in this space. Did you want to add to that?

DE: Yeah. I don’t see the path that you’re describing as particularly reasonable; it would be kind of annoying to write that all over your code using Temporal. But in general, because you can put Temporal objects in as Map or Set keys, this is something that we would have to decide now; you don’t need Records and Tuples to make structural comparison of them relevant. Previously, Temporal had custom calendars and time zones, which offered their own extra identity issues. Now the only thing in play is the prototype itself, which I imagine we wouldn’t really want to participate in comparison. Anyway, I think it makes sense to ship Temporal as-is, without the structural comparison. So I guess I agree with the way you originally phrased your statement.

LCA: If I could respond to that: I agree with you that we should absolutely not hold Temporal back for this. I’ll let the rest of the queue go on.

MAH: I was wondering, not just for this but in general, whether it would solve the problem of creating custom immutable objects that you can put in with these record semantics, if you could just set the prototype when you define the record.

[Slide: https://docs.google.com/presentation/d/1uONn7T91lfZDV4frCsxpwd1QB_pU3P7F6V2j9jEPnA8/edit#slide=id.g336fbd25823_0_52]

ACE: I had a hidden slide. I think we could say this is valid syntax, where we create things with a custom prototype, so you can still create things that are themselves immutable but have inheritable methods; you can create something that still has the benefits while operating in that domain, rather than being just plain data with no methods. I think, yes.

LCA: Importantly, the prototype would not participate in equality.

ACE: I think it would participate in the sense that the two things have to have the same prototype, but that would be it. Otherwise it would be effectively the same as the other things.

KG: I think this would be pretty tricky. The questions around this get kind of funny. If you are thinking of this primarily as composite keys, the answer would clearly be no, because it’s just a key; it doesn’t make sense for it to have anything other than a null prototype. Whereas if you’re thinking of these as more general-purpose objects, you have to think about prototypes. In other languages you want equality to hold for subtypes sometimes and not other times, and it’s a much bigger world of questions to explore once you allow prototypes other than null.

ACE: I do feel like there is space for custom equality, but that would be a symbol protocol, and that definitely wouldn’t work out of the box for Maps and Sets, because I think it violates the principle that once something is in a Map or a Set, its equality can’t change.
ACE: For those cases where you want subclasses, and maybe the way the equality works varies (do you care about the case-insensitive-string kind of thing?), all of that, to me, seems like the domain of a symbol protocol. The thing here is that there’s no symbol protocol; it’s kind of set and fixed.

LCA: Just to respond to this one more time: I agree this is complicated. But as we have seen with shared structs, not having prototypes is unergonomic in many cases, and a lot of complexity is now being added to shared structs to enable prototypes. It depends on the use cases. It would be nice if you could have a 2D point that you can add to a map and it works correctly, while you could still have methods on it. I agree it’s complicated.

KG: It depends what the use cases are. If it’s just a composite key, you don’t care about those things. If it’s more general, you care about those things. It really depends on how we’re phrasing this, or what we think the main value is. For me, the main value is just composite keys. But other people have other things that they care about; I’m not the only user.

MAH: I like the direction; obviously there’s been a lot of discussion in Matrix. The question, if we have these as objects and they work transparently with Maps and Sets and so on, is: how does new code introduce these objects, and how does old code that uses Map, Set, WeakMap and so on behave when it encounters them? Right now, if you have two things that are not equal (except for NaN), they will end up as separate entries in Maps or Sets. And if you have something that is an object, you expect to be able to hold it weakly. So there are a lot of details here that will need to be figured out. I hope we can figure something out, but I’m worried that there are going to be more difficulties down the road.

ACE: I’m looking forward to seeing the conversation in Matrix.

DE: How do you think we should investigate that?

MAH: I don’t know. I usually feel that libraries are allowed to have these expectations, and I don’t know if we can break them that easily.

ACE: My hypothesis is that this model we are worried about in committee wouldn’t hold up in practice, and that libraries taking in third-party objects and putting them in Maps and Sets aren’t relying on this in the way we think they theoretically could. I’m biased; that’s what I’m hoping for.

SYG: On this slide, what does "no interning overhead" mean? I might have missed it.

ACE: Sure. Creating these things would be (putting shared structs aside) roughly as expensive as creating a regular object, plus perhaps a few additional checks. You wouldn’t have to go to a global table, see if an identical one already exists, and then use that existing object. So you’re not actually having to do structural sharing.

SYG: I see.

ACE: These are regular objects, with maybe one extra internal field, and maybe a precomputed hash value. I’m not saying zero cost, but less cost than a structural-sharing approach.

SYG: I see, okay.

JGT: I’ll just clarify, because I think JRL was accurate that there are two very different issues around React. I think the biggest developer-experience win would be for props; nobody really manipulates the VDOM unless they’re really advanced, and those React developers can do the hard stuff. Anything like LCA’s example of a 2D point, where you have a graphing component taking a 2D point, is easier than cracking out the x and y every time you use it.

ACE: This is exactly what I was hoping this discussion would do.
The feedback was very welcome. So thank you.

### Speaker's Summary of Key Points

* Recapped some of the history of the R&T proposal, specifically that the design would need to not add new primitives and that there is no appetite to overload `===`.
* Presented a potential new design that works within the previously stated constraints. The design includes syntax (though this could be optional) and shallow immutability, and provides compositeKey-style equality for existing APIs, but does not provide new `===` equality semantics.
* There was also discussion on how the proposal might interact with the structs proposal.

### Conclusion

* Feedback was generally positive to continue exploring this direction.
* There was some feedback on the potential for complexity when getting into the details, such as whether existing code expects objects in a Map to only ever use reference equality.

## Use cases for ShadowRealm

Presenter: Philip Chimento (PFC)

* [proposal](https://github.com/tc39/proposal-shadowrealm)

PFC: Yesterday in my segment on ShadowRealms, we talked a bit about use cases. And I thought I would do a short addendum on how I see this request for use cases, because I think in general, if we’re talking about proposals that intersect TC39 and the wider web platform world, we’re often talking about different things when we talk about use cases. So I want to say upfront: I’ve been involved with TC39 for five years, and I have a fairly good idea of what we want in this committee. That is not the case for the wider web platform world. I could be wrong; I invite you to tell me how I’m wrong at the end of the presentation. So as part of my work on ShadowRealms, I ran across this document, "Writing effective explainers for W3C TAG review". If you haven’t seen it: I haven’t published these slides anywhere yet, but I will put the link in the chat afterwards. It is a very nice document that explains what the W3C TAG wants when you ask them for a review of a proposal. So, again, don’t take what I say as an official pronouncement; this is my interpretation. These are quotes from that document that I just mentioned. They ask you to describe the problem that your proposed feature aims to solve from an end user’s perspective. That is not emphasis that I added; it is in bold in the document. They seem to find it important. There is another paragraph further down: start with a clear description of the end user problem you’re trying to solve, even if the connection is complex or you discovered the problem by talking to web developers who emphasized their own needs. That’s an interesting phrasing, which says to me that you may conceive of a feature because it fulfills developers’ needs, but you need to describe it in a way that fulfills end users’ needs if you want them to pay attention to it.

PFC: So again, this is my interpretation. I take this to mean that cultural norms in that community dictate that "this feature will allow developers to do such and such a cool thing" is not going to be taken seriously as a use case. I think that’s what I mean when I say that sometimes we are talking at cross purposes about use cases when we have proposals that intersect these two worlds.

PFC: That doesn’t happen very often, but I think it does happen in the ShadowRealm case, because one of the things we want before advancing ShadowRealm to Stage 3 in this committee is integration with web platform APIs, and that needs web platform buy-in. It happens for a few other proposals like AsyncContext.
But for most of the proposals we talk about, this is our house and our rules, and we decide the use cases that we like. ShadowRealm lives in two houses and has to abide by two sets of rules.

PFC: So this is not verbatim any particular phrasing of a use case that we have provided; this is my paraphrasing. This is the kind of use case for ShadowRealm we’ve been talking about so far: ShadowRealm lets you run third-party scripts quickly and synchronously with integrity preservation, and allows accommodating building blocks from different authors that might conflict with each other. I think this is a perfectly valid use case from our perspective. This makes me think that ShadowRealm is a valuable addition to the language. But it doesn’t mention anything about the end user. I think when we hear "give us use cases" from the web platform and we give them this, that’s not what they’re asking for.

PFC: So this, in my opinion, might be a way to rewrite the thing on the previous slide from an end user perspective. Large platforms like web applications often allow customization via plug-ins. In JavaScript, most built-in stuff is overwritable, so badly behaved plug-ins are always a concern. When application writers have a way of segmenting off and isolating code they don’t control, they can deliver a more stable experience to users. This is shortened for the slide, but I would say something about how maybe a customer of that platform installs 19 plugins, 10 of them written by the customer itself, and you still get stability in that case, because you can’t count on the code quality of the plug-ins, whatever. Rather than "this allows developers to do such and such", I think focusing on what developers can build for end users that they couldn’t build before is the kind of thing we need to provide when we’re giving use cases to web platform folks.
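As a rough illustration of that plugin scenario (the plugin path and export name below are hypothetical), the proposed ShadowRealm API would be used along these lines:

```js
// Evaluate untrusted plugin code in a separate realm with its own fresh set
// of built-ins, so it cannot clobber the host application's globals.
const realm = new ShadowRealm();

// importValue returns a promise for the named export; only primitives and
// callables may cross the boundary, which preserves integrity on both sides.
const render = await realm.importValue("./plugins/chart.js", "render");

// Pass data across the callable boundary as a primitive (here, a JSON string).
const svg = render(JSON.stringify({ points: [[0, 1], [1, 3]] }));
```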
PFC: So that’s my interpretation. I’ve discussed this with a few people and heard reactions ranging from 'sure, that makes sense' to 'I don’t think you’re right about this.' So I’d like to invite discussion here. What do you think about this? What has been your experience with proposals that intersect those two communities?

MM: So I want to be explicit about supply chain risk as a use case. It’s kind of implicit in a lot of what you said, but I think it’s worth making explicit, and it has a more vivid case for the end user than is obvious from the way you put supply chain risk in your presentation. There have been attack after attack after attack where some third-party component was revised to attack the users of programs that use that component. Several of these are very famous cases. Now, Hardened JavaScript, LavaMoat by MetaMask, and XS are all trying to provide good mechanisms for supply chain risk when the elements can live within the restrictions of hardened JavaScript. If we fix the override mistake, then a tremendously larger number of existing npm packages will in fact be compatible with hardened JavaScript, so that we can apply all of the supply-chain-risk protections to them. But a lot of code can’t live within the restrictions of hardened JavaScript, in which case you cannot protect packages from each other in a single realm.

MM: What ShadowRealm gives us, especially with the boundary that the committee put on us, which we ended up being overjoyed to have accepted, is that you can take programs that cannot be run under HardenedJS because, for example, they modify the primordials at runtime in ways that HardenedJS must prevent. ShadowRealm enables a much heavier-weight protection domain, the realm, but enables the same protections between the protection domains without constraining the code within them: each protection domain can do what it likes within its own realm while still being protected from the others. With regard to the JavaScript ecosystem, I just want to reiterate the figure that I heard many years ago, which I believe is still correct: the typical JavaScript application, according to npm statistics as of some years ago, is 3% code specific to the application and 97% code linked in from third-party libraries, often through third parties that many are unaware of. Many supply chain attacks come from dependencies deep down the dependency chain.

PFC: Okay. I think supply chain risk is a good one; it’s very easy to explain that in terms of benefits to the user. Thanks.

DE: I’m wondering if we could get more feedback from browsers on what they think of Philip’s explained use case. What kinds of evidence would be interesting for you in evaluating whether this is a good use case? It’s okay if you don’t have the answer now. Maybe you, or the champions, could get back to us between now and, you know, some time in the future.

SYG: I’m not the one doing the evaluation. I think the two-houses metaphor is apt here. This is kind of stuck right now because the bar in WHATWG is active interest from the browsers in implementing this on the non-JS-engine side. That’s the bar that you need to clear. And asking this room what the browser representatives from the JS engines think of your use case doesn’t progress towards that goal, as far as I can tell.

PFC: Yeah, that’s my understanding as well. It's also why in my presentation yesterday I didn't spend very much time talking about use cases: my impression was that we talked about those already, this room is pretty much convinced, and it’s elsewhere that we need to do the convincing. But if you do have any remarks or meta-discussion about the way that we present our use cases, or whether you think my interpretation of what is going to be convincing is correct or incorrect, I would love to hear that.

SYG: I think that makes it sound like you are asking the browser JS engine representatives if we would like to help champion this proposal along with you. Is that what you’re asking?

PFC: No. I’m not asking anybody to do anything. I put up an understanding that I have of the way that things need to be communicated. I’m asking you, or anybody in general, if that rings true to you.

SYG: I see.

MS: So basically, as SYG said (we talked about this yesterday), this is more on the DOM side and the W3C APIs at this point.

JHD: So ideally our process is set up so that the stage advancements, which are intended to be signals, are those signals. What I’m hearing is that the browser TC39 reps aren’t the ones—like, it’s a different group or team making those decisions, at least in the browser. We certainly don’t have the capability of fixing bureaucracy ever. So I guess I would love to understand; if somebody understands it and I just missed it, I will be quiet. But what should we have checked before 2.7 or 3 or whatever? Who should we have checked with, and so on? To avoid Stage 3 things not being prioritized. And obviously that’s a long-term question; I don’t expect the answer right now. But whatever the answers are, it would be great if we found a way to incorporate them into the process so it’s not a problem in the future. That’s all.
+

KG: I guess I’m on the queue. I think we have. We said that for things which require integration with the greater web platform, that needs to happen as part of the Stage 2.7 to 3 advancement. We demoted ShadowRealm to 2.7 partly for this reason, so we could then have the integration happen. So at least my understanding is that that’s what we have done: we have been saying that you can get 2.7, but if your proposal requires integration with host APIs, you have to get sign-off from the people that you need to integrate with before you can get Stage 3.

JHD: Did we not get that from ShadowRealm?

PFC: No. That’s what I was asking.

NRO: As Kevin said, we’re learning the lessons even if it’s painful. It was painful in the case of ShadowRealm, but we then applied it, before 2.7, to separate proposals like AsyncContext.

MAH: There’s still something that I’m confused about with this specific proposal. This is a JavaScript API, and the browsers asked that hosts should be able to add their own APIs to the global object of ShadowRealms, so this was considered and accepted. And now, from what I understand, we’re hearing that the part of the browser that decides which APIs go on the global gets to relitigate whether the feature should exist at all: not just agreeing on which APIs are or aren’t valid to be on there, but whether the JavaScript API that went through the staging process here is a valid use case for the web at all. Why is it that this feature requires approval from W3C or WHATWG (I’m not sure which one) to be added to the language at all in this case?

PFC: My understanding is the signal in this committee is stronger than what you were saying. The signal in this committee was: we don’t want this feature to exist if it doesn’t integrate with the host APIs. The signal was not that we want this feature regardless, and the host can add APIs if they want to. That is my understanding. Somebody can correct me if I’m wrong about that.

KG: I’m on the queue saying that almost word for word. I don’t want this feature to exist if it doesn’t have TextEncoder and stuff. People shouldn’t have to know about the split where TextEncoder is in a different specification than urlencode or whatever. This is completely irrelevant to almost all users of JavaScript, and I don’t want to add any feature that makes that distinction relevant to them. I don’t know in general which proposals need sign-off from WHATWG, but for this specific proposal, I don’t want it to exist until it has the handful of APIs that have been carefully outlined as making sense as purely computational. So the requirement is coming from inside the committee.

MAH: So we brought this on ourselves?

PFC: Yeah.

JSL: In my experience, another key part of the process that tends to get missed is just agreement on the problem statement up front, right? In WHATWG, a lot of times what they want is to be given a chance to agree that the problem is a problem they’re interested in solving, before the use cases get presented. You know, are the various browsers and various implementers all on the same page? I think a lot of times that ends up getting skipped: "We agree and we think it’s the problem to be solved. What do you think of this solution?" It’s like, no, no: "we think it’s a problem. Do you agree?" Does that make sense?

PFC: That does.

MAH: So is this going to be a problem with WinterTC APIs?

JSL: Yes, a hundred percent.
+

### Speaker's Summary of Key Points

* We discussed the presenter's interpretation of the differences between what it means when we say "use cases" in TC39 and what it means when someone from the W3C community says "use cases". In the web platform world there is a strong emphasis on the benefits to the end-user.

### Conclusion

* None
diff --git a/meetings/2025-02/february-20.md b/meetings/2025-02/february-20.md
new file mode 100644
index 0000000..cf7b021
--- /dev/null
+++ b/meetings/2025-02/february-20.md
@@ -0,0 +1,802 @@
# 106th TC39 Meeting | 20 February 2025

**Attendees:**

| Name | Abbreviation | Organization |
|------------------|--------------|---------------------|
| Chris de Almeida | CDA | IBM |
| Samina Husain | SHN | Ecma |
| Eemeli Aro | EAO | Mozilla |
| Daniel Ehrenberg | DE | Bloomberg |
| Daniel Minor | DLM | Mozilla |
| Ujjwal Sharma | USA | Igalia |
| Art Vandelay | AVY | Vandelay Industries |
| Jesse Alama | JMN | Igalia |
| Ron Buckton | RBN | Microsoft |
| Nicolò Ribaudo | NRO | Igalia |
| Kevin Gibbons | KG | F5 |
| Oliver Medhurst | OMT | Invited Expert |
| Luis Pardo | LFP | Microsoft |
| Dmitry Makhnev | DJM | JetBrains |
| Linus Groh | LGH | Bloomberg |
| Philip Chimento | PFC | Igalia |
| Erik Marks | REK | Consensys |
| Chip Morningstar | CM | Consensys |
| Aki Rose Braun | AKI | Ecma International |
| Istvan Sebestyen | IS | Ecma |
| Michael Saboff | MLS | Apple |
| J. S. Choi | JSC | Invited Expert |

## Decision Making through Consensus - take 2

Presenter: Michael Saboff (MLS)

* [slides](https://github.com/msaboff/tc39/blob/master/TC39%20Consensus%20take%202.pdf)

MLS: This is a meta conversation, a meta discussion: how we work. I presented this at the February meeting last year, so I guess it’s an annual thing. Hopefully it’s not forever. What I would like to talk about is consensus, basically how we work as a committee. So some of this is a review from last time. There are a few different definitions of consensus. It comes from the Latin word consensus, which means agreement: a generally accepted opinion or decision among a group of people; the judgment arrived at by most of those concerned; and group solidarity in sentiment and belief.

MLS: And since TC39 is part of Ecma, what do Ecma bylaws say about consensus? It’s interesting that the bylaws are actually silent about what consensus is. These are the three rules that you find in Ecma (there are bylaws and there are rules). There are three rules about decision making: they talk about simple majority; they say a TC should not use voting unless it’s required; and (this one is not something for us) a member of a TC has the right to ask for a minority report, which they shall provide to be included in the semi-annual report. The interesting thing is that these rules exist, and all the other TCs, and TC39 as well, work by consensus; but in all the other TCs, consensus is basically a "generally accepted view" kind of policy. I think that most TCs do not regularly take votes on things. So what I’d like to do is look at our practice and then talk about it and see, you know, if there are ways to possibly change this. So in most cases we follow the notion of general agreement. We see that at this meeting and most meetings. After most discussions the moderator will ask: do we have consensus or explicit support for Stage 3 of this proposal?
And that seems all well and good. That’s really good that we are able to operate that 9X% of the time. Occasionally someone will speak up and say I withhold consensus and give a reason as to why they withhold consensus. And that makes the process one of unanimity. We must all agree and sometimes we agree by being silent, but we must all agree for something to move forward or for something that we are discussing to happen. One dissenter blocks consensus. And that’s what I would like to talk about today. There is a truism that a single person with holding consensus is basically we call it a block. It’s a veto. We’re vetoing something. A single member of the committee has the power to decide what we do or actually in most cases what we don’t do and that’s what I would like to see if we can change. + +MLS: So here are some of the issues that I have with the current process. If my observation that withholding consensus is generally used by a small number of committee members, I would add that those who whohold consensus or block are typically more vocal and longer serving members and feel comfortable to speak up. Certainly we have members served on the committee for a long time and prominent in the JS world and know JavaScript and the committee and the language and things like that. But the committee as a whole is seeding greater authority to this small group of people. + +MLS: And there’s actually been cases although rare where a single blocker has ended the discussion of a proposal, basically shut that proposal down. And there’s also been cases that I’m personally aware of that somebody that has been blocked has stopped attending. They don’t attend TC39 anymore. Now, I want us to consider as a committee—I can’t remember who I was talking to. I started attending in 2015. It’s hard to believe that nearly ten years has gone by. But I’m considered a newcomer to the committee. And they view this single dissent policy in action. For some it might energize them. Look at the power that I have if I don’t like something, I can block it. But it’s probably more the case that someone is checking out TC39 for the first time for a few times, they look at how the committee operates and it would turn them off. + +MLS: There’s different personality types. I’m willing to speak up and get involved in the argument, but there’s other people that are more timid. And somebody like that that wants to bring a proposal, they can be put off by our single veto policy. + +MLS: The last thing I want to point out is that we need to acknowledge that our lone veto policy can hurt the relationships within the committee. Yes, we have competitors in the committee, I work for one of the browser vendors, and there’s other browsers that are represented at every meeting. And, you know, my company may see a slightly different view of how JavaScript should do its evolution and we have to come together from the diverse backgrounds. I work on the JavaScript engine and I write some JavaScript and it’s mostly test. I’m a C++ programmer. I need to hear what JavaScript developers want in the language. So we have to come together for the benefit of the whole community. That’s developers and implementers. Now, I don’t want to impugn at all the motives that someone may have in blocking someone although they may think there are past instances to question the motives of specific instances. For me it’s the impact of having a single person being able to block. So our current veto what I call power versus supporter power. 
Basically, one veto outweighs any number of supporters.

MLS: Facetiously, I said let’s put it in JavaScript; we understand it, or should. This is a way of representing how our current structure works. You know, each delegate has the same quote-unquote power when attending a meeting. Collectively we’ll say the delegates have a total power of one, so each delegate’s power is a fraction whose denominator is the number of delegates. But the vetoers also collectively have a power of one, with the denominator being the number of vetoers. And so as soon as there is a single veto among those attending, the vetoes win. And this is maybe more advanced than the JavaScript needs to be: if the number of vetoes is greater than zero, the motion is going to fail, whatever it is. I do want to point out at this point that according to Ecma bylaws, only delegates should be allowed to vote. An invited expert, I don’t think, should be considered as somebody who is blocking. We’ve been generous in that, but I just want to point that out, and that’s maybe a separate discussion to have.

MLS: So what I’m proposing is that we have a policy where we need 5% of delegates to block something, with a minimum of two. If you have 40 people, 5% is 2, and we are typically more than that. With fewer than 40 people, I would think the minimum of 2 applies. Why did I pick a minimum of 2? My theory is that if I were to block something on some kind of principle, I should be able to convince at least one other delegate in attendance that my reason for withholding consensus, or blocking, is reasonable, and they would support me. If someone can’t do that, then I think that’s a reason why they shouldn’t be able to block.

MLS: So once again, I put this in JavaScript; this is a set of instructions that describes it. But basically, we make the power of a veto and the power of a delegate equal, and we decide based upon some percentage. Like I said, if 95% or more of those present support something, it passes; less than that, and it fails. So this is basically my proposal. And I haven’t—I put 45 minutes; I expect there will be a lot of conversation. Translating this back to English: what I propose is that to block, or what we call withholding consensus, we need 5% of delegates or a minimum of 2 vetoes, whichever is greater. So I don’t see the queue in front of me, but let’s go to the queue and have some conversation.
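The slides expressed the rule in JavaScript; a reconstruction along the lines MLS described (illustrative, not the slides' exact code) looks like this:

```js
// Blocking requires 5% of delegates present or a minimum of 2 vetoes,
// whichever is greater.
function isBlocked(delegatesPresent, vetoes) {
  const threshold = Math.max(2, Math.ceil(delegatesPresent * 0.05));
  return vetoes >= threshold;
}

isBlocked(40, 1); // false: a lone veto no longer blocks
isBlocked(40, 2); // true:  5% of 40 is exactly 2
isBlocked(80, 3); // false: 5% of 80 is 4, so 3 vetoes do not block
```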
USA: Reminder that we have a little over 30 minutes. So let’s navigate the queue accordingly. If you permit, I’ll start in order. First we have JHD.

JHD: Yeah. So your presentation is two parts: the problem and the proposal. I completely agree with every aspect of the problem that you described. I wanted to talk a little bit about the benefits of our consensus process, in that I think that we are one of the best-functioning standards organizations out there, based on my experience in others and conversations with folks who have had experience in others. I think that is because our consensus process, assuming everyone is always acting in good faith, ensures that all of the—what’s the word I’m looking for? Each of us in here represents some percentage of the ecosystem in some way. Hopefully we have a hundred percent of the ecosystem covered in this room. That is probably incredibly wishful thinking, but hopefully it’s at least approaching that, and that’s the goal. That does not mean that everybody in the ecosystem has a conceptual representative in this room, because we don’t have a hundred percent. But the hope is that that is the case: that even if only one human in this room conceptually represents someone, that person has a voice by proxy. The consensus process ensures that majorities can’t overrun the minority. That has of course resulted in spec designs that aren’t ideal at times, but I think most of the time it has resulted in better specifications. That’s true in general for language design, but especially in JavaScript, with web compatibility to be concerned with: a much higher priority than getting things shipped is not shipping the wrong things, in the sense of that quote in software engineering, "no is temporary and yes is forever". It is safer to say no and iterate and think than to say yes, because we can’t walk it back. Conceptually that’s true in the majority of scenarios, even for a company’s product or a software product that is installed with a versioning system. Node, for example, has major versions and breaks things and drops things; that doesn’t mean they can actually remove stuff if enough people use it. But in JavaScript on the web, like, you know, the threshold is much lower for something to be unremovable. So I think we should acknowledge that even though all of the problems you describe are real, there are a lot of benefits. I was reading this quote the other night, actually; I think it’s from the Navy: "slow is smooth, smooth is fast". And I think consensus helps us go smoothly.

MLS: Let me counter with a couple comments. I’m advocating for consensus. We don’t have consensus. We have a single veto.

JHD: Sure. Let me rephrase. Unanimous consensus, hundred percent consensus.

MLS: That is unanimity.

JHD: Yes.

MLS: As far as us being one of the smoother-operating committees, I would disagree, because—and you’re probably aware, there are instances in the past where our single veto has been what I would call a code of conduct violation.

JHD: Absolutely.

MLS: Okay. And that’s not acceptable.

JHD: I agree.

MLS: Okay. And then I go back to the point that if I want to block something, I should be able to find one other person present who would agree with me. Even if they’re not from the same, you know, faction, or representing the same part of the ecosystem, they would agree with me on principle: yeah, you’re probably right.

JHD: So I would agree with you, except that one of the problems you cited is that not everyone has the personality type to stick their neck out and speak up. Those very people are going to be the ones that are not going to be standing up in solidarity with the otherwise lone veto. In fact, in practice, even in this room, which is arguably skewed towards people who will speak up, there are a number of times when I have been a lone veto and had three or four people privately tell me they support what I’m doing, but they just didn’t want to speak up because there was no need. Maybe this would create the need for the second person to speak up, or whatever inhibited them from showing solidarity in the first place would still present itself, and then a thing that should be blocked isn’t.

MLS: So I think it’s harder to be the first veto, and it’s easier to be the second, joining somebody.
I would hope that would be the case. Maybe we could work on that, encouraging people to do that. I find it a little frustrating that you’ve had instances where you blocked something and people afterwards in the hallway said "I support what you’re doing", without speaking up in solidarity or anything else.

JHD: There’s lots of frustration, including when I’m the one blocked by a lone veto.

RPR: I’m really pleased to hear the levels of agreement, with JHD and MLS acknowledging the same kinds of problems. I think I can speak for the chair group in saying that we have seen those kinds of problems as well. But I also want to speak to JHD’s point about the benefits of our process: in general, it does seem that the overall process we have today works quite well. The points at which the problems we identify become so acute that they get escalated to the chair group come up roughly every, say, 12 to 18 months. So this is not an every-day, every-meeting, every-item occurrence. The point at which this rises to what you might perhaps deem a code of conduct violation, when something is so outrageous that people want intervention, I think is on the less frequent end. Perhaps, Michael, what we’re trying to discuss here is something that is –

MLS: I’d say that this meeting, I don’t think we had a single blocked decision that I can recall. Maybe we have. But I think we have been following what I call true consensus this meeting.

RPR: This meeting worked well. Even in cases where only one person said they’re blocking, we have felt that was representative of a…

MLS: Yeah.

RPR: Thank you.

SYG [via queue]: Can you speak more to "smooth", comparatively to other bodies?

JHD: I’m not talking about the feelings and sentiments of those in the room. I’m also talking about the quality of the APIs produced.

SYG: I was wondering about, like, a concrete occurrence or something. What do you mean by smooth?

JHD: So I’m looking at the time spent beyond just the development and implementation of a feature, but also the adoption, education, and usage of a feature over a long period of time. And I think that JavaScript has a better track record with those things than other bodies that I have seen or experienced.

SYG: What is another body like that?

JHD: I don’t want to be too specific about the things that I’m maligning, because I’m trying to be diplomatic. But, you know, the favourite example that I have already stated publicly in the past: there’s a canPlayType API on the web that returns the string "probably", the string "maybe", or the empty string [https://developer.mozilla.org/en-US/docs/Web/API/HTMLMediaElement/canPlayType]. I haven’t heard anyone ever defend that API, and that’s a tiny example which I am comfortable mentioning.

SYG: By smooth, you’re judging the quality of the output of the committee?

JHD: Including that. I’m including the smoothness of uptake and usage and understanding of it, and the sentiment of the community about it over time. In other words, I’m suggesting that even though the current process produces frustration and interpersonal tension in this room, and while I would welcome a better alternative, the trade-off is that the world suffers less in return for us suffering a little bit more.

NRO: MLS already said this: it’s true that it’s difficult to block right now. If we required more than one blocker, then being the first person to block would be much easier, because a second person could support me.
I would not be alone in doing it. And my other point: you said "no is temporary and yes is forever". That’s a nice sentence, but it’s not true in practice, based on how we block things. Very often it happens that proposals get blocked, or at least slowed down, because they’re missing some feature; someone says "this proposal needs to be reworked because I need it to handle X", when the proposal could have gone ahead and that gap could have been filled later. I feel like half of the time we block things, it’s for things that actually can be fixed in the future.

PFC: I think JHD gave the example of being a lone objector and receiving messages privately later saying 'thank you for doing that, I agree, but I didn’t want to speak up.' I think when you publicly dissent or you veto something, there’s a certain social cost that you pay. I’m not sure that the system we have currently makes that cost be paid by the right people. So I think it’s natural that you pay a certain social cost; I think that’s correct. You shouldn’t just be able to veto things without any consequences all the time. But right now, let’s say if JHD vetoes something and other people would also like to veto it but don’t speak up, JHD pays the entire social cost and the others kind of get away with it for free. I think a proposal like this, which would force those others to also speak up and share the cost, lowers it for everybody. And I don’t see the problem that this would let things get through that shouldn’t get through. I just see it as the social cost being borne in a fairer way. That’s my opinion.

MLS: Let me add a comment to that. I think if a proposal is blocked, say from advancing to the next stage, and there’s a lone blocker, the champions are going to go to the blocker and say: what do we need to change? With what PFC said, now you could have more people with different concerns that the presenter or the champion was unaware of, and they would possibly be more aware of what they needed to do to modify the proposal moving forward.

JSL: Yeah. I definitely think the proposal here is good. I would like a little bit more. A lot of times people say: I don’t think this should move forward. That’s a statement of opinion. A lot of the time we take that and interpret it as a block. I actually think we should make "no, I don’t think this should move forward" more of a formal thing, right? You’re putting a motion on the table, and then it has to be seconded by somebody. And it has to be something very explicitly stated: I don’t think this should move forward. But even when the committee agrees to table this, the expectation from there is that the champions will work with the folks that are blocking it and try to work out a solution; and if those folks decide with the champions that there’s no path forward to resolve the block, then it’s considered a block, and this thing is not moving forward. If they can work it out, the expectation is they will work together to figure it out. As the final stopgap, the chairs should be empowered to be able to say: we have noted the objection; the committee seems to be, you know, not behind that objection; we’re going to move forward anyway. There’s problems with every approach.

MLS: Yes, there is.

JSL: But I think if we formalize it better than just two, I think we will have a much better result.

MLS: I agree. My biggest concern is I don’t like the current policy.
I think we should have something that requires a little bit more input from the committee to block things.

JSL: I will stress this is not unique to this committee. We have this exact same problem in Node.js. If we can solve it here, fantastic. I will take it over there too.

DE: Just to emphasize a problem that JSL mentioned: it’s often ambiguous whether somebody is blocking. This really happens all the time, that someone says "I don’t think something should move forward" and it’s ambiguous whether it was blocked or whether the presenter voluntarily took it back. Even if the chairs intervene and participate in the discussion, it’s ambiguous whether the chairs are mandating a procedural decision, or suggesting something that people voluntarily listen to. This allows us to have certain sorts of standing disagreements about what is actually going on procedurally in the committee. I really like JSL's idea of formalizing it some way or another and encouraging explicitness.

MM: So I used the queue a bit as an outline to remind me of all the points I want to make; excuse me for that. I will combine them all. So we are a social process as well as a well-thought-out process. It’s important to understand, and I think this is borne out by my experience on this committee, that the systems that enable human cooperation are some of them formal rules and many of them social norms. As far as the de jure rules we’re operating under, and the game theory that follows from the rules taken by themselves, you’re absolutely right: this is unanimity. I think it’s valuable that we don’t label it unanimity; our overall documentation of how we work does state a rule that is effectively equal to unanimity, but in the context of norms with a different flavor. The word unanimity has a denotation, and consensus has a denotation and a connotation. On the connotation, we’re close to consensus. On the rules, we’re absolutely close to unanimity. Specifically, when there’s a lone dissenter (and I’ve been the lone dissenter on a number of occasions, as you know), there’s a set of social norms that are very much felt by the lone dissenter, and I have often yielded, sometimes enabling the committee to make what are in retrospect obvious mistakes where I wish I had actually not yielded, like the override mistake. I was the sole dissenter there, and under social pressure I yielded, and I wish I had not. I have been on the other side: I pressured WH to [?] SharedArrayBuffer. He did not yield. He was the lone dissenter. This was before we knew about Meltdown and Spectre. He had looked ahead. We have been saved over and over by him as a sole dissenter on things everyone else was in agreement on. I joined the committee during the worst political days of the committee, when essentially everyone on the committee, except for Doug Crockford initially as the sole dissenter, had wanted to move forward with ECMAScript 4, including all of the browsers. And if we had operated under your rules, we would have accepted ECMAScript 4 and built on that, and JavaScript would be as useful to the web as ActionScript is. Obviously we have made mistakes on the other side too, which speak to your side of this thing.
But the point I want to emphasize is that the thing that overcomes what seems to be the simple game theory of unanimity is the system of social pressures. What it amounts to is that a strongly felt dissent blocks; a weakly felt dissent, given good-faith operation, is often overcome by social pressure, and the person yields.

MLS: Do you want me to respond to that?

MM: Yes.

MLS: I agree with what you’re saying. The thing is, with the social pressure, we have to take into account the different personalities that are involved in this social contract. And I would say that you have a strong—you have a willingness to speak up when you think something is wrong, or conversely when you think it’s right and others think it’s wrong. But I think that we have a disparity among those who have initiative in a setting like this. And I would put myself as more of a stronger one; I’m willing to give my opinion, right or wrong, at times. And so in the social contract we have to assume that there will be variability in the willingness to state positions.

MM: Absolutely. Variability in willingness to state, and variability in willingness to hold to a sole dissent position and block under social pressure to the contrary. And that variability has two sources. One is how deep the genuinely felt, good-faith technical objections in one’s head are, whether they can articulate them or not; and the other one is to what degree the person is responsive to social pressure. And there’s no way to separate those two.

MLS: I agree.

MM: Okay. So another part of the norms (not the rules, the norms) that comes from being a sole dissenter, and I’ve seen this again over and over again, is that it’s kind of your responsibility to explain why you’re objecting. And sometimes that can be hard to state, because sometimes objections are felt before they can be articulated, and still turn out to be valid; but there is still a strongly felt social pressure to explain what the objection is. Because what you’re trying to do is empower—this is another thing that I think is really important—you’re trying to empower the problem-solving ability of the entire committee, and especially the champions you’re objecting to, to figure out how to move forward by refactoring the proposal in a way that does address your objection. Because the objection is not to solving the problem that the proposal is trying to solve; I’ve never seen that. The objection is to how the proposal proposes to solve the problem. And over and over again, what happens when there is a sole dissenter who is able to explain why they’re dissenting (not always, but this is by far the majority of the cases) is that the problem-solving process is engaged, and often the proposal is refactored, you know, revised in such a way as to meet the objection, and the proposal is often better for it.

MLS: So I would agree with you in the cases where the objection is dealt with. There are—you would agree with me, there are cases where the objection stops the proposal.

MM: Yes, absolutely.

MLS: Even though the committee believes that the problem is a problem that does need to be solved. And I would stipulate that I think there are cases where we think that the proposal is aimed in the right direction. Maybe not perfect.
So I think we need to be careful about generously saying this give and take is good in all cases, because sometimes –

MM: So I think that’s again addressed best through norms, not by changes to the rules. Which is to say, everybody involved, especially both the objector and the champions, should be reminded by the overall social system that the objection is an objection to the way in which the problem is solved. I don’t think there’s ever been a case where a lone dissenter’s objection was to solving the problem at all, and the problem-solving dialogue should proceed from there. Sometimes it can’t be solved.

MLS: I disagree—there are times in the past where it couldn’t be solved.

MM: I won’t debate whether that happens sometimes.

USA: I have a point of order. There’s around 7 minutes remaining and a lot of items on the queue.

MM: Let me just make two more points. One is that the browser makers have a de facto veto anyway. And any rule like what you’re proposing does not solve the fact that each browser maker has a unilateral power to veto anyway. I will just mention ShadowRealms and decorators; they’re not—you know, if it were the case that all the browsers but one wanted to do it, and one browser maker was saying "no, we will not implement it", the committee would understand it is worse than useless, it is counterproductive, to move forward to standardize it. So if the browser makers want to go off and have a collusion among themselves as to what they will implement, ignoring the wishes of users of JavaScript, they’re free to do that. We can’t stop them. But they should stop pretending that they’re participating in an open process. Under the rule we have, we have an open standards process that empowers JavaScript users to have, by the rules, power similar to the de facto veto power that browser makers have.

MLS: So I don’t think this proposal makes that worse, right?

MM: Yes. Yes, it makes it much worse. It disempowers the community compared to the browser makers.

MLS: So the browser makers, if two delegates –

MM: If one browser maker declares "we won’t implement it no matter what the rules say", you cannot solve that problem. It’s dead if one major browser maker says "we will not implement it."

USA: Maybe go on with the queue. MLS, if you would like to ask for consensus by the end, we should probably also earmark some time for that.

DE: I don’t think that makes much sense.

USA: I don’t see why. Rob and Philip have responses on the queue, for instance, but there’s –

NRO: There are 13 items on the queue. We cannot reach consensus in five minutes on any of this.

USA: You mean about that? No, I mean we have four minutes now. We certainly cannot resolve this. Michael, what would you prefer? Have you finished with your comments?

MLS: I hit the major ones. One more thing. This is something that I mentioned last time we discussed it, but it is worth reiterating, and it feeds into a point that JHD made: with any rule system, the first thing people ask when you propose it is how to game it. Nine ways to game it come to mind. Given that any set of rules can be gamed, the real choice we have is: if the rule has a pathological outcome because it was gamed, does it fail safe or fail unsafe? Because of this "no is temporary and yes is forever", the rule we have got is the only rule that fails safe. And now I’m finished.

MM: Any chance of expanding the time box?

USA: Good question. I think we are booked for today. But let me ask my co-chairs.
Do you think that we could have –

CDA: It would have to be after lunch.

DE: We have the whole afternoon currently reserved for the break-out sessions that I proposed. I wonder if this could continue in a break-out session, or we could also make it an overflow item with the whole group? I would be happy with either one. I think this is an important topic to continue discussing.

JSL: We did tell the transcriptionist yesterday we are finishing roughly around noon.

DE: We should—80%. I would propose a break-out session or a plenary continuation.

MLS: I want the queue to be heard.

DE: Could we do that for half an hour or an hour, eating into the break-out session time, and everyone can go through the queue items?

USA: I think we should be able to do that. I think it’s up to you, Michael, if you would like it to be a break-out session alongside the other one. But I think you can talk to Rob in person, or to us online, and figure it out.

CDA: Is it possible, in terms of helping make this decision—are the break-out sessions going to be limited to the in-person attendees?

DE: No. We will have remote people able to attend the break-out sessions.

NRO: There are four people on Matrix asking for this to be a whole-group topic rather than a break-out session topic.

## Continuation: A unified vision for measure and decimal

Presenter: Shane Carr (SFC)

* proposals: [measure](https://github.com/tc39/proposal-measure/), [decimal](https://github.com/tc39/proposal-decimal/)
* [slides](https://docs.google.com/presentation/d/1050DHlNOzcN-8LqJQ_6z8j-LryXgEqOcLfcVzkhJyEk/edit#slide=id.p)

SFC: So I prepared these slides based on some of the feedback that we received when we brought this up earlier in the plenary; I reviewed them with the champion group and will be presenting them today. So let’s go ahead and get started. First thing: this is a great time to do a mini-announcement about a delta that we have made based on feedback, primarily from EAO and others, about the name of the type that was called Measure in yesterday’s presentation. Why Amount? It works for both units and currencies, and it strongly suggests something approachable and lightweight. I will be using Amount instead of Measure for the rest of the presentation. One thing that I feel we missed a little bit in yesterday’s presentation is that we weren’t aligned on the scope we’re proposing for the type called Amount. I want to talk a little bit about this. I wasn’t prepared to answer at the time; I have prepared the answer now. Why do we need Amount? Why is it motivated? Why is it important to have?

SFC: I went ahead and prepared a slide to summarize some of these key points. So one is that it represents a thing that many developers frequently have: a number paired with a unit. By representing this, we can offer useful operations on it; the better the data model, the better we can do. The second is that it fixes a certain specific problem that we have: if you take this thing and use it in multiple different formatters, they all need to know about the identity and the nature of the thing in order to behave correctly. I use the "1.0" plural problem all the time when I give these presentations. The Amount proposal addresses that problem by using the same type with, for example, PluralRules. Three is that it is a prerequisite for the messageformat proposal in the medium term. There’s some concern we’re working through.
+

SFC: But the messageformat specification recommends having this type in the data model, because when values get shipped around and then formatted, it’s a very common source of bugs: the message will say this thing should be displayed in the currency USD, but all of a sudden you go to some other country where it’s some other currency, the number gets displayed with the wrong currency, and bad things happen. So messageformat recommends this in the data model. And the fourth is the smart units proposal; I annotated that as longer-term. We don’t have full agreement on whether this is going to go ahead and land, but I wanted to put it here. It is also one of the points of motivation for having a separate type, because it means that the smart units proposal will be much narrower in scope. This reached Stage 1 at Tokyo TC39 [2024-10]; we agreed as a committee that this was a problem space worth exploring. So hopefully that answers some of the questions about Amount motivation.

SFC: Another thing that we didn’t really discuss in yesterday’s presentation is: what is Amount, and what does it actually look like? I drew a strawperson example here. On the left side is what you can currently do with an amount. If you have a value and a currency, you have something like this; let’s say it comes from some external source, a JSON object from the server or something like that. And then you plug it into Intl.NumberFormat, and this is what you have to do: you split it apart. The currency goes here and the value goes here. If you have precision, that also has to go here. And then you get the formatted value out, and hopefully it works. This is error-prone (I have evidence it’s error-prone), and hopefully that’s pretty obvious to people in this room. On the right is the version with Amount. You take the amount, which comes from some external source; now you have an actual Amount object that follows the protocol, and you can pass it into Intl.NumberFormat directly. There’s no possibility that currencies and values and units and things get out of sync with each other. So this is what I mean when I talk about Amount.
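A sketch of the contrast described on that slide; `Amount.from` and its argument shape are strawpersons, not a settled API:

```js
// Today: value, currency, and precision travel separately and must be
// manually kept in sync when formatting.
const data = { value: "1234.50", currency: "EUR" }; // e.g. JSON from a server
new Intl.NumberFormat("de-DE", {
  style: "currency",
  currency: data.currency,   // easy for this to drift out of sync with the value
  minimumFractionDigits: 2,  // precision is supplied separately too
}).format(data.value);

// With the proposed Amount (hypothetical API): the unit and precision are
// part of the value itself, so the formatter cannot receive mismatched parts.
const amount = Amount.from({ value: "1234.50", currency: "EUR" });
new Intl.NumberFormat("de-DE").format(amount);
```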
SFC: I also want to talk a lot about scope. I feel like there was a lot of misunderstanding yesterday about the scope of what I mean when I say Amount. The scope that I’m talking about is a data type that represents the following: a numeric value, the precision of the value, and the dimension, which could be a currency or a unit of measure. And it’s being proposed as an opaque type. The exact representation of the numeric value, the precision, and the currency or unit are questions that the champions will answer; what this committee needs to know is that this is the data model and the nature of the data model. The exact way these are represented is a discussion we will have in future meetings. Some of the functions it can have are definitely in scope: it should have a `from` or some type of constructor to be able to build it, and it should also have a toLocaleString to use it for formatting. Maybe in scope are ways to get the value out, and maybe an equals function, and add/subtract—maybe, maybe not; I imagine add/subtract might actually not make it in, and serialization may or may not. You have to be able to build it, use it, and format it to get a localized string out of it.

SFC: What I’m definitely not proposing is unit conversion. Some were saying that Amount with unit conversion is way too big in scope, so I’m proposing it without unit conversion. This is a natural place that unit conversion could be added in the future, but I’m not proposing unit conversion at this point in time. Another question that was raised, which I wanted to discuss a little bit, is polymorphic Amount versus decimal Amount. What I mean here is that Amount could be a type that takes an arbitrary numeric type (Number, Decimal, BigInt) and carries it with precision and dimension. Or it could be a type that always uses decimal, in order to interact more nicely with the decimal ecosystem. So my proposal here is, in order to basically not make this observable at this point in time, to keep it opaque enough that we restrict it to decimal semantics and keep this flexibility moving forward; so basically, to punt on this question. Now I want to talk about decimal/amount harmony.

SFC: This is another question that I don’t think got adequately addressed yesterday. I really wanted to have more time to discuss it, and now I have my time to discuss the opportunity space that we have if we think about these proposals together, in harmony: what are some opportunities we have that we don’t have if we think about them in silos? If we think about Amount by itself, as a silo, I think Amount is motivated; it still solves problems by itself. You might have a constructor called `.from`, following the Temporal example; that’s a thing we could discuss later. But you have a `.from` function that might take a value annotated with significant digits like this, and then you can use your NumberFormat on it and it will work. That’s fine. In fact, this constructor could also work in the harmony mode. Again, in harmony, you can do what’s on the right: you still have the decimal, you annotate it with things like precision, you then annotate it with your dimension, and what you get out the other side is an Amount that you can use for formatting. It’s very explicit what you’re adding to the data model and when.

SFC: The right side is explicit: the first step is you project your number into decimal space, and then you give it the precision and dimension. I think that Temporal has given us a really great example of how this exact pattern can be quite successful at building very, very clean, easy-to-follow, and easy-to-debug programs, as you saw, for example, with date and time, TimeZone, and ZonedDateTime. That sort of thing works well. I think that’s an opportunity we have by thinking about Amount and decimal in harmony. It gives us a unified framework for JS to deal with numbers. This is a great opportunity: basically, in the same way that Temporal solves or radically improves interaction with dates and times, this is a great way to improve how developers work with numbers. It also puts i18n front and centre, which is what I care most deeply about. By putting these data types front and centre, toLocaleString can do localization out of the gate, rather than developers splitting amounts apart in different places and so on and so forth; it puts i18n front and centre so that the easy thing is the right thing.

SFC: What I want to talk about next, and open up the queue to discuss today, is the motivation for these proposals. I say that if we feel that decimal is motivated, and we also feel that Amount is motivated, there’s no reason not to make them work nicely together. This is my position. This seems pretty obvious to me: if we think both proposals are individually motivated, we should make them work nicely together.
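A sketch of the two construction styles just described; every name here (`Amount.from`, `Decimal`, `withSignificantDigits`, `withUnit`) is a strawperson from the slides rather than settled API:

```js
// Silo style: build the Amount in one step (hypothetical constructor).
const a1 = Amount.from("1.20", { significantDigits: 3, unit: "meter" });

// Harmony style: first project into decimal space, then annotate precision,
// then annotate the dimension; each step is explicit about what it adds.
const a2 = new Decimal("1.20")
  .withSignificantDigits(3)  // hypothetical: yields a decimal-with-precision
  .withUnit("meter");        // hypothetical: yields an Amount

a2.toLocaleString("en-US");  // localized output via the usual Intl machinery
```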
+

SFC: This is another page of notes, and I can come back to it if people have questions. The harmony proposal could introduce a namespace; I’m not proposing that one way or the other. Could it go in the Intl namespace? Maybe it could; that is a discussion we can have. Someone on that side of the table raised a question about rationals yesterday. We don’t currently have a plan to support that; because of prior art and other things, I’d rather embrace decimal as the data model and its semantics. There’s an intermediate type on the previous slide; I don’t think it’s a good use of plenary time to discuss it right now, but it will definitely be something that comes up in the champions meeting.

SFC: This is the primary way I wanted to spend the remainder of the time box together: answering these two key questions. Is decimal ready for Stage 2? Is the Amount proposed in this slide deck ready for Stage 2? We have spec text for decimal; the decimal champions who have worked together for the last year have done a great job producing the spec text, and it’s quite sound and solid. Amount does not yet have spec text; I hope to change that, though not by the next time we meet. But I want to get at these questions: what are the remaining concerns we have about the motivations of these two proposals individually? And if we can agree that these are both motivated, then we should look at how we can advance them and make them work nicely together. That’s what I would like to discuss today.

KG: You gave the example of using Amount for formatting with Intl.NumberFormat. Can you talk more about what Amount does, what problem it’s solving?

SFC: This is the problem it solves; it solves the problem in the motivation. I can go back to this slide. Number 2 is an actual, real, concrete problem that it solves. In order to do things like reason out the plural form of the amount, you need to be able to know the entire data model of the amount, including the number, the precision, and the unit.

KG: I’m not convinced that this warrants a new type. I feel like it would be relatively straightforward to just make NumberFormat accept an object that has a type and an amount property, and not put anything in the language except that change to NumberFormat.

SFC: You’re advocating for a protocol-only approach and not a type approach?

KG: Yes.

EAO: Noting that one of the use cases that decimal in particular is solving is that currently we have decimal libraries in user space, and when they need to communicate with each other through JavaScript, using something like a string to represent the number can be problematic, for example because of concatenation. Having something like Amount would also support this sort of thing, effectively providing a JavaScript way to represent a numeric value without necessarily having a way of doing anything with the value, which is of course what Decimal provides. But the ability to represent a numeric value that is not representable by Number is a thing that Amount provides.

JHD: I think there’s intrinsic value in having an authoritative thing that libraries and user code can use to interoperate with. However, I don’t think that on its own should be enough to motivate any addition to the language. I think that should be a sweet bonus that we get with something else.
Otherwise, there are hundreds and hundreds of things we should add to the language: pretty much any time two widely used libraries share an object, you know, share a data structure, sure, let's add a new global, a new class and type. I'm slippery-sloping it a bit. But I don't think that needs to be sufficient, and I can wait until my other queue item before I say more about it.

SFC: I will reply on that thread: I don't buy the slippery slope. There is a problem in the i18n space, and there's an opportunity to solve it. If there are cases of common data types with i18n value, those should be representable in the language, and I think this is one of those cases. There is a limited number of such cases; Temporal answers a very large percentage of them. This is one of the remaining ones not answered by the language, in terms of objects that can represent things that can be localized.

NRO: This is not just libraries communicating with each other; it's libraries communicating with something built into the language. That makes a difference compared to libraries just trying to communicate with each other: part of the language would already establish the official way it should be done.

MM: Can you go back to the first slide of decimal and harmony? I missed Tuesday morning, my apologies. I want to understand `withSignificantDigits`: the thing that it produces is not simply a decimal, it's a new type, which is decimal together with some kind of precision; is that correct?

SFC: Yes. I annotated that, and for the purpose of the slide I call it "decimal with precision". I note also, two or three slides later, the last bullet point says the exact semantics are not decided, including whether it should exist or what it should be named. There was some disagreement even among the champions about this. I don't think this is a Stage 2 blocking concern, but it's definitely something we need to discuss.

MM: So my question is: what is it about the notion of precision introduced by `withSignificantDigits` that is in some way relative to decimal but not to regular, quote-unquote, numbers?

SFC: The notion of precision could be used for other numeric types. But, and this is me speaking personally, we have an opportunity with decimal, given that IEEE decimal gives us a way to encode precision in the data model. That sets decimal apart from the opportunities we would have with other numeric types.

MM: But the notion that's built into IEEE decimal itself is not precision: it's not significant digits, it's not digits after the decimal point, it's not error bars. It is number of trailing zeros, only zeros.

SFC: That's correct.

MM: As far as I can tell, I know of no use cases for which that is useful.

SFC: That is useful.

MM: Instead of 1.0, if the actual numeric value were 1.11111, you would render it out as deep as was needed to correspond to the finest precision of the underlying representation?

SFC: I would like to hear Nicolò's response.

NRO: Regardless of whether you store the number of significant digits, the number of digits after the dot, or the number of trailing zeros, they are all equivalent representations of the same concept. You can convert between them based on whichever one you're storing and the value of the number.
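A small sketch of the equivalence NRO describes, operating on plain decimal strings rather than on any proposed Decimal API; the helper names here are illustrative only:

```js
// For a decimal string such as "1.2500", fraction digits, significant
// digits, and trailing zeros all encode the same precision information
// and are interconvertible given the value itself.
function fractionDigits(str) {
  const dot = str.indexOf(".");
  return dot === -1 ? 0 : str.length - dot - 1; // "1.2500" -> 4
}

function significantDigits(str) {
  // Drop sign and decimal point, then leading zeros.
  const digits = str.replace(/[-+.]/g, "").replace(/^0+(?=\d)/, "");
  return digits.length; // "1.2500" -> 5, "0.0012" -> 2
}

// IEEE 754 decimal stores a (coefficient, exponent) pair, e.g.
// 12500 × 10^-4, preserving trailing zeros in the coefficient, so any
// of these counts can be recovered from the stored representation.
```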
MM: Are you suggesting that our decimal precision is, for example, number of significant digits or digits after the decimal point, and that we're enabling an implementation trick: we take what IEEE-754 considers to be the number of trailing zeros and reinterpret that aspect of the underlying representation not to mean number of trailing zeros, but instead to mean number of significant digits or something?

NRO: Yes. When we convert, instead of reading the IEEE representation as "there are three trailing zeros here", we can say "there are six significant digits", for example.

MM: And then the rendering we would do is to extract that from the underlying decimal representation and interpret it as a number of significant digits?

NRO: Yes.

MM: And then render it that way, rather than rendering it according to IEEE?

NRO: Yes.

MM: That's interesting. That's the first justification for this that I've heard that makes sense to me. Thank you.

JHD: So the question I pose is: what functionality does Amount provide beyond being a built-in container for multiple somewhat related values? While waiting, it occurred to me that an alternative name for this, one that would simultaneously convey my skepticism and its semantics, is "Intl NumberFormat options bag factory". It seems like that's all this is: a class for the purpose of wrapping an options bag to pass to NumberFormat. That doesn't feel sufficient to me. Does it do more stuff that I'm missing, besides that and providing an interop point?

[Slide: https://docs.google.com/presentation/d/1050DHlNOzcN-8LqJQ_6z8j-LryXgEqOcLfcVzkhJyEk/edit#slide=id.g3316773b416_0_5]

SFC: I'm putting up this scope slide because what I would like Amount to become is a thing that also does the other things listed under "maybe" and "future". I'll remove those from the proposal for now in order to seek consensus, given there was skepticism about things like unit conversion being in there. Removing unit conversion seems to make some delegates ask: why do we need this anyway? It's an interesting point, and I'm glad we're discussing it. There's a really good opportunity here to have things like serialization of these values, and I think equality of these values is also quite compelling. It's not just a NumberFormat factory; it's all of Intl, any object that can operate on these, not just NumberFormat.

USA: Quick point of order: Shane, there are around five minutes left.

SFC: We got started about two minutes late. I would appreciate the extra two minutes if possible.

USA: Okay.

DLM: I'd just like to second JHD's comment. I agree that what's being proposed is a potential solution for problems with Intl NumberFormat and MessageFormat, but I don't think it's the only solution, and I would encourage us to investigate other options that might not be quite as heavy-handed.

SFC: Are there any ideas that you have, any specific thing that would be less heavy-handed?

DLM: As JHD mentioned, an options bag would be one solution for the Intl NumberFormat use case.

EAO: This is a little bit more meta. Given that the proposal formally has only one champion, who is on leave, and it's being worked forward within a larger group, I was thinking it might help with the shaping of this if at least Shane, and possibly myself, could eventually be recognized as champions of this proposal. My interest here is indeed the "opaque amount" level of defining this; further work from there ought to follow on in separate proposals.
NRO: With BAN away for a while, I think we should have different champions. I'd like to have EAO and Shane as champions, with more time to drive the proposal and talk with people about it. But I would like to hear it straight from you, Shane.

SFC: I think it would be great, if Jesse had more time. I definitely see myself as an adviser and someone who puts together slides. Point of order: we need note takers.

DE: This is good; I'm glad we're getting more support for this proposal. Just to note in general: you don't need to ask for anybody's approval to add or remove champions; the champion group can do this. Happy to have more people working on this.

MF: It seems like Amount is supposed to cover this really broad association of a unit with some numeric value. As for Decimal, I'm supportive of that proposal for the scope it's supposed to address, but it is not so general that all values with units would be representable in decimal. KG brought up yesterday that it is common for non-decimal rational values to have units and be displayed that way. You simply cannot store a third as a decimal, 0.3 repeating, and have that be the same thing. These should be pursued separately and motivated separately, even if they are expected to work nicely with each other. I don't think we should be limiting Amount to just decimals in this way.

USA: That was the queue.

SFC: I just want to give anyone else the opportunity to jump on the queue. There were two questions I was asking, one about Decimal and the other about Amount, and we focused on Amount; that was the newer topic, so it makes sense that we spent a lot of time there. I want to give anyone else the opportunity to jump on the queue for Decimal as well.

SFC: While people are deciding whether to get on the queue, responding to the point about rationals: I hear you, and I think a later design would be able to add rational support here. I also think that problem doesn't have a lot of prior art, and it's not what I'm proposing at this time. We can talk more about that offline.

JSL: On the way the motivation for Amount is worded here, in the problems to solve: Decimal definitely feels very well motivated for the language; Amount, to me, right now feels more appropriately motivated for Intl and the discussions there. That's just where I'm at right now based on the comments.

SFC: This was a useful discussion. I think we're just about out of time. The champions will definitely continue to explore whether this can work with the protocol-only approach, and then consider it. Okay.

USA: Thanks Shane, and everybody else, for participating in the discussion. We cut it very close. That's great.

### Speaker's Summary of Key Points

* Some concerns about Amount being motivated if its only use case is Intl
* Requests to explore a protocol-based approach
* Questions about the representation of precision
* No delegate raised concerns about Decimal motivation

## Continuation: `Number.isSafeNumeric`

Presenter: ZiJian Liu (LIU)

* [proposal](https://github.com/Lxxyx/proposal-number-is-safe-numeric/issues/4)
* [slides](https://docs.google.com/presentation/d/1Noxi5L0jnikYce1h7X67FnjMUbkBQAcMDNafkM7bF4A/edit)

LIU: Yes, I'm going to start. Here is the problem statement for `Number.isSafeNumeric`. Since the last presentation I have received a lot of feedback; thanks to everyone. Here is a progress update. The first slide has the changes from the last presentation; there are five changes we made.
The first is to clarify the motivation and the real problem. The second is to remove the strict format rules by default and align with the ECMAScript StringNumericLiteral format. The third is to remove the `Number.MAX_SAFE_INTEGER` limit from value safety and add an identification of unsafe numeric strings. For more questions about the changes and feedback, see the GitHub issues.

LIU: I will start with the motivation. We have now focused on the real motivation: string-to-number conversion may lose the string's original precision and integrity. Most developers are not aware of this problem, even though it shows up on Stack Overflow and everywhere else, and I think it is a potential risk for apps. And third, there is no reliable method to detect precision loss; simply comparing against the string value runs into other problems. So we think we should provide a built-in method to help developers catch this problem earlier and choose the right parsing method. Here are the problems we are facing. The first is cross-system value mismatch. Alibaba built a mobile API gateway called MTOP; calling an HTTP API uses the JS SDK and goes through the gateway to the back ends. We have 100,000+ APIs, 200,000+ backend servers, and more than 1 billion calls per day. The problem is that Java has the Long type, whose numeric range may exceed what a JavaScript Number can represent, so we have to convert all numbers from the back end to strings in the gateway: the back end has a numeric value, but the gateway has to transform it into a string for this reason. Then, in JavaScript, every developer needs to do the string-to-number conversion manually, and bugs produced by precision loss happen every day. Every day I receive new questions about this: wrong values, error numbers sent to the back end, and so on. It's a problem we face every day.

LIU: The second is that sheets use decimal.js everywhere. DingTalk sheets (you can think of them like Google Sheets) allow users to create tables, and numeric values are stored as strings. When they are displayed to the user, or used in subsequent operations like formula calculations, the engineering team needs to do string-to-number conversion. Because string-to-number conversion may lose precision, the DingTalk sheets engineering team has to use decimal.js everywhere. But for just viewing a table, decimal.js adds extra JavaScript bundle size and slows down first-screen performance, because decimal.js must be loaded first. If a `Number.isSafeNumeric` method existed, then for many cases decimal.js would be optional and could be loaded dynamically.

LIU: The definition of `Number.isSafeNumeric` is now updated to follow the ECMAScript StringNumericLiteral format. Strings like "123", and strings with leading or trailing decimal points, are accepted; null, undefined, and some other formats are invalid. I think this makes it easier to avoid writing duplicate code.

LIU: Next is value safety. The update validates that the real-number value of the numeric string retains its original precision and integrity after being converted to a JavaScript Number. I'll list some examples: "123", some floating-point numbers, and numbers both below and above `MAX_SAFE_INTEGER` can convert to a JavaScript Number while keeping their original precision. In the invalid examples, when I try to convert the string to a number, the numeric value changes; those are the unsafe cases.

LIU: And I updated the identification of how to define "unsafe"; a rough sketch of the round-trip idea appears below.
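A minimal sketch of the round-trip idea behind the definition that follows. This is an illustration only, not the proposal's specified algorithm; the helper names are made up, and edge cases such as exponent notation and negative zero are ignored:

```js
// Illustration only: approximates "safe" as "the mathematical value
// survives a string -> Number -> string round trip".
function isSafeNumericSketch(str) {
  const n = Number(str);
  if (!Number.isFinite(n)) return false;
  const a = canonicalDecimal(str);
  const b = canonicalDecimal(String(n));
  return a !== null && b !== null && a === b;
}

// Naive canonicalizer for plain decimal strings like "1.20" or "-0.5"
// (no exponents): strips sign noise, leading zeros, and trailing zeros.
function canonicalDecimal(s) {
  if (!/^[-+]?(\d+\.?\d*|\.\d+)$/.test(s)) return null;
  const sign = s[0] === "-" ? "-" : "";
  s = s.replace(/^[-+]/, "");
  let [int = "", frac = ""] = s.split(".");
  int = int.replace(/^0+/, "");
  frac = frac.replace(/0+$/, "");
  return sign + (int || "0") + (frac ? "." + frac : "");
}

isSafeNumericSketch("123.5678");         // true: value round-trips
isSafeNumericSketch("1.20");             // true: "1.20" and "1.2" agree
isSafeNumericSketch("9007199254740993"); // false: parses to ...992
```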
In ECMAScript, the note on Number::toString says that if x is any Number value other than negative zero, then `ToNumber(ToString(x))` is exactly x. That gives us a definition: if `str` is a numeric string generated from a JavaScript Number x by toString, then `ToNumber(str)` must be x. So, in our definition, what is unsafe? Unsafe means that for a numeric string, during `ToNumber(str)`, due to IEEE-754 double-precision limits and ECMAScript rounding, the significant digits of `str` are modified in the conversion. That means `str` does not have an exact representation as a Number, and it cannot be converted back to the same numeric string; that is what makes it unsafe. Here is the formula I showed before; it is just a formula, not an implementation.

LIU: Also awaiting discussion: a better name for `Number.isSafeNumeric`. Currently we call it `Number.isSafeNumeric`, but the behavior might be expressible with a better name, and some people may propose a better name for the same behavior, perhaps something built around parsing a double. This is open for discussion. That's all. Any questions?

USA: There is a long queue and unfortunately not a whole lot of time left; I don't know if we can do a continuation of a continuation. But anyway, let's start with the queue in the meantime. First there is NRO.

NRO: Thanks for presenting this again. I find the motivation clear now. I believe the motivation is not actually about how numbers are represented as floats, but just about whether a string containing a number still round-trips and means the same thing to humans after going through a float. I believe that also probably answers KG's question, which was specifically about how floats are internally represented. I have a question for you: if we had Decimal as a built-in in the language, would this proposal still be useful? To be clear, I don't think we should block any proposal going for Stage 1 based on another Stage 1 proposal. But if Decimal proceeds, do you feel this proposal would still be motivated?

LIU: Yes, I think it would still be useful even with Decimal, because `Number.isSafeNumeric` is just for validation: when it returns false, it means you should choose a better parsing method, maybe BigInt, maybe decimal.js, maybe the Decimal proposal. The Decimal proposal can solve almost all of those problems, but a simple, easy validation method to check whether a numeric string is safe is still necessary.

NRO: My question was because I was thinking you would just not use floats; you would always use decimals if the numbers are not floats. I have a second topic in the queue, for the committee: I heard yesterday some interest from delegates in a presentation on how floats work. When we discuss these topics we find we talk past each other, because we have different understandings of how floats work. I'm not volunteering, but I strongly encourage someone to volunteer to prepare this presentation.

KG: So in your examples, you didn't include any strings which represent the exact decimal value of a float. For example, you have 123.5678 on the left. The exact decimal value of that JavaScript number is some 40- or 50-digit abomination. If I wrote out that 50-digit number, 123.5678000079 or whatever it is, would that string be accepted?

LIU: I see your question. You mean: if a numeric string contains more than 40 or 50 digits, can it be accepted?
KG: Not just a general string: if the string is in fact the exact decimal representation of a JavaScript number, not the representation that you would see when you call toString, but the exact decimal representation of the floating-point number that JavaScript actually has internally, should that string be accepted?

LIU: I think –

NRO: I think I have an answer, and I think the answer is that the string should be rejected, because the default conversion of the number to a string does not give you something that a human would read and say: okay, this is exactly the specific float value that was represented in the binary format.

KG: So the point is not just that when you parse it to a number it still represents the same value, but that when you parse it to a number and then serialize back to a string, the resulting string represents the same value?

SFC: Next on the queue. I discussed this a bit, to pin down more exactly what the formula is. I made an issue with a suggestion on the proposal repo that you can look at offline: https://github.com/Lxxyx/proposal-number-is-safe-numeric/issues/3

KG: Can you put a link in the Matrix?

SFC: I can put it in the Matrix.

PFC: Thank you very much for coming up with the problem statement and the illustrated use cases; I found that very clear. I support Stage 1. I had some suggestions, but you can skip the rest of my topic; I will post them on the issue tracker.

SYG: The spreadsheet use case seems strange to me, in particular the idea that if you can represent an input string as a float, then you use that as the representation. It seems to imply that if your initial input happens to be representable as a float64, you are opting into IEEE arithmetic on the Number, and if it is not representable, you are opting into decimal arithmetic. Those are different worlds, and it seems weird to me to make that decision based on the initial input string. Why do that?

LIU: Last year, when we proposed Decimal, it was my first time participating in TC39, and the problem I brought from DingTalk sheets was that I wanted to use Decimal to solve the decimal.js problem. After a year, the team found that some things can be easily solved with just Number and a numeric-string check; for a numeric string, you might use Number, or you might use something else. So we think that if we can handle some simple operations using just the Number type and plain JavaScript, without relying on decimal.js, that's a benefit for the whole system. And when the Decimal proposal arrives, we can write code that uses it and ship less JavaScript to the client. I mean, it's progressive: you always choose the technology that helps you run better code.

SYG: What I'm saying is: even if the representation is safe, `Number.isSafeNumeric` doesn't say anything about the operations, the arithmetic and other formulas you want to run on the number later. You could still accumulate errors if you keep doing IEEE 754 arithmetic. So why did the engineering team decide this makes it safe? It doesn't give you the safety property it seems like you actually want.

LIU: The engineering team said that if the `Number.isSafeNumeric` method exists, it solves their problem.
For many cases, say a read-only state, just rendering a table, or showing static values, decimal.js is optional; and for formula calculations, where precision loss matters, they can load decimal.js dynamically when they need it.

SYG: How do they know, for particular operations, whether they need to load decimal.js or not?

LIU: Like Excel, the sheet has a definition of the precision-loss problem: if a value exceeds the 15 significant digits of a double, that counts as precision loss. In those cases, numbers over 15 significant digits, they choose decimal.js, and they use it, for example, for simple division.

SYG: I will drop it there. I find it unconvincing that the safety stops just at the parsing; there is no kind of transitive safety property. It seems like the wrong kind of architecture, and I would not recommend `Number.isSafeNumeric` for this use case. For storing and serializing JavaScript numbers I would recommend [?], but in this case I find it strange.

LIU: This case is just for people who do not want to load an extra library, and who do not want to display wrong values. If such a check exists, it helps a lot. But if you mean arithmetic errors, those must be solved by the Decimal proposal or by some library. This is the engineering feedback.

KKL: Thank you, SYG, for drilling in on that. In particular, I agree that if the range of expressible values can only be captured by decimal, you do not really have a choice of whether to use decimal, and I don't think that is sound engineering. From my own experience: Uber has a JavaScript API gateway that receives traffic and disseminates it to Java, Go, and Python services. I'm extremely sympathetic, from that experience, to the problem that you're having. If I were to try to capture my own understanding of the problem statement in that domain, it would be that because JavaScript does not have numeric types identical to those of these other languages, it is necessary in many cases to resort to using a string to capture, for example, a 64-bit integer, datetime stamps, nanosecond-resolution timestamps, and of course decimals as well. I would say that a solution to this problem would take the form of JavaScript APIs for recognizing whether a string can be safely captured in a corresponding JavaScript type, or returned to the string format for that type. I would expect a solution in that space to be not a single `Number.isSafeNumeric` method but a range of methods pertaining to specific numeric domains: strings that capture int64s, strings that capture decimals, but not strings that capture float64s, since JSON handles that particular case fine. So my hypothesis is that the problem statement is: we need to find a way to improve JavaScript's ability to recognize value ranges, like int64 and decimal, that don't have native representations, so that applications have clearer APIs for interacting with those values without loss of precision when translating them in and out of local representations. And I submit to you that that might be something we could make progress on in Stage 1.

LIU: Yes, thank you. I had already considered this for the proposal; I think maybe a group of methods could help, but I chose one method because I did not know whether a set of methods would be too much, or whether we should just have this one.
So I just brought one method. Thank you for the feedback; I know what to do next.

USA: Next we have a response from SYG: he says he finds Kris's serialization use case compelling. That was the entire queue. Would you like to ask for consensus again, more formally? Let's give it a minute. I think we already heard a few folks mention that they were happy to support Stage 1. But just to be clear, let's –

GCL: I do not feel comfortable with Stage 1 at this time. It seems like there are still a lot of unanswered questions about what the motivation here is. I think what Kris suggested is interesting, but it is a fundamentally different proposal. There is also an active issue on this proposal about the motivation that is still going, and I would like to see that go somewhere before we move further with this.

USA: I'm sorry, who is that? Could you add yourself to the queue?

GCL: This is GCL.

USA: Apologies. On the queue we have Shane, then.

SFC: I still think this is strongly motivated for Stage 1, and I think the presentation illustrates some of those points. Being able to reason about what is safe to serialize back and forth between the different data types is definitely a problem I have experienced, and a problem I have seen others struggle with. The fact that people don't necessarily agree on what a safe number is, and ask "please explain more", means it is a complicated problem. The problem space is fairly clear: maybe the language needs some mechanism for making this determination, and the language currently doesn't have one. If so, we can support Stage 1. I definitely think this problem space is motivated.

USA: Next on the queue, we have a response from KG.

KG: Shane, if you think it is motivated, can you state what the problem is? I am still struggling to understand what it is.

SFC: Absolutely. You have a numeric-like thing represented as a string, and you want to represent that numeric-like thing as a Number. You want to do so without losing any precision from the string, and to determine whether it is safe to do that, given that the space of values a Number can represent is smaller than the space a string can represent.

KG: What does it mean to lose precision?

SFC: The value, as projected into decimal space, changes across the operation.

KG: Rejecting 0.1?

SFC: 0.1 would be retained, because projecting the Number back into string space retains the original value of 0.1.

KG: So it's not whether representing it as the Number loses precision, but whether representing the resulting Number as a string loses precision?

SFC: Mostly, yeah. I wrote up a more formal definition in an issue, but yes.

KG: Okay. I'm okay going forward with the problem statement of "I want to know if the string can preserve its mathematical value round-tripping through Number". That seems like a reasonable problem statement.

MF: That description right there is the first time I have understood a problem statement for this proposal. It's possible that that would make me okay with going to Stage 1 for it. I would like to see why it's useful to know that the mathematical value of a string representation of a float derived from the string doesn't change. But if we could do that, then I would be okay with Stage 1. Sorry that this was weird; my opinion is changing on the spot.

SYG: I'm unconvinced for Stage 1 at this time.
I think we have moved pretty far from the initial motivation: if we zoom out, we have moved away from the motivation in LIU's presentation yesterday, which was about validation. And I feel more unconvinced because one of the motivations here is this unsound architecture for choosing between arithmetic on decimals or floats. So I want us to have a tighter formulation here. As a committee we have back-formed a problem statement that we can understand and that sounds reasonable enough to convince ourselves we can go to Stage 1, but I'm not at all convinced that that is the problem LIU is trying to solve. I don't want us to explore a problem the champions don't want to solve, and hand them a thing that addresses their actual problem in an unsound way; that is what worries me about the spreadsheet use case.

MLS: I have similar concerns. The motivating example here was input validation, and it doesn't help with what comes after: you can input a value correctly, but that doesn't help with further calculations. KKL did talk about being able to send data between applications, and I think there could be a use for that. I think we could go to Stage 1, but there needs to be a lot more work on why this should exist.

NRO: I think the motivation is now clear, and I don't think it changed since yesterday; it's exactly the example with the mathematical values in the slides yesterday. Maybe it would help the committee to have more examples of how to use this: actual code examples, maybe from real software, where you show what the code would do if the check passes and what it would do otherwise. That would help with communication on this proposal. I'm finding it motivated enough for Stage 1.

SFC: To add to the question of why you would want to do this: the simple answer is that f64s are, in general, more compact than strings. Often, when you're storing in a database or sending over the wire, you want to send an f64 because it is a more compact representation. You want to be able to verify that the decimal value in your string can be round-tripped through the compact floating-point form. I think that is why you run the operation; the operation is sound, and some problems need it. That's why the operation matters.

SFC: I think I'm next on the queue. Someone asked earlier: doesn't Decimal solve this? I think Decimal does maybe solve it. Decimal still has limited precision, but it could be considered a better vehicle than f64 when you're trying to serialize strings to a numeric type; you might want to reach for Decimal instead. I think there's still room to explore how this problem could be solved in a world with Decimal. I still think the problem space is motivated enough for an exploration phase, for Stage 1.

MF: I'm now starting to understand that this proposal is more about the representation of a float as a string to a user. It seems, then, that this proposal will bring into scope some of the looseness we currently have about that representation. Assuming we can fully define that space, that should be okay. It is also a bit weird that we sometimes represent floats not in decimal notation but in scientific notation. That's an arbitrary decision, because we chose a certain number of digits that we thought would be okay, like 30 years ago or whatever, and that really has nothing to do with this. That's kind of a bit weird.
I think it will be a stumbling point for this proposal. That is stuff that could be investigated during Stage 1. At the moment I'm not opposed to Stage 1.

USA: Oh, you aren't. I believe there are still people who are opposed to Stage 1? Let's clarify that, to see if there's any path forward for Stage 1 at this meeting.

USA: Yes, GCL, you're on the queue.

GCL: Sort of like MLS, I have heard some alternative problem statements that make a lot more sense to me than what has been presented so far. If we were to iterate on those before coming back, so that that is what the Stage 1 consensus is asking for, I could see that being acceptable.

MLS: I'm not going to block Stage 1, but I think the motivation here is fairly thin, and there are some issues with what this API wants to promise. The bar for Stage 2 is going to be much higher; unless there's significant change, I don't see it advancing to Stage 2.

SYG: I'm still uncomfortable with Stage 1. Like I said previously, I find KKL's motivation and problem statement clear and compelling, and I'm happy to explore that. But I don't want us to give Stage 1 to a proposal via a problem statement that we came up with for the champion. If we independently reached that point, that feels like a different proposal to me; we should do that, and go through Stage 1 for that proposal, instead of saying "oh, actually this proposal's problem statement is this other thing" and then advancing this one to Stage 1.

USA: I see. So to be clear, SYG, would you withhold consensus for Stage 1 at this moment?

SYG: I would.

USA: Okay. All right then, we don't have consensus for stage advancement. For the next time this comes to committee, I would implore the champions to engage with everyone who participated today, and others in the committee. You heard a lot of statements of support, so I think this could go to Stage 1 at a later meeting. Thank you, LIU.

CDA: I was on the queue with a quick reply. I just wanted to state, for LIU, that it sounds like there is a path: folks are uncomfortable because they're not seeing a unified vision of what the problem statement is. If you can nail that down between now and the next plenary, then there's potentially a path forward for this to advance to Stage 1.

LIU: Thank you, everyone.

USA: Thank you. Would you like to go to the notes and add a summary of the discussion that happened earlier?

LIU: Yes, thank you.

RPR: Specifically, maybe GCL and MLS might be able to contribute to the consensus summary.

### Speaker's Summary of Key Points

* Clarified the motivation and the real problem
* Updated the value safety definition

### Conclusion

* Consensus for Stage 1 was withheld

## Language design goal for consensus: Things should layer

Presenter: Daniel Ehrenberg (DE)

* [slides](https://docs.google.com/presentation/d/1Nj6E1h0SeyDGI3e8BQlATQeX-l6x4Jx7uGAM8XimfIM/edit#slide=id.g329dc435965_0_344)

DE: I want to talk about a potential language design goal, which is that things should layer. Here on the slides is a beautiful layer cake to illustrate that concept. The idea here is to bring language design goals explicitly to the committee for consensus, so that we can establish a kind of shared basis for doing design. This is something that YSV proposed we do some years ago, and I think it's a great idea.
DE: Concretely, the proposed idea here is that things should layer: you have sugar on top of core capabilities. Most features are syntactic sugar; they could be a transpiler pass or an npm module, layered on top, while some features are new capabilities that cannot be layered on top of anything else.

DE: I'm not talking about the JSSugar/JS0 presentation made at previous meetings; that's a separate conversation. I'm just talking about the single language, JavaScript/ECMAScript, that we currently define in TC39, and saying that it should have a layering within it: more of a logical, editorial layering than necessarily two languages. That's a separate conversation. But still, it was good that that was raised, because it gets at some of the underlying design points that are important to discuss regardless of whether it's one language or two.

DE: So, a question: when should capabilities be added? I think the answer is when the capability is really the goal of the proposal. An example is Temporal. Temporal adds two capabilities: higher-precision access to the current datetime via `Temporal.Now`, and access to the TimeZone database that the browser, that JavaScript, has. But most of Temporal could have been implemented as a library, as "sugar", without any new capabilities. So Temporal has both sugar components and underlying capability components.

DE: Another example of a capability that is maybe a little ambiguous is the temporal dead zone (TDZ), where a variable defined with `let` or `const` throws on access before the definition is reached. This implies a new capability: implicitly, to perform this check efficiently, which actually no one consistently succeeds at. When this feature was being designed, it was kind of assumed that it would be possible for engines to optimize the checks out, and we previously heard a presentation by SYG about the possibility of eliminating these checks, at least in certain cases. Is this a core goal of the let and const features? I'm not really sure. I think the lexical scoping part might be more core, but the question of whether TDZ is core could go either way.

DE: There are other cases where capabilities would be pretty accidental. One of these, and again I'm stretching the meaning of the term "capability", but this is kind of core to the argument, is `Map.prototype.getOrInsertComputed`. This was a proposal where KG suggested it be coupled with a check to make sure that things didn't go wrong with the structure during the callback. Effectively, even though this could be polyfilled, it requires kind of taking over things; it's kind of a capability. We decided no: it's not the core goal of the proposal, and it adds extra complexity.

DE: The other one I would call out is pattern matching, where the match proposal currently includes a caching mechanism to make sure that properties and iterators aren't read multiple times by multiple match statements. That implies a new engine capability to magically make this efficient: some capability to not actually create the cache map, but still do the optimization.

DE: So when someone expects the JIT to have a new ability to make things fast, that corresponds to a capability. But engines just aren't magic. I'd actually say that build tools and bytecode interpreters have similar constraints and similar optimization capabilities in the general case. Sometimes they're able to optimize things, but you really don't want to have to rely on that.

DE: They're not magic.
Both systems aim for spec conformance. On the limits of build tools: although some build tools can operate on the whole program, many of them operate on a per-file basis, so they don't have access to cross-module analysis; often they're working just within a particular function, though not always. The semantics that build tools ascribe to JavaScript are simple and deterministic, at least for the features that are supported across everything. When there are optimizations, they are mostly local and about preserving semantics, not about giving statements new meaning. Also, build tools are poorly funded, so it would be difficult for them to maintain a higher degree of complexity. They have to operate at this simpler, local level, and they have to conform to the semantics. Bytecode interpreters need to do the same thing.

DE: JavaScript engines these days, at least the ones in web browsers, tend to be based on bytecode interpreters. There are some JavaScript implementations that are not of that form, but this is at least one environment that we have to make sure the language works well in. Generation of bytecode is file-by-file or function-by-function, so it's somewhat fine-grained; pre-parsing makes it finer-grained. It also cannot rely on broader analysis. Even within that unit of granularity, it has to be fast and simple: when you're generating bytecode you can't do complex analysis; that has to wait for further executions to trigger the JIT. And it has to be possible to produce the semantics locally. Bytecode interpreters need to support all of the language, and we don't want to fall off to some complex tier just because some different language feature was used. Another reason simplicity is important is that more bytecodes mean more complexity downstream in the JIT, and just more things to implement. This implies, to me, that syntax features should, when possible, desugar easily into efficient JS, and not rely on intelligence from either build tools or bytecode interpreters.

DE: This leads to two possible statements for consensus: encouraging that things should layer, and that when capabilities are not the primary goal, we should do things that can be implemented in terms of other things. For libraries, here is one possible wording; I haven't wordsmithed it much and would be interested in your input: library features should by default be implementable accurately in JavaScript, given the assumption of an original built-in environment, unless the goal is a new capability; if a new capability is exposed, this should be deliberate and well understood. For syntax: features should by default be expressible via desugaring into existing JavaScript syntax features, completely accurately; where desugaring is not possible, we should understand that the benefit of that aspect of the semantics is worth its cost in terms of complexity for the developer mental model and for implementations. I think we've mostly been designing in alignment with these principles, but somehow it's felt a little out of scope to argue for them directly.

DE: Sometimes, discussions in TC39 proceed with the understanding that we shouldn't spend too much time thinking about the tooling implementations, because later there will be the native implementations. We've been using that argument for a while.
In the JSSugar/JS0 presentation, that was flipped on its head: it was suggested that we shouldn't put things in native engines because tooling could potentially do much more complex and advanced things. I don't think either of these is true. We should go for features that are as simple as possible, and "simple" would mean they layer on top of other things. So, thank you.

SYG: As a quibble on the previous slide, slide 6, about bytecode: you know my position on this, but I want to highlight for the room that, from my point of view, the limits of bytecode interpreters (I agree with your characterization of those limits; even the JITs, and certainly the bytecode interpreters, are not magic) are externally imposed, basically by performance incentives. All the browsers want to be fast, in particular fast at loading web pages, because web pages are these ephemeral things, not long-running applications; exceptions exist, of course. Because of that, any optimizations we do have to pay for themselves. If they don't pay for themselves end-to-end compared to naive parsing and execution, why would you optimize?

SYG: That throws a whole class of optimizations and analyses out the window, because of the pressure to compete on loading performance or else lose users, et cetera. Some of that, I agree, also applies to tools. A lot of them compete on the build step, the actual running performance of the tools themselves; I hear people complain about some bundlers being slower than others, and that being the reason to switch to another bundler. But it feels to me that that space is more open in the ahead-of-time tooling world.

SYG: There, the constraint is not externally imposed, and tools could have a different goal: trading the performance of the tool itself for generating better, smaller, more optimized code. I understand that is not a space a lot of the JS tooling competes on, but there isn't the same kind of external pressure. If nothing else, this is what I see in every other AOT language space; there's a reason clang and GCC have O1, O2, and O3 and use them for different use cases, not always competing on generating code the fastest. Sometimes you want to take the time to generate the most efficient code. We never really have that full luxury in a browser engine.

DE: It's not always about execution time; there's also the complexity of building and maintaining these systems, and the need to compose them. I think we should consult the authors and maintainers of these tools when understanding what they could do in the future, or if someone spins up a new tooling effort from different groups, we could work with them. Until tools that are more advanced in the way you're alluding to start existing, we should probably design for the existing ones. Regardless, I argue we shouldn't add features that don't work for either of these two cases. Shall we continue with the queue? Is somebody running the queue, or am I supposed to be running through it?

NRO: Every time I hear people mention how much tooling could help JS code, I note that it is very difficult, because of how JavaScript works. People who work on browsers know this: the JIT can just assume this function will take a number, but then it needs to bail out in some cases and go back to the original bytecode.
And tools cannot just bail out: once the code has shipped, they cannot load some different version of the code. That is the reason there are no tools doing these advanced optimizations, even though people have tried; it's not possible unless you restrict what JavaScript your users can write to some subset of the language.

DE: Maybe the Closure Compiler is an example of –

NRO: Closure has a lot of restrictions on what JavaScript you can write.

SYG: As a quick response to that: it is true that anything type-directed is generally infeasible as an AOT optimization in JS, but we have seen impactful innovation in the AOT optimization space, like tree shaking. That is not something that engines can do.

DE: Tree shaking is great, and it's an example of an optimization that doesn't change semantics. That's what I have as my third point under the limits of build tools. If we add language features where, to get the right semantics, you need to do some advanced analysis, that's completely different from an optimization that doesn't change semantics, where you do some further, optional analysis.

ACE: I think tree shaking is great because it shows the power of the parts of the language that are static; that really helps tooling. Yes, tools can also tree-shake CommonJS, but generally a lot of the wins come from ES modules, because there are more static guarantees; similarly, when we were presenting records and tuples, the syntax there was providing static guarantees. Not to pick on a proposal, but to pick on one: when we talked about pattern matching, or anything that adds symbol protocols: symbol protocols are kind of the opposite of that, in that even if you can see the class and see that it adds the symbol, you don't know whether that method will be monkey-patched with a completely different implementation, unless you can be sure that the prototype is frozen, and it may be difficult to know that the prototype is frozen. We're not doing only one or the other in general, but there's a big difference between the static parts of the language and the dynamic parts when it comes to tooling.

DE: Let's have more statically analyzable things, when it works out for the design of the thing we're working on. That's another possible goal that we could document.

JSC: Just a quick question on the last slide's statements for consensus, for libraries. Scope-wise, by "library features" you're talking about standard built-ins?

DE: Yeah, sorry. That's referring to the built-in functions and classes in ECMA-262.

KG: So for the syntax statement, I have I guess two quibbles. The first is that I'm not at all sure that by default syntax should be sugar. I think that exposing new capabilities is actually one of the best reasons to add syntax. I'd like the bar for adding new syntax to be pretty high, and sugar doesn't usually meet it, whereas new capabilities are the most likely to meet the bar for being worth doing. So I'm not at all sure that I want to say we should by default assume syntax features should be sugar. The second quibble is that even things that are desugarable ought to be understood to have a cost in terms of complexity for the developer mental model and, to some extent, implementations, and that is true whether or not the feature exposes a new capability.
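To illustrate what "pure sugar" means in this discussion, here is a standard example of my own choosing (optional chaining, a shipped feature; it was not one of the examples cited in the meeting):

```js
// Optional chaining is (almost entirely) expressible as a local rewrite
// into older syntax, which is what makes it easy for both transpilers
// and bytecode generators to handle without whole-program analysis.

// Sugar:
const street = user?.address?.street;

// One possible desugaring (glossing over details such as short-circuiting
// the whole chain at once and the `document.all` special case):
const street2 =
  user == null
    ? undefined
    : user.address == null
      ? undefined
      : user.address.street;
```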
NRO: When we talk about syntactic sugar, do we consider, for example, the using proposal to be sugar?

DE: We consider that to be just –

NRO: It's an example of something that is very easy to transpile.

DE: So I think there's some kind of intersection of the statement that you're making and the statement that I'm making that would be valid. Basically, when we add new features, we should not add features with random edge cases that make them harder to desugar when there's no case for those edge cases. That's what I'm trying to say, and we don't need to affirmatively state either way the value of sugar features.

KG: That sounds good to me.

DE: And if using adds a tiny new TDZ to make it better, I would say no.

KG: It does. Have you been following the discussion about classes and switch statements?

DE: I wasn't sure how to treat that in this presentation, so I didn't mention the details. That's why I mention either past or future things. Let's consider that a valid counterargument to those edge-case semantics. It shouldn't be that we say: well, you know, it can be done correctly in an engine, so it's fine. It's an advantage if it's more easily desugarable.

KG: Yes. I am definitely willing to sign on to a statement that, for a future feature whose primary end is not the introduction of a new capability, it is best if it is pure sugar instead of sugar with some edge cases.

DE: Great. I think this leads to a clean refactoring of both of the statements: when a feature is not motivated by adding a new capability, it should be expressible via desugaring into existing JavaScript syntax features, or otherwise the benefit needs to be understood to be worth the cost. I think that refactoring could be done for both of those statements.

KG: That sounds good to me. I think we were talking about composite keys, speaking of modifying the semantics of every Map object.

DE: Right.

KG: That is not a case of clean desugaring; it modifies existing things.

DE: It's a new capability. I would consider these new capabilities even in the cases where you can, in principle, express them in JavaScript, if you can't express them well enough. For the first one, "well enough" means you would have to replace all of the existing Map objects; for match, "well enough" has to do with having to instantiate that extra cache map. So it's not just about whether, Turing-completeness-wise, you could express it.

KG: Okay. With the understanding that something like composite keys would be a new capability, to be evaluated on the basis of whether the cost of the new capability is worth it in terms of the developer mental model, and that things that are not intended to be new capabilities ought to be pure sugar –

DE: Yeah.

KG: I'm willing to sign on to such a statement.

DE: Awesome.

MM: First of all, let me just mention that this conversation just now between KG and DE covered very well most of what I had to say, so I'm very much on board with all of that. I think the way to think about this is that everything is a trade-off. This is not making any hard-and-fast new rule; what it's doing is making explicit an additional preference ordering to take into account in making these trade-offs.
And in particular, what it's saying is: substantially demote anything that is accidentally not desugarable. If something is anything other than sugar (I'm phrasing it in syntax terms, but it actually covers both), anything that is not decomposable into the existing language should have good reasons for not being decomposable into the existing language. Now, I want to refine a bit the nature of the preference order. Desugaring can be more or less syntactically local, and I would add to the preference ordering that desugarings that are more syntactically local are preferred to ones that need a less local transformation of the syntax. I'll give two examples. Generators, async functions, and async generators are local to the function they occur in: they are basically equivalent to a CPS transform of the function they occur in, but unlike cooperative concurrency with full stacks, they do not require a general CPS transformation, even thought of as a gedankenexperiment, of the program as a whole. A further, less local transformation is top-level await, which requires a transformation of the module as a whole, the top level of the module as a whole. In both cases, I don't expect implementations to implement these via desugaring. That points at another dimension of the preference order: there are two motivations for not accidentally defining something that cannot be desugared. One is efficiency. The other is not making the fundamental semantics of the language more complicated. Because async functions, generators, and top-level await can be desugared, even if for implementation-efficiency reasons nobody would implement them that way, the fact that they can be means there is a certain level of fundamental semantics of the language that is not being changed by those concepts.

JHD: I don't think top-level await can be desugared in that way.

DE: I agree that top-level await can be read in both ways; it becomes a little philosophical. Ultimately, this wording could have been used as an argument against top-level await if we were considering it again, because the argument would go: most people could get by with a further entry point, putting the awaiting in a nested module. For Bloomberg, the reason for it was the new capability. It really does change how the module graph works –

MM: You're correct. I will make it a counterfactual example. I think you understand the nature of the example.

DE: I agree with all of your points. If anyone is familiar with theoretical linguistics: optimality theory is about having an ordering of constraints and picking the thing that is most optimal with respect to that order. I'm not sure we should structure our thoughts in committee that way, but it's a great way to think about it.

MM: I think it's worth re-emphasizing something that I think you and KG agreed to: having avoided accidental non-desugarability, the requirements, especially on syntax, for both sides of the dichotomy (things that can be desugared, and things that for good reasons cannot) should both have a very high bar, but for different reasons. Neither one is to be preferred over the other; rather, both are to be preferred over something that can't be desugared for accidental reasons.
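To make MM's locality point concrete with a standard example (the generator-plus-driver pattern below is the well-known transpilation strategy for async functions, shown here as an illustration rather than anything presented in the meeting):

```js
// An async function...
async function getUser(id) {
  const res = await fetch(`/users/${id}`);
  return res.json();
}

// ...can be desugared locally, entirely within the function boundary,
// into a generator plus a small promise-based driver. No transformation
// of the surrounding program is required.
function getUserDesugared(id) {
  return runAsync(function* () {
    const res = yield fetch(`/users/${id}`);
    return res.json();
  });
}

function runAsync(genFn) {
  const gen = genFn();
  return new Promise((resolve, reject) => {
    function step(verb, arg) {
      let result;
      try {
        result = gen[verb](arg); // resume the generator
      } catch (err) {
        return reject(err);
      }
      if (result.done) {
        resolve(result.value);
      } else {
        // Await: settle the yielded value, then resume.
        Promise.resolve(result.value).then(
          (v) => step("next", v),
          (e) => step("throw", e)
        );
      }
    }
    step("next", undefined);
  });
}
```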
SYG: I wanted to call out that there is a tension here. Even though I'm very supportive of the direction (it would be nice to have standard libraries that are actually specified as literal JavaScript), literal JavaScript comes with a bunch of hooks, because that's just how JS works, and that is often in tension with optimizability and making things fast: more hooks mean fewer guarantees, for all the usual reasons. So maybe, if we try to go in this design direction, that's reason enough to motivate more language features for robust code, things like the motivation for getOriginals, even though getOriginals is problematic for different reasons. This is the same reason why, if you look at the browser engines, we all have weird little DSLs for writing built-ins, even the ones that are self-hosted in JavaScript. The minute you self-host, you will discover that it is not a good idea unless you basically make a different DSL that looks kind of like JS.

DE: I agree it would be valuable if someone were able to solve that problem. I want to ask for conditional consensus on these statements, with a reordering that we would resolve in more detail offline: rather than saying "by default", we instead phrase it in terms of features that don't add capabilities, and conversely say that when we add capabilities, it's for a reason. For the group of people who want to participate in the wording details: you can raise your hand, or speak up later on the issue tracker, and we can develop this online. Would people be up for that conditional consensus?

KG: Quick response to SYG first. When I say "desugarable to JavaScript", I basically mean desugarable to JavaScript assuming no one has messed with the built-ins. I think that's how most users understand it, and how we should understand it to mean here.

DE: That's what I wrote, also.

KG: Where is the comment about not messing with built-ins? Ah: "given the assumption of an original built-in environment".

DE: SYG has a clarification question.

SYG: One thing about the syntax side here, less so the libraries side: it does not say anything about whether something ought to be implemented in a native engine or in a tool. I don't disagree with the general statement for how we should design features, but are you implying, as part of consensus here, that by layering things this way they ought to also meet the bar to be implemented natively in engines?

DE: We don't currently define multiple languages. At the point when we get consensus to have multiple languages standardized by TC39, we could consider such questions. This pertains to the single language that we standardize.

SYG: I see.

DE: If engines are not going to add something, it won't become part of the language standard. That's our current practice. We could consider other presentations, other proposals, about changing that.

SYG: So I want to be very clear: if I give consensus to that statement about syntax, then if we design a syntax feature that is pure desugaring, that significantly lowers the likelihood that we would want to support it natively in engines. I don't think that's bad design, but it kind of anticipates my bringing up the two-languages discussion again.

DE: Great, I look forward to that future discussion. Under our current process, if engines refuse to implement something, it will not become part of the language. So I look forward to that future discussion.
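A small sketch of what KG's "assuming an original built-in environment" buys a self-hosted or polyfilled library. Capturing originals at setup time is a common defensive pattern today; this is an illustration of the general idea, not the getOriginals proposal itself:

```js
// Capture originals before user code can patch them. A library specified
// against "the original built-in environment" behaves as if these were
// still the live bindings.
const _push = Array.prototype.push;
const _call = Function.prototype.call.bind(Function.prototype.call);

function robustPush(arr, value) {
  // Unaffected by later monkey-patching of Array.prototype.push.
  return _call(_push, arr, value);
}

// Later, user code mutates the environment...
Array.prototype.push = function () { throw new Error("patched!"); };

robustPush([], 1); // still works, returns the new length: 1
```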
DE: Could we have just five more minutes to go over the rest of the queue?

CDA: There are only two items. One is just an end-of-message: MM says +1 on general direction, holding back on current wording.

DE: I would like to ask: is this something that you want to iterate on the wording offline, or confirm at a future plenary?

MM: Iterating offline is fine.

NRO: What does it mean for us to have consensus on these two things? When we advance a proposal, would we have a check that advancement matches this, as part of the consensus process?

DE: I don’t think we need more checklist items. Instead, this statement is an admissible argument the next time something comes up for discussion. If someone feels it is a relevant point to bring up, they can say: remember, we agreed on this design goal. This is the reference point. I think this applies to –

NRO: It is setting a precedent, in some way?

DE: I don’t like that wording, but sure. Do we have conditional consensus, with the wording to be worked out offline? Who explicitly supports this? Chip and MM give thumbs up, as do OMT and LCA. I think that’s consensus, unless there is another interpretation.

CDA: Unless anybody speaks up in opposition.

DE: Any non-blocking opposition or points of concern? Points to think about?

CDA: Not seeing anything. All right. Thank you, Dan.

### Summary

* DE argued that:
  * Most features should “layer” on top of existing features, and only some add new “capabilities”
  * When a capability is added, it should be because that’s the actual point of the proposal, rather than just being an incidental choice
  * When it comes to syntax features, bytecode interpreters are under similar constraints to transpilers. Both have faced expectations, from people not directly involved in them, that they could perform non-local optimizations reliably, but this is not the case. Instead, both benefit from simpler, locally analyzable/desugarable designs.
  * Proposal: Most features should layer, and the ones that don’t should be adding a capability for a reason.
* (Summary of main discussion points)

### Conclusion

* Conditional consensus on a modified version of the statement: rather than asserting that there should be many sugar/library features, the design principle statement should focus more on the negative:
  * When new features include new capabilities, this should be for a particular reason.
* This design principle is not an entry on a checklist or a requirement for stage advancement, but rather a reference point for future discussions, a permissible argument in committee.
* Delegates to collaborate on GitHub to finalize wording, including review from MM, SYG, KG, and NRO, to ensure that it resolves the questions they raised.

## Continuation: [Decision Making through Consensus - take 2]

Presenter: Michael Saboff (MLS)

RPR: I already clarified this with MM outside, but we already have precedent. In general, I agree with the point that we operate with unanimity as the rule, and I think it’s also important that we have modified the process in the past to say that we can go forward even without unanimity in very narrow cases, such as the clarification we made about what is acceptable when blocking Stage 4.

MM: Let’s not have that discussion here; let’s come back to Michael’s topic. Yes.

PFC: MM mentioned rules needing to fail safe. I would not say that the current situation is a rule that fails safe.
I would say it is a rule that fails in a way that is acceptable to most of the people currently on the committee. That is one of the things I took away from MLS’s presentation. An example: this morning we talked about a number of examples of bad ideas where somebody could have vetoed but didn’t, or was persuaded to yield. Those are valid examples, and they stick out like a sore thumb in our memory, because regret is a very strong emotion. But we don’t talk about the vetoed ideas that would have worked out great, because that is just not something we can know. You could call that failing safe, but I’m not sure I would agree. I just want to point out that the bad ideas we regret are not the only examples of the process not working as we intended.

MM: I don’t know how to read the queue. This is just—this is not from CDA, but a restoration of the previous queue, right?

MLS: Regulations made in the comment.

CDA: I’m putting each individual’s three-letter acronym (or two letters, for those grandfathered in); otherwise it looks like me throughout the queue, which is not accurate. Next we have—did we skip ahead? Philip now.

NRO: When we talk about requiring more than one person to block, we should consider whether “more than one person” means that two people from Bloomberg, for example, can get together and provide support to each other, or whether we require two different organizations, so that companies that send one delegate are not at a disadvantage compared to companies that send four or five. If we want this to be at least two organizations, then we should also consider cases like Igalia being paid by Bloomberg to work on proposals: would they be able to decide together to block something, with Bloomberg asking Igalia to do so? This is not simple, as multiple companies here have financial relationships with other companies in the committee. So we need to be careful about this.

MLS: I agree with you. I have had conversations about this. If we require multiple people to block something, they should probably not be from the same organization; but we do have financial relationships that are not clearly known at times.

JSL: Agreeing. It does not work if they’re from the same company, or have that financial relationship. Whatever new process we come up with here will have to account for that.

PFC: I also want to suggest that, when we think about changing the rules, we build in the norm that we revisit the rule and see if it needs to be changed after some number of years. I see the current rules around vetoing as appropriate to a different time in the committee’s history, and I suppose it’s appropriate that we revisit them now. We may come up with a rule that is not suited for purpose in five years, so maybe we should revisit it in five years. I would love to have a mechanism by which it just shows up on the calendar that we need to reconsider it, rather than somebody having to spend a lot of emotional and social labour to bring it to committee every so often.

DLM: I wanted to bring up the change we made a couple of years ago to require vocal supporters for advancement, rather than taking silence as consensus. This proposed change seems in alignment with that: it makes sense to me that we should then possibly require two people to vocally block consensus. I do think it should be two, though. I made the same point later in the queue: I don’t see the reason for a 5% rule.
I think two is probably sufficient. The other thing that is interesting is the point about financial relationships or delegates from the same company: if we’re going to require a clear separation between the two people needed to veto a proposal, then we should require the same rule for the two supporters needed for it to advance.

SYG: On the point about having active support, I want to give a framing that I’ve been thinking about, especially after what has happened with decorators and ShadowRealm. The contrast between TC39 and something like WHATWG is that our culture of vetoes and blocks (for many reasons, not least the social and emotional cost of being the lone blocker) means TC39 operates more on a “can we live with it” basis. If you think of a spectrum between “we can live with something” and “we really enthusiastically support something”, stage advancement in TC39 can run the whole gamut: a proposal can be workshopped and compromised on enough that everyone can live with it, even when there is no strong active interest, especially from the browser vendors, in doing the thing. That kind of proposal can still get stage advancement in TC39, whereas it has a much lower chance of advancing or getting agreement in a body like WHATWG. So it can happen that a proposal advances to a later stage while nobody is really actively interested in it; we just grudgingly think we can live with it. If that is the state a proposal is in, I think that is bad, and I would like to try to fix it with better process. From where I’m sitting, it seems like the blocking culture of TC39 directly plays into us getting reluctant stage advancement. So if I can have input here, I would like to nudge us towards stage advancement meaning active interest.

EAO: Just what I noted there. I generally support the idea, but I think having two people support a veto is enough. The 5% part of the rule seems way too complicated, and counting whether or not we happen to be over 40 people, or whatever other limit would require three, seems unnecessarily complicated.

MLS: I can live with that.

JSC: There has been talk about the social cost, or the social pressure, of being the first to announce you are blocking. This is not a formal thing, but we have TCQ here, where a lot of people sign in using their identity. An idea: we could have people report that they would like to block a proposal, and TCQ would show a count of how many people are saying they would block. If at least two people (or however many meet the threshold for actual blocking) appear, then their identities would become public, perhaps after review by the chairs. So the idea is that your identity would not be revealed if you’re the only one who signals to TCQ that you would block. If another person also signals that they would block, then you could reveal it together, and perhaps that would mitigate the social cost, or the pressure of being the only blocker and the first to announce it. You would be able to see that someone else agrees with you on blocking. Basically, perhaps we can leverage TCQ or something like that.
In general, for signaling, following what SYG mentioned earlier about signaling general sentiments, I think we could leverage TCQ to have people signal their general attitude or temperature towards proposals, like “I could live with this” versus “I would like to block this, especially if someone else would also like to block”. I know that would get complicated with delegates, like organizations versus individual delegates, but that’s just an idea. “Am I proposing this instead of the two-person block change?”, asks LCA. I don’t think this is an instead. If we have a rule that two people are needed to block, we could use TCQ to implement it: if there is only one person who would like to block, their identity would remain hidden, but if at least two people would like to block, then TCQ would show it to everyone and the chair would reveal their identities, because they are blocking it. Does that make sense, LCA?

JSL: I would modify that just a bit, regarding the secret block vote: I don’t think the step of revealing the blockers is necessary. Using a tool like TCQ to at least take the temperature of the room makes perfect sense, but it can also remain completely anonymous. It’s just: “we’re not very enthusiastic about this”, or “this is something we definitely want”. You also want to frame it around a specific question. If we are going to do that, you have to have a specific question, and ask what the temperature is on that question.

JSC: We have temperature checks using TCQ with emoji, from what I recall, exactly like that. But I think it’s important that it be shown with a specific question. Instead of something like “positive”, “positive and following”, and “confused”, I would like it to be clearly “would you block this?”, and have it remain completely anonymous. I think that would be productive. Does that go with what you’re saying? It would mitigate the reluctance of people to block, by telling them anonymously whether they would be the only one to block or not.

CDA: We have several replies. I just want to note that we have a little over eight minutes left for this entire topic, so if people could try to be brief so we can get to the following topics, that would be greatly appreciated. PFC.

PFC: I would be very wary of any sort of anonymous voting, and would prefer that we only use it in situations where it’s absolutely necessary, like voting on the chair group, or maybe, say, where there are personal safety reasons to keep your vote anonymous. I don’t think it should be done in the case of voting on proposals. My goal here is not to remove all of the social cost of blocking. I think you need to bear some of that cost: if you’re going to veto something, you have a responsibility to make clear why you’re vetoing it, and to do so in a convincing way. So it would not be my goal to remove that burden from somebody who wanted to block a proposal.

CZW: My point is related to PFC’s: when we block, we need to work out how to unblock the proposal from advancing, not just add a +1 to the block without leaving a path to work out how to unblock.

JGT: Anonymous blocks tend to have a poor track record in political science, and they don’t bring out the best behaviors in people. Often that social cost is helpful.
So I’m sort of with PFC: I definitely do not want us to get into a position where some anonymous person is holding things up. That doesn’t sound good.

JSL: I think it’s important to understand that what we’re talking about with the anonymous temperature check is not an anonymous block. Blocking would still have to be explicit; someone would still have to raise their hand and say “I want to block this”. And I think it’s important to speak to the social cost: it can’t just be “I don’t want this to move forward”. If the proposal has been adopted by the committee, which has said “we think this is something we should work on”, then whoever is blocking advancement does have a responsibility to help figure out how to unblock it. It’s not just “I don’t want this to advance”; it’s “I will work with the champions and figure out a path forward that advances the work of the committee”. If that doesn’t happen, or if they get together and work on it and still can’t find a path, then it becomes a committee decision: does this advance, or do we park it? It’s not one person blocking it at that point; it’s the committee deciding that there is no path forward. So, you know, we can’t fully define what the social cost is, but part of it is that you have to work to advance it, within reason.

JSC: Just adding on to what JSL said: the concerns about actual anonymous blocking make sense. I’m just talking about having some sort of temperature check that tells people: if I blocked, would someone else also block? You would still have to justify everything. If you know the answer yourself and can predict what the other is going to say…If you don’t want the proposal to advance, you don’t need this question. It still could be useful, I guess, for other people. But if you already know you don’t want the proposal to advance, then you can just block it anyway. That’s all.

SYG: This is to respond narrowly to the idea that the blocker should be responsible for moving the proposal forward another way. That doesn’t make sense to me. The blocker has an obligation to explain and articulate a reason why they’re blocking, but sometimes the reason is that they don’t believe the problem is worth solving. It doesn’t make sense to me that whoever blocks then takes responsibility for advancing the feature.

JSL: It might be that I misspoke; that’s not quite the approach. It’s that the person blocking has a responsibility to try to find a path forward. A path forward does not necessarily mean advancing that proposal; it might mean you agree to disagree, and the thing just needs to be parked, with no way to move it forward.

SYG: I see, okay.

MM: I want to recount a conversation that RPR and I had in the hallway, part of which RPR mentioned. It very much surprised me and moved my position towards MLS’s, which I did not expect. First, I just want to mention the thing that came up here: it shouldn’t just be two people, it should be two people from two different orgs, and they should be two people who don’t have financial relationships. That’s a perfect example of the slippery-slope mechanism that RPR and I discussed before we came to the interesting insight that led us in your direction. I think that’s also worth recounting. Any rule can be gamed. If the rule is just two people, or two people from two separate orgs, I know how to game that.
It would be harder, but I would do it if I needed to, for what I consider to be good-faith reasons. I would simply not do any of this if I didn’t consider my reasons to be good faith. And if it were two people from two separate orgs with no financial relationship, I know how to game that too. The problem is that every escalation of the rule, to try to avoid some gaming problem, causes the person who needs to keep something blocked for what they consider good-faith reasons to escalate their political manipulation to keep it blocked. That creates bad feelings, which causes the rules to be escalated further, and every step of this weakens social norms in the attempt to replace them with formal rules. A lot of why we work is the general good-faith respect for the social norms that we have. Some of them are written down in How We Work; many others are just things that evolved in the air as a shared ethic, and many we don’t know how to articulate. But we have good social norms, and rules can start killing social norms by replacing them with what looks like politics, which leaves a bad taste in people’s mouths.

MM: Okay. Now, the two weakenings of my position, which take us in your direction, that RPR and I came up with. First: once something has reached Stage 3. Stage 3 is explicitly a signal, to browser makers in particular but to everybody, that you can now invest heavily in this thing, because it will only be stopped for very extreme reasons. So I’m open to considering a weakening of the single veto between Stage 3 and Stage 4. What the particular weakened rule would be, I don’t know; that would have to be part of the discussion, and I’m not agreeing to any of the particular rules mentioned here. But I’m open to the idea of something weaker than a single veto between 3 and 4, because of the magnitude of investment, and therefore the magnitude of the cost, if something is blocked from reaching 4. That’s one. RPR, please correct me if I’m mischaracterizing anything from the conversation.

MM: The other one is that, rather than the objector having to get a second person to object, which I find unacceptable, what we came up with instead I think was very interesting, and I’m wondering, MLS, about your reaction to it in particular: the objector has to get another person on the committee to agree that their reasons for objecting are good faith. The other person might disagree, might support the proposal, might hate the fact that there’s an objector, but they agree that the objector is holding their objection in good faith. That’s an adequate block. We would have to word it carefully, to not create politicking opportunities, but I would be willing to say that if you can’t get one other person to agree the objection is in good faith, maybe that is not an adequate situation for blocking.

MLS: I considered the social norms we have in place to already include that, but explicitly specifying it is, I think, good. Going back to your Stage 3 to Stage 4 point: I think we already have the social norm that, as a proposal advances in stages, blocking it should become more difficult.

MM: But “more difficult” is right now just in terms of the norms, not in terms of the rules. And I’m willing to consider strengthening the rule against blocking. I don’t have a particular proposal that I’m prepared to agree to, but I’m open to considering a rule change that would weaken the ability of a lone objector.
MLS: Okay.

MM: Am I getting it right?

RPR: I think we have a few ideas, and definitely the ones you were saying were part of that. A slightly refined version, which I chatted about with CDA (and which actually came from CDA): since we only want to employ this in emergency situations, and don’t want to change the general nature of the conversation, the need to get a second supporter for a block might be something that only kicks in after a cooling-off period. So we would do it perhaps one meeting later. At that point, we would seek someone else to speak up in favor of the block, whether that means agreeing it is in good faith or something else. We could iterate on that.

MM: I’m unwilling to agree that the second person has to actually object.

RPR: Sure, yes.

MM: Even for a cooling-off period. Except maybe during 3 to 4, where I’m open to other suggestions.

DE: The comment is about this. We already established a rule for 3 to 4, by precedent, during the class fields discussion, where we said you can’t object during 3 to 4 because you disagree about the design; it has to be for, you know, implementation-based reasons. We had somebody saying “I object”, actually a number of people saying that, and we said: no, this doesn’t make sense, and we proceeded. The thing is, with our current veto-based process, we end up on this path of needing, at great cost to us all, to invent these detailed legalistic explanations for why we can do things. If we had procedures that, in extreme cases, were based on supermajorities with extra pauses (?), I think we would be able to get past these things without nearly as much strife. These things cause actual problems for us. Anyway, in the particular case of 3 to 4, there’s really no action to take. We don’t –

MM: I’m confused about the norm versus the rule there. Somebody can say they’re blocking for a reason that is considered legitimate, and somebody who wants to block can claim, in a non-good-faith manner, that they’re blocking for the enumerated reasons. Under our current operating rules, can a claim to block for those reasons be overruled?

DE: So, you know, as I was saying before, people have different interpretations of what is going on with respect to blocking and procedural things in committee. It is ambiguous. Previously, with class fields, people claimed to be blocking, and then it advanced anyway. Whether that was the chairs making a determination, or an emergent property of the committee, is ambiguous; maybe more the latter. At least I think that’s what the chairs might have wanted at the time; I’m not really sure. But we end up working through these issues with a huge amount of extra mental effort, extra case-by-case decision making, and everyone worried about overstepping, and it’s a distraction from the language design work.

MM: I don’t think it’s a distraction. I think that overruling an objection should have a very, very high bar.

DE: Yes, agreed.

MM: And by the nature of the process, we need to always be talking both about the rules and the norms; overreliance on rules can really be disruptive.

DE: Yes. So I agree completely with what MLS stated at the beginning, which is that we don’t have shared norms here.
We have different people with different practices in terms of what they feel is appropriate for blocks, and this gives disproportionate weight to some people. We have to make sure that we can be open about all of the different concerns that everyone has, and not overemphasize the concerns of the people who feel more comfortable blocking.

MM: The other aspect that I think is very much worth being explicit about is that each browser maker de facto has a unitary veto: if a browser maker says “we won’t implement something”, it doesn’t matter what the committee does. And in general, on the theme of rules: part of TC39’s character, something I love, is that TC39 itself has no enforcement power. The coupling between TC39 decisions and what anybody else outside this room actually does is only through norms.

DE: This conversation is about getting to those shared, level-playing-field communication and consensus-determination practices. I think we agree on this very philosophical point.

MLS: I need to get going. Please continue, and put this in the notes.

LCA: I want to second MM’s comment on rules and the gaming of rules. I one hundred percent agree with you: if we come up with stricter and stricter rules about what is a valid veto, whoever wants to veto will just game the rules. Ultimately this will always end up being a case-by-case decision that the committee has to make, just like it is right now. We may decide to overrule a veto and ignore it, or we may decide to agree with the veto and not ignore it.

RPR: I just want to say thank you so much for presenting this topic, and for all your efforts.

MLS: I have enjoyed the rich conversation that has resulted from this.

LCA: I don’t think it makes sense for us to continue to escalate in any way, because if we do that now, we will have to do it again the next time somebody blocks. It won’t change the ultimate situation we’re currently in, where every time somebody blocks and maybe a majority of the committee does not agree with the block, it ends up being a case-by-case decision that leaves some people upset.

REK: I wanted to make a comment regarding the notion of financial relationships between member orgs, because it seems to me the spirit of the comment is about disclosing conflicts of interest, should we adopt this rule requiring at least two blockers or whatever the number is. I would caution against trying to define a particular notion of an outside relationship or a conflict of interest, because it seems like that would put the blockers in a position of potentially having to prove the non-existence of a financial relationship, and it also raises a lot of questions about what a meaningful financial relationship or conflict of interest is. For some of the organizations that belong to this committee, you can imagine that there are trivial contracts or financial relationships between them that committee members aren’t even aware of. So I would generally caution against encoding specific language like “financial relationships”, should we choose to adopt this process.

JSL: Just two points. One, I want to clarify, because I and others have said it a few times: I don’t think anyone suggested that seconding a block (someone blocking, and someone else saying “okay, I support this block”) implies that the second person also wants to block. And to speak to your point, Mark, you’re absolutely spot on.
It might just be: yeah, I might disagree, but I see where you’re coming from; yes, we can go through this some more.

MM: The standard of somebody else agreeing that the block is in good faith is consistent with that same person voting to advance. So once again, denotation and connotation: phrasing it as “I second the block” gets the connotations wrong.

JSL: Yes. One other comment, on what LCA and others were saying: adding new rules here should be the last resort. Adding new policies should be a last resort. Can we find a policy to make this better? Yes, we can. Everyone will hate it, but yes, we can. It should not be something that we reach for now. If we can come up with a better social norm that we all agree to, that is by far the better approach than devising new policy. I’m happy to help devise new policy if we need to.

JHD: I put this on the queue when MM was speaking. I think 3 to 4 is not where people’s concerns lie, and I think somebody gaming the rules in bad faith is something that we’re all concerned with. The social cost of objecting in good faith is incredibly high. The only reason this entire room doesn’t hate me is because they all understand I’m arguing in good faith and am willing to discuss, and so on and so forth; I have been an objector many times. The same general state of affairs seems to hold true for other objectors I have been present for side discussions about: it’s generally understood and appreciated when frustrating lone objectors are still doing it from a good place. And that matters; it mitigates the social cost a lot. As for someone arguing in bad faith: it will not take very long before that becomes transparent, and the social cost of doing so becomes very, very high. So far, I have not seen any nefarious throwing of bodies at the problem, you know, bringing new people in to burn all the bridges until they get their way; that’s the only failure mode I can think of for what I’m describing. So I think the assumption of good faith, and, as JSL and others have alluded to, making sure that if you’re an objector (whether you’re alone or not) you’re accessible and available for discussion of paths forward, mitigates a lot of the social cost. It’s still not going to be conducive to every personality type. I feel like most of the time this is a process that has friction and is frustrating, but is nonetheless functional.

CDA: We have +1s agreeing with REK’s comments from LCA and OMT. CM is next.

CM: I am very sympathetic to the concerns that MLS articulated. But I also observe that we have been, I think, reasonably successful with the current process for going on a couple of decades now, and I am nervous about the consequences of making major changes to that process being disruptive and destructive in ways that we cannot foresee. In the discussion here, people have put forward a lot of ideas which I think were well intentioned, but which feel like a lot of rules lawyering: trying to capture nuances by making the rules more precise or more detailed. I think these notions are sort of missing the point. I think we might benefit from clarifying the norms, with the documents about how we work being much more explicit or expansive about what the norms are, possibly evolving the norms or articulating them in more detail, to address some of the issues which I think have legitimately been raised.
But I am very nervous about making a change that turns the whole process on its head.

AKI: Yeah. Yeah. I share the feelings that CM mentioned.

PFC: The same thing I said before about rules that fail safe, and how that’s distinct from rules that fail in ways that are acceptable to people in this room: saying that we have a reasonable track record of success may be true, but it is also subject to survivorship bias. We have a reasonable track record in the things that matter to the people in the room. That’s a fair point. But we have not seen the examples of things that would have been good if they had gone forward, but were stopped by somebody digging in their heels at the right moment. The thing is, the losses that are prevented and the gains that are missed are both ultimately hypothetical and speculative. But it’s often easier to foresee short-term harm than it is to foresee long-term benefit, and I think that gives some validity to the survivorship bias argument.

JHD: I had a comment there. I mean, opportunity loss is less bad than shipped badness. Another way to rephrase it would be… of course, I immediately forgot. Let’s say you had an idea that was shut down because of problems that this proposal hopes to resolve: bring it back. If you have enough energy to do so, and it’s a good idea, convince someone else. The only cases I know of where something was a good idea and seems permanently killed are the ones where the people who had the energy and time to bring it back stopped doing so.

PFC: Yeah.

JHD: That is a failure mode, but, like, you can bring it back.

PFC: That’s exactly what I mean when I say that success is in the eyes of what is acceptable to the people in this room.

JHD: Okay.

PFC: Because, as MLS mentioned, people have run into roadblocks and then left. Those ideas aren’t coming back, and we don’t know about them.

CDA: Sorry. We have SYG next.

SYG: Yes. A similar point. I think, Jordan, what you say kind of reveals that you prefer a certain disposition of person to be participating here. This is something that MLS called out explicitly around being welcoming to new contributors. Avoiding the failure mode ought not to require superhuman persistence; that doesn’t seem like a good thing to expect of people.

DE: Yeah. Just agreeing with SYG. There are real, serious opportunity costs. We do lose proposals because people are made to not feel welcome. This is core to the diversity and inclusion work that we have talked about many times in committee, and I think it deserves continued emphasis. People have limits. We should encourage good work to be done.

CDA: All right. That is it for the queue.

RPR: Yeah. I would be happy to summarize what we heard. I am not going to capture everything, but I think we have generally agreed that there are problems to be solved here. Really appreciate all the different suggestions that have come in. We recognize this is a very delicate matter, and we really want to make sure that any suggestion, any proposal, any cure here is not worse than the disease. This is something that the chairs have spent time digging into, thinking about the past, and we are open to ideas, not just here in plenary but outside. We are very happy to work with people who have energy and ideas for taking this forward. We are appreciative of the discussions we had here today, which show some light at the end of the tunnel.
Does anyone else want to provide any summary statements on what we have heard?

CDA: I guess I would add that, of course, this started with MLS’s slide deck and statements of the perceived problems, but I think we have uncovered a broader category of problems, and potentially a broader number of solutions. I think we are all interested in things that improve our processes, so I am looking forward to continuous improvement.

CDA: All right. With that, that brings us through our scheduled topics, anyway. How are we doing, DE, on the breakout topics?

DE: We have 15 breakout topics proposed. I would encourage you to go to the breakout topics task; there is a link to the Google form where you can vote for which breakout topics you are interested in. I think we can leave a couple of minutes, and have a short break, for voting. And then maybe we have time for two sessions, or maybe it should be one session, given there is only an hour and a half left.

RPR: So thank you, everyone, for participating this week. I think it has been a meeting to remember. Thanks to our hosts: thank you, Michael and Kevin, for arranging an excellent venue; it was a superb social as well, on the Tuesday night. These things take a lot of energy to organize, and so thank you to F5.