From 950b7cee18c5e825ce4e7a9db8892e89c8610657 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Aki=20=F0=9F=8C=B9?=
Date: Sun, 17 Aug 2025 15:46:57 +0200
Subject: [PATCH 1/3] Edited transcript from the 109th meeting of TC39

---
 meetings/2025-07/july-28.md | 1106 +++++++++++++++++++++++++++++++
 meetings/2025-07/july-29.md | 1214 +++++++++++++++++++++++++++++++++++
 meetings/2025-07/july-30.md | 1008 +++++++++++++++++++++++++++++
 meetings/2025-07/july-31.md |  711 ++++++++++++++++++++
 4 files changed, 4039 insertions(+)
 create mode 100644 meetings/2025-07/july-28.md
 create mode 100644 meetings/2025-07/july-29.md
 create mode 100644 meetings/2025-07/july-30.md
 create mode 100644 meetings/2025-07/july-31.md

diff --git a/meetings/2025-07/july-28.md b/meetings/2025-07/july-28.md
new file mode 100644
index 0000000..c79a281
--- /dev/null
+++ b/meetings/2025-07/july-28.md
@@ -0,0 +1,1106 @@

# 109th TC39 Meeting

Day One—28 July 2025

**Attendees:**

| Name | Abbreviation | Organization |
|------------------------|--------------|--------------------|
| Jesse Alama | JMN | Igalia |
| Dmitry Makhnev | DJM | JetBrains |
| Waldemar Horwat | WH | Invited Expert |
| Guy Bedford | GB | Cloudflare |
| Chris de Almeida | CDA | IBM |
| Daniel Minor | DLM | Mozilla |
| Zbyszek Tenerowicz | ZTZ | Consensys |
| Jordan Harband | JHD | HeroDevs |
| Sergey Rubanov | SRV | Invited Expert |
| Chip Morningstar | CM | Consensys |
| Nicolò Ribaudo | NRO | Igalia |
| Mikhail Barash | MBH | Univ. of Bergen |
| Keith Miller | KM | Apple Inc. |
| Aki Rose Braun | AKI | Ecma International |
| Samina Husain | SHN | Ecma International |
| Olivier Flückiger | OFR | Google |
| Richard Gibson | RGN | Agoric |
| Rezvan Mahdavi Hezaveh | RMH | Google |
| J. S. Choi | JSC | Invited Expert |
| Eemeli Aro | EAO | Mozilla |
| Tab Atkins-Bittner | TAB | Google |
| Istvan Sebestyen | IS | Ecma |
| Daniel Rosenwasser | DRR | Microsoft |
| Michael Ficarra | MF | F5 |
| Andreu Botella | ABO | Igalia |

## Opening & Welcome

Presenter: Rob Palmer (RPR)

RPR: Okay. According to the international clock, I think it is time to begin the meeting. We have a good turnout—we have 20 people in here, and I think we'll have a few more. So welcome, everyone, to the 109th meeting of TC39. We're all very excited to be here. This is, obviously, a remote meeting. Oh dear, sorry—my window's wrong. Here we go. So, let's get stuck into it. We have our chairs here today—I think all three: myself (Rob), Ujjwal, and Chris—joined by our facilitators. I know Dan Minor will be here for quite a bit, and Justin will be with us as well. I'm not sure about Daniel; we shall find out. And let's just check on the notetaking up front. I think we have Carrie as transcriptionist. Is the doc all working? Yes! It is. That's good. Please could we have a couple of volunteers to help out with editing the notes?

RPR: All right. On we go. So please can everyone remember to sign in using the regular meeting entry form. I know sometimes people get the URL to log into this video by other means, but that's the official route, and it makes sure that we get all of the correct legal tracking information. We have a code of conduct; please do read it. It is available on the website, tc39.es. Do your best to reflect the spirit of it, rather than finely litigating through it, as we are all here to have a good time and to be respectful to each other. We are on U.S. Pacific Time this week.
So we've got our usual remote schedule of two hours plus two hours. Our comms tools have in some ways not changed, and in some ways everything has changed, in terms of TCQ as our primary tool. Today, if you look at the URL, you may have noticed the "reloaded" part in it. Please do have a go at opening it now; the link is on the Reflector post. The reason it is worth logging into this early is that this is the new version implemented by CHU. Is CHU with us today? I do not think so. So, we're using his new special deployment and upgrade of BT's original TCQ tool. This is all enhanced, and all very bespoke. I'm sure it will be high quality, but if any technical issues do arise, please do call them out. If they are of a blocking nature—if you cannot communicate using TCQ—please escalate to the chairs; that would be point-of-order worthy.

RPR: So, for anyone who hasn't used TCQ: you can see a screenshot of the queue of our agenda topics. In the middle, if you choose the queue section of the app, you will see four different buttons of increasing invasiveness. We start on the left, on the blue. Well, sorry, this is the screen we should not have anymore—if you see this, you're using an old version of TCQ. But normally we prefer new topics; that's the gentle way of suggesting what to talk about next. You can also discuss the current topic, or intervene with a clarifying question, or stop the show entirely with a point of order if something significant happens, like something being completely broken.

RPR: We also use Matrix for the realtime chat. Hopefully everyone has been onboarded to this; it is part of our standard onboarding procedures. Most of the conversation relevant to today's topics will be in the TC39 Delegates channel, with off-topic chat in the Temporal Dead Zone.

RPR: We have an intellectual property policy. A lot of people here, perhaps most, will be delegates from their member companies or institutions, in which case your institution will have signed the appropriate forms. Otherwise, we also have invited experts that are confirmed by the Ecma Secretariat, subject to completing the form. Everyone should be signing the royalty-free form. If you have not signed any of these, then we interpret that as you being an observer. You're welcome to observe in a read-only way, but please do not speak. All right.

RPR: Also, another important notice is that we are transcribing this meeting. Be aware that a detailed transcription of the meeting is being prepared and will eventually be posted on GitHub. You can edit it at any time during the meeting for accuracy, including deleting comments you do not wish to appear, and you can request corrections up to two weeks after the meeting's completion, and afterwards by making a public PR in the notes repository or contacting the chairs.

RPR: Our next meeting is another remote meeting, in roughly two months' time. And then following that, I'm very excited about the meeting after, which is in Tokyo. That will be hosted by my company, Bloomberg, in the same building where we were not that long ago—approximately 18 months ago. So, yeah, please come along to that. I think MF can speak highly of the capybara cafe; you need to book very early for that—you need to book now if you wish to go. All right. We have notetakers. Can I get approval for the last meeting's minutes? That would have been the meeting in A Coruña, at the Igalia office.
I will ask for any objections to approving those minutes. I see nothing on the queue, and no one speaking up, so we can consider the previous minutes approved. All right. And we have our current agenda, along with the associated draft schedule that CDA has posted. Are there any objections to that? Nothing on the queue, so the agenda for this meeting is approved. And next up will be SHN with the Secretary's Report.

## Secretary’s Report

Presenter: Samina Husain (SHN)

* [slides](https://github.com/tc39/agendas/blob/main/2025/tc39-2025-029.pdf)

SHN: Thank you. Great. I have a short update since our last meeting to share with everybody. Thank you, AKI, for helping me put this report together—please add any comments you may like to make as you see needed. I want to talk about some recognitions that we have had—the Ecma recognition awards—the GA approvals, and some new projects and members. The slides, as always, cover the code of conduct, which we have already talked about, invited experts, and the list of documents that are available for the community to read—TC39 and GA documents. And, of course, the next schedule, which RPR has already mentioned.

SHN: Recognition awards. Ecma gives recognition awards to honor individuals who have been involved with Ecma, done a considerable amount of work, and been involved in multiple technical committees or the Executive Committee. We honored four individuals at this last General Assembly. You all may not know everybody here, but I wanted to share the names. Touradj Ebrahimi has received an Ecma recognition award; he has been active with Ecma for over 20 years and has been quite a contributor in TC31, TC51, and a number of other committees. He represents EPFL, a not-for-profit member, which is the university here in Lausanne. We extend congratulations to Touradj. Then we have Hyun Kahng—I apologize if I mispronounce his name—for contributions in TC51. He is also the liaison officer for the relationship that we have with ISO in SC6. He has been active for 30 years in both ISO and Ecma, and has made a lot of contributions in Technical Committee 51 and in the preparation of multiple different standards. We also extend congratulations to him.

SHN: As many of you know, MLS has taken retirement from Apple. MLS is in the process of becoming an invited expert with TC39, which I think is excellent. MLS has been active within Ecma for a number of years and served as the chair of our ExeCom for three years, which was excellent. He has also been, as you know, a very active member of TC39, involved in many different areas of discussion. I think MLS has been a great individual to work with. He has brought a lot of benefit to both the ExeCom and TC39 with his style of working, his approaches, and his contributions. I'm not sure if MLS is on the call today, but I would like the committee to please recognize and congratulate MLS for his efforts. He is not on the call right now; we look forward to seeing Michael at the next meeting as an invited expert.

SHN: And last, but not least: Rob, congratulations. RPR, you have been nominated by your committee and members for this award. You have been working very hard with TC39, and your work has not gone unrecognized. We thank you for all of the efforts you take to ensure TC39 runs as smoothly as it does. Together with your co-chairs and with your leadership we have discussed many topics—sensitive, active, and exciting conversations.
You always kept everything organized and found solutions to move forward. RPR, thank you very much for your contribution. I will stop for a moment and let the committee congratulate you on receiving this award. Congratulations, RPR. There should be fireworks and clapping and so forth.

SHN: The next line of congratulations is for the approval of the Ecma standards. The 16th edition of ECMA-262 was approved by the GA in June, as was the 12th edition of ECMA-402. Congratulations to the committee for your efforts and work to complete the two. I know you're already beginning your efforts for the 17th and 13th editions, so that is fantastic. I also want to recognize all of the editors and all of the chairs of the subcommittees for the work that goes in, and recognize AKI for her work to ensure that both of the standards, and other work coming out of TC39, are all very nicely made into PDFs, so we have an excellent archive of all of these standards. So, congratulations.

SHN: Some new work that is going on: I have already mentioned TC56, which was launched in December; it is our first AI technical committee. That technical committee is looking for participation. They put out a call for participation for some of the work they are looking to do; I have specified it here on the slide. There are comments they are looking for, and there are definitely details available on GitHub. If you see the slides or get access to them, those are all active links, so you should be able to reach it and get any information or provide any comments that you want. If you have further questions or need clarification on this particular technical committee, don't hesitate to reach out to me or Patrick—there's an email provided there—and he may be able to guide you with some of the input. I think there are many members of TC39 that may have interest or may be able to bring value to the AI technical committee. So if this is of interest to you or other members of your organization, please take the time to look at it. The deadline for comments is coming up on the 15th of August.

SHN: We have some new members that we welcomed at the General Assembly—six new members. I have highlighted Consensys Software, which is an associate member and has been very active with TC39, represented by CM and a number of his colleagues. Thank you for joining; it is excellent. Of the other members listed there, Socket is for TC54, which is on SBOM; Fordham is for TC56, on AI; Apache Software is for TC54; Kindai University is for TC51; and the University at Buffalo is for TC56, which is also AI. So we have a variety of new members working on various different TCs' needs, which is excellent. We always appreciate that.

SHN: So, as I mentioned, it is a short report based on our last meeting. I will do a third-quarter review of all of the invited experts, as I always mention. I am continuing my relationship and engagement with W3C for any more work we can do, either with existing technical committees or by creating new technical committees. If you are interested, you can reach out to AKI or myself to see what we can do.

SHN: The annex has the usual information. I'm going to jump to the schedule: as Rob mentioned, we have another remote meeting coming up, and of course we're always looking forward to the in-person meeting hosted by Bloomberg in November, at the same venue as almost two years ago—I'm sure it will be equally excellent.
There are also dates that are important for us to know because of standards you would like to publish: the General Assembly dates, which are booked two years in advance, are noted for you as we plan the TC39 dates for the 2026 year.

SHN: And I think that is the end of my slides. I'm just going to pause for a moment and ask AKI: did you have anything further to add to the slides as presented?

AKI: No, I think that covers it.

SHN: Thank you very much. Now that I can see everything—RPR, I can see your face—congratulations on your recognition, and thank you very much for all of your efforts.

RPR: Well, yeah, thank you so much SHN, that was lovely to receive.

SHN: You will be receiving something a little bit heavier, but you will have to wait.

RPR: Yes, this is why I like in-person physical meetings.

SHN: We may mail it to you; I don't think you want to travel back with it. Send me a mailing address and we can decide later. It is okay. We have time. Thank you.

RPR: I tend to avoid taking bricks or anything heavy.

AKI: Anything that can be described as a weapon.

RPR: Yes, I did receive my tenure award from Bloomberg, a massive heavy thing. Yes.

SHN: This would be alongside that. We will find the best way to get it to you.

RPR: Thank you for thinking of me.

SHN: Thank your committee and your nominators. Are there any questions?

RPR: On the queue, there are no questions. Any other questions for SHN? No? All right then. Thank you, Samina, for your report.

### Summary

SHN provided a brief update since the last TC39 plenary meeting, supported by AKI, covering recognitions, standards approvals, and new projects and members. Four individuals were honored with Ecma Recognition Awards at the recent General Assembly; they were recognized for exceptional work for Ecma.

The 16th edition of ECMA-262 and the 12th edition of ECMA-402 were approved. SHN applauded the subcommittee chairs, editors, and AKI for quality outputs and documentation.

New work was mentioned with TC56 (AI), where there is an open call for participation, accepting public comments on its draft standard until 15 August. Details and links are available on GitHub or via email. Ecma welcomed six new members, including Consensys Software, active in TC39.

## ECMA-262 Editor's Report

Presenter: Kevin Gibbons

* [slides](https://docs.google.com/presentation/d/1SR58Fn8tnOt1Y_OOF-ZrqVCaWbHubbsbXadb45Ndpq8)

KG: So, a small batch of normative changes. Before I get into this, I want to mention there are a handful of other outstanding normative changes that the editors are aware of—most significantly explicit resource management—that we haven't had time to get to. The editors have each individually had a bunch of stuff going on in the last couple of months that has prevented us from spending as much time as we would like; for example, SYG has been changing employers, that sort of thing. So we are hoping to get back to our usual pace in the coming months.

KG: We did land a few normative changes: `Error.isError`, and the change from RKG to finally specify the web-reality issue around function assignment on the left-hand side of assignments. This has been a very long time coming—one of the divergences between web reality and the spec that no one had fixed until very recently. So thanks there. For this last one, someone noticed that `WeakRef.prototype.constructor` was specified to be non-writable; we have now made it writable, like every other `.constructor` property. We're sure this was a bug.
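
(Editorial aside, not from the meeting: the property attributes being discussed can be checked with a snippet like the following; the values shown reflect what shipping engines do, and now what the spec says.)

```js
// WeakRef.prototype.constructor was specified as non-writable, but every
// shipping implementation (and test262) made it writable, matching all
// other built-in `.constructor` properties.
const desc = Object.getOwnPropertyDescriptor(WeakRef.prototype, "constructor");
console.log(desc.writable);     // true in shipping engines (and now per spec)
console.log(desc.enumerable);   // false
console.log(desc.configurable); // true
```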

KG: It was writable on the web, it was writable in test262, and it was writable by default. We think this was just a bug. `WeakRef.prototype.constructor` is weird in that it is not required, and so it was specified in a different way; we think the fact that it was specified in a different way led to the divergence. So we landed that without asking for consensus from committee, since we understood the intention to be that it should have always been writable. Certainly that's what everyone implemented.

KG: Not too much to call out in the way of editorial changes. We did want to call out this change to the module loading machinery from NRO, which slightly simplifies some of the machinery. There was a field on an internal record that could have just been a local variable. This only affects you if you are looking at the module machinery—but if you are, it does affect you, and the machinery is very complicated, so we want to make sure people are aware of that change. There will probably be similar forthcoming changes along those lines. Hopefully that will be a simplification, but it is a small amount of change.

KG: Okay. I'm not going to go through the upcoming work again, except to call out the first item: reducing optional parameters in operations. There are a handful of optional parameters that we think are causing more trouble than they are worth. We just completed an audit of all of them and made a list of which optional parameters would be better served by something explicit. We have slowly been going through and making those changes where appropriate. Yeah. That is all that I have. Thanks very much.

## Ecma 402 Status Updates

Presenter: Ujjwal Sharma (USA)

* [slides](https://notes.igalia.com/p/2MUpJraYd)

USA: We have four changes since the last time we all met. These are the four editorial things that were merged into the spec—two of them by ABL and two of them by RGN, our editor. First we have an Oxford comma markup fix, the sort of editorial change that is really just fixing editorial bugs, nothing beyond that. That is how to describe the first one. I could go into any of these, but I will just quickly summarize the rest of them.

USA: The next two, by RGN, are related to the variants work that we did. We uncovered a few things after the last normative change, if you remember, regarding `variants` on Locale instances: the validation of the variants in the getter, and then the checks. And then finally, #1016 actually puts us in alignment with the ECMA-262 spec. You can see there is a bunch of comments here, one for each of the constructs that we have; it is just taking the information that we already have and converting it into the ordered-list format that is in 262. So this is editorial alignment, nothing spectacular or new in any way. Yeah, these were the four things that we got through. So thank you.

## ECMA-404 Status Updates

Presenter: Chip Morningstar (CM)

* no slides

CM: Not much to report, as usual. It has been a slow week in JSON land. Although, as I was contemplating this report, I was struck by the distinction between ECMA-404 and HTTP 404.

## Test262 Status Updates

Presenter: Richard Gibson (RGN)

* [slides](https://docs.google.com/presentation/d/13A4dp4kHj5aAJLwjehYt2O0rCHiTnh5XTD3_riNngeI/edit?usp=sharing)

RGN: Okay. Not much has changed in test262 since the last plenary.
We have simplified verifyCallableProperty for authors, which is part of an ongoing effort to reduce the burden of boilerplate-type testing. That landed about a month ago. And we also have a little bit more implementation coverage: a couple of tests for IteratorClose, a couple of promotions of tests from staging, and a couple more for `Uint8Array.fromBase64`. Moving forward, we are going to continue increasing affordances for test authors to reduce their overall burden. We've got a lot of new tests that still need review; the maintainers have divvied them up, but haven't yet closed them out. And of course, help is always wanted. We're on the TC39 calendar every third Wednesday.

RPR: Okay. There is nothing on the queue. So yeah, RGN has a cry for help on test262—please do join him. Thank you, RGN. We shall move on.

## TG3 (Security) Status Updates

Presenter: Chris de Almeida (CDA)

* no slides

CDA: Hello. Yes. TG3 continues to meet weekly to discuss the security impacts of proposals. Please join us if you are interested in this topic. Thank you.

## TG4 (Source Maps) Status Updates

Presenter: Nicolò Ribaudo (NRO)

* [slides](https://docs.google.com/presentation/d/1UH4MRfP8K6cuAHDhayH_PzIPCGgfxtePG5NOPGSfK6A)

NRO: So there hasn't been much going on in ECMA-426. There have been a couple of minor fixes to the spec, but nothing significant. However, there is work in progress from our spec editor to actually define how to use the mappings that our spec defines: basically, you get a list of mappings and a position in a generated code file, and we actually say how those mappings translate the position to the corresponding original file. That is work in progress; you can check it in PR #195.

NRO: There is an update on the scopes proposal. We have spec text under review, and most of the spec text is ready. There are some things still being worked out around how recovery and edge cases are handled, but it is really just fine-tuning. And we have implementations in Chrome DevTools and Firefox. We find that for our process it is very useful to have very early implementations—we are talking about files that are very difficult for humans to read and understand without actually trying them in a browser.

NRO: And we have updates on the range mappings proposal. We have consensus that range mappings allow skipping mappings in some cases, and we recently changed the proposal to an encoding that is more efficient when there are a lot of mappings, which is when efficiency matters the most. An implementation for testing purposes is in progress. You can check the link here if you want to see what the new encoding is like.

NRO: And we also have a new proposal: source hash. It is at stage 1 of the TG4 process. It assigns a unique identifier—a hash—to original source files that can be used, for example, for caching purposes, so dev tools don't have to reload all of the files every time. Normally, things on the web are cached by URL, controlled by HTTP headers, but that is not enough for source maps, because development files change quickly while keeping the same URL. This is it. Are there any questions?

## TG5: Experiments in Programming Language Standardization

Presenter: Mikhail Barash (MBH)

* [slides](https://docs.google.com/presentation/d/1FUzQY6Wpp6BqJKv2XJFWHcZDqkLftkwHCyW1TnXSOmI/edit?usp=sharing)

MBH: A short report from TG5. In terms of outreach, YSV and I gave a talk about TC39 and TG5 at the Programming Language Mentoring Workshop, which was held at the PLDI 2025 conference.
That's the premier conference on programming language design and implementation. And also, at the European Conference on Object-Oriented Programming (ECOOP 2025), YSV and I arranged a Workshop on Programming Language Standardization and Specification. We had 10 talks; the two talks mentioned here on the slide, given by MF and by Jihyeok Park, a researcher from Korea University in Seoul, were about the tooling used by TC39 and about the ESMeta tool, respectively.

MBH: TG5 continues to have monthly meetings, and I would like to bring everyone's attention specifically to next month's meeting. At the end of August, we'll have researchers from EPFL in Switzerland talking about their formalization and mechanization of regular expression semantics. We would like to invite everyone who is interested in that, or who works on the implementation of regular expressions in engines, to attend that meeting.

MBH: We also continue arranging in-person TC39 workshops; the next workshop will be co-located with the hybrid plenary in Tokyo. It will be one day before the plenary, on Monday, the 17th of November, and we'll be hosted by Bloomberg.

MBH: And one more remark: YSV is on leave until July 2026, so the current co-conveners are myself and MF from F5. That's it.

RPR: Okay. Thank you, MBH. Are there any questions on TG5? Well—

CDA: Wait. Sorry, I missed the part about the convener. MF is now co-convener of the group?

MBH: Yes.

CDA: Okay. I think we formally have to do that through plenary, right? We're changing the composition of task group conveners; that has to be—shall we do that now?

RPR: Given this was a surprise, maybe I should ask: does anyone want more time to think about whether MF is okay as convener for this task group? Okay. In which case, I think we could probably just go ahead right now and ask for—oh, go ahead, SHN.

SHN: Yes, I just want to say: if there is no opposition, and the convener the TG is proposing is accepted by the technical committee as a whole and through the chairs, then you can do it on this short notice. You should be fine.

CDA: I am a plus-one in support of MF joining the conveners group of TG5.

RPR: Thank you, CDA. Are there any more messages of support, or objections, to MF becoming a convener for TG5? Nothing on the queue, so no objections—we have heard only support. Congratulations, MF. And thank you for ensuring we are doing things properly, CDA. All right. Is there anything more you would like to say, MBH?

MBH: That’s it. Thank you.

## Updates from the CoC Committee

Presenter: Chris de Almeida (CDA)

* no slides

CDA: It has been quiet on the code of conduct committee front; we have no new reports or anything. The only thing we've been discussing a little bit—though we haven't made a lot of time to discuss it within the group—is the ongoing issue in the repo dealing with AI and large language model–contributed generative content and the like. So please head to that issue in the repo if you have thoughts or would like to see the current state of what we've been discussing. Although I will note that, I believe, KG has a topic coming up later, too, specifically about this, so we will be talking about that shortly. As always, if you're interested in joining the code of conduct committee, please reach out to one of us. Thank you.

RPR: Always looking for new members. MF, you're on the queue.

MF: I would just like CDA to state the issue number for the notes, so we can be clear which one he is talking about.

CDA: It is issue 62. I will be pasting it into the delegates chat as well as the notes. [tc39/code-of-conduct#62](https://github.com/tc39/code-of-conduct/issues/62)

## Preparing Summary and Conclusions

Presenter: Samina Husain (SHN)

* [slides](https://github.com/tc39/agendas/blob/main/2025/tc39-2025-030.pdf)

SHN: Thank you. This is a very open discussion. AKI and I put slides together just to help clarify. Over the last little while, there was some discussion on Matrix about the purpose of summaries and conclusions—what they are, and their value. So we put a few slides together so that we can all be aligned and agree on a way forward that helps both the committee and the requirements that we have at Ecma, and in general makes this something that is not cumbersome to work on. AKI is on the call; I'm going to start the conversation.

SHN: First, the difference between summaries and conclusions—this is a very simple slide just to set the stage. The summary is stated very clearly here: the substance of the presentation and conversation. Something simple, in sentences or bullets—not just a sentence that says it has been approved, but statements that define what it is. Simple statements that identify the key factors that have been discussed in your presentation slot, whatever its purpose may be—whether it is to come to stage one or stage two, or to bring in some other options. There are some examples given in the slide bullets. You have a conversation, you bring up the options; your summary may say options A and B were discussed, and here is the result of that. Were there other areas in which the delegates raised specific concerns that should be highlighted and remembered before the next steps take place, or before the next conversations happen? So it is just a short summary of what you're doing.

SHN: And the conclusion, of course, is the decision or the commitments that are made as a result of the discussion. So if you do move on to the next stage, that can be clarified; if it goes to the next stage or next step and needs some actions, those would be noted; and if those actions have an individual associated with them, that should also be identified. It should be brief and clear, so that if somebody looks back at the notes they can understand what the conversation may have been—which could have been a 30-minute conversation, or whatever time. Two very simple and easy ways to do a summary and a conclusion.

SHN: Why do we do this—and, I should say, where are they used? It is important that we have summaries and conclusions; we often talk about this. It is to understand what the decisions were and why they were made. They are used in the Ecma meeting minutes, our formal archive of information, and of course the technical notes. The notes that are currently generated through the stenographer are quite long and a lot of reading, so it really helps, I think, to have this summary and conclusion to let anybody who looks back at the notes find what they are looking for quickly, in a time-efficient manner. AKI, do you have any other words you want to add on the first two slides, the definitions of summary and conclusion and where we use them?

AKI: No. I think that covers it pretty well.

SHN: So, in preparing for it—and to keep it simple and efficient for everybody—as you all prepare your slides for the sections where you speak, you may also be able to prepare your summaries in advance. You know what you're going to present. You may embellish a bullet or two, or a comment or two, but 80% of your summary may be done before you give your presentation, because you have the slides. You also know what you want to achieve, so that may help you prepare some words for the conclusion, because hopefully that's the direction the results are going to go. So this can be done in advance, and hopefully it can be a bit more time-effective for the plenaries, and also for yourselves, since preparing in advance means you don't have to think about it during the meeting and get distracted. I think it could be a little bit more productive that way.

AKI: For example, if you go back to slide two, you can see an example of a summary where the first two bullet points were clearly written ahead of time, and the third bullet point was added afterwards because that happened within the conversation. It wasn't necessarily something that involved a commitment; it was where the discussion went at the end. So the summary being about the presentation and the conversation means you can summarize ahead of time, and you may wish to add more to the summary as the conversation goes on.

SHN: Okay. Thank you, AKI. The idea of this short presentation was to lay the groundwork so we can agree, and the committee can feel comfortable, that this definition of a summary makes sense and the definition of a conclusion works. Since there was a lot of conversation back and forth after the previous plenary, I would like to just stop now—these are the only slides I wanted to present. You all have these slides. Are there questions or concerns? What can we do better, or clarify better, to enable everybody to do summaries and conclusions?

WH: How are these things going to be reviewed? The nice thing about notes as we have them now is that we can review them and update them in real time. If somebody will be writing summaries based on their interpretation of what the discussions were about during a meeting at some indefinite time after the meeting, then how would those things get reviewed? Reviewing them at the time of minutes approval at the next meeting is kind of too late.

SHN: Thank you, WH, that is a very good question. I did not want to imply they would not be reviewed. The idea is that the individual is prepared. So at the end of your session, when somebody asks whether you have statements to make for a summary and conclusion, you are prepared: you have them written, or you state them, and the stenographer will write them down where they can be seen by everybody. It is done at the time. It is not intended that this would happen without the visibility of all of the members; just the preparation may be done in advance.

WH: I'm a bit confused about the preparation. Presenters can prepare summaries based on what they are presenting, but it is hard to prepare ahead-of-time summaries based on what direction a discussion might take during a meeting.

SHN: Yes, agreed. And I think that can often happen. The idea is just to give individuals a framework to work with, which may help them ease into the conversation and write the words.
If you feel it can be done differently, or could be more efficient, I'm very happy for input and feedback. This is just some thinking that AKI and I put together that we thought may help. But I'm very open to other thoughts. It should be simple for everybody. That's all.

WH: Given what a lot of our employers are working on, it ought to be fairly simple for anybody who wants to see summaries to just run one of the AI summarizers over the detailed notes. Why is that not a solution?

SHN: So, we have an open conversation about which AI tools to use and what to do with them. Again, if you're going to write your summaries as we recommend—when you finish your presentation, or when you come to an agreement together, and then put them into the notes—and you want to use an AI tool, I think it also depends on whether it conveys the correct message.

AKI: Yeah. So hopefully this is all contemporaneous; hopefully this is all happening as we go. You write your summary of your presentation ahead of time. When you're done with your presentation, you go into the notes and add your summary. If there are more bullet points that are appropriate, because the conversation went somewhere you were not prepared for, you add those, and your conclusion, like you always do. The suggestion is just to have something ready, because you know where you're starting.

AKI: As far as AI tools go, AI tools get standards meetings wrong consistently, in my experience. I'm not super excited about trying to rely on that—not to mention that ISO says they don't belong in minutes and that kind of thing. But more importantly, this is something you should already have that you can add to. It is contemporaneous; the idea is not somebody coming in later and then deciding.

WH: Okay. That realtime aspect of it resolves my primary concern. Thank you.

RPR: Okay. Yeah, I was going to say, I think we confirmed it with WH, but we're talking here about the in-line summaries and conclusions that already go into the notes, so there's no question of any review process being bypassed. And the point I was going to add is that if we did not do this—if we did not own the message and have the most expert people, who were in the conversation, define the summary—then we are missing the opportunity for them to instill that level of accuracy and review. Because if our notes are missing the summaries, I fully believe that other people will rely on all sorts of different nondeterministic tools to make guesses that will often be wrong.

SHN: I think the use of AI tools to do summaries is still a very big discussion. I think it comes up later on, and we'll have to look at it very carefully. AKI mentioned the ISO rules, and a few other rules are coming out. Perhaps for the next plenary I will bring a clear indication of what is ideal there.

WH: I did not mean using an AI to publish official summaries. What I meant, as an alternative, was that anyone on their own could feed our detailed notes into an AI, ask the AI questions, and see where the AI points them. So this would not be an official TC39 activity, but people could do this on their own just to help themselves read the notes.

AKI: So, as SHN mentioned, not only are the summaries and conclusions there for people reviewing the technical notes, they are also used in the official minutes that get published.

SHN: So, WH, it is not just about going back and helping you find or define the summary; we already have to have it prepared and archived under our rules and bylaws.

WH: Yeah, sounds good. I have no objections to that.

SHN: Thank you, WH, for raising those points and helping us talk this through. What is important is that we're having this particular discussion at the start of our meeting. If we can take the example, or the direction, of summaries and conclusions as defined in these slides and use it as this meeting continues over the next days, I would be grateful for more feedback—to see whether it works like that, or, if it doesn't, what else we can do to make it more doable or to explain it again.

RPR: I just wanted to highlight that MF suggested he will make a practice of writing a summary ahead of time, as perhaps the final slide in a presentation. This seems like a very good approach—an obvious place where people can remember to do this ahead of time.

SHN: Yes. Thank you, MF. I think that is a great idea and a great way to do it, and then you massage it as needed. Thanks. Of course, it is important that you repeat that message so the stenographer can make sure to capture it in the notes; that's where we do need it. So thanks. Any other comments?

RPR: The queue is empty.

SHN: Great. Thank you, RPR. And as the days go on, if there are questions, or somebody just wants to touch on this topic while they're doing their summaries and conclusions, just ping AKI or me for any clarification.

RPR: Thank you. Yes. We will try to be diligent in this meeting about making sure these get recorded. I would also say to everyone here: when we get to the point of the meeting where we verbalize a summary or conclusion, that is the best time to jump in, either by updating the notes or, if you need to, verbally—normally that's a period of silence near the end of a topic. Resolving these on the fly, whilst we have all of the context in our heads and it is just happening, is the easiest time to do this. As time goes on, chasing people to do this becomes harder and harder. Thank you. All right. I think this has been an excellent conversation.

### Speaker's Summary of Key Points

SHN and AKI addressed the need for clear and consistent use of summaries and conclusions in TC39 meeting documentation. The objective was to establish a practical approach for capturing key discussion points (summaries) and resulting decisions or actions (conclusions) in real time. It was clarified that summaries should succinctly reflect the substance of a discussion or presentation, while conclusions should document outcomes, actions, and responsible parties. Presenters are encouraged to prepare drafts of these statements in advance based on their presentation slides, with the understanding that they can be updated live as the discussion evolves.

Questions were raised about review processes and the reliability of AI-generated summaries. It was clarified that summaries and conclusions ideally should be written and reviewed during meetings and included in the live meeting notes, thus ensuring accuracy and transparency. Preparing a draft summary ahead of time (e.g., as the final slide in a presentation) was suggested as a best practice.
The discussion emphasized that this practice supports Ecma's documentation standards and helps future readers quickly understand past discussions without parsing lengthy transcripts.

### Conclusion

There was agreement to try this framework during the ongoing TC39 meeting, with feedback invited to refine the process. Attendees were reminded to clearly state their summaries and conclusions during sessions so stenographers can capture them for the official record. The practice of drafting summary/conclusion slides in advance was endorsed, with the flexibility to revise live as needed. This initiative is expected to improve clarity, accountability, and efficiency in TC39 documentation.

A Summary: Distills the substance of the presentation and the conversation into easy-to-follow bullet points.

A Conclusion: Records any decisions or commitments made in the course of, or at the end of, the conversation.

## `F.p.toString` incompat for builtin accessors

Presenter: Keith Miller (KM)

* [proposal](https://github.com/tc39/ecma262/issues/3652)
* no slides

KM: (presenting issue #3652) So I was going to withdraw this, but—I guess, should I give the context? I guess we should start with the context. Since PR #1948, we had originally believed that the expectation was that the toString of all getters should have a `get` prefix. We did not implement this in WebKit/JSC; when we made a change to do that, it broke some sites. Since then we have found that the get/set in the toString is actually optional. However, to add another fun wrinkle: depending on how, in our internal implementation, the getter is built, we may or may not add the `get` prefix. My understanding of optional things in the spec is that you must pick one side of the optional and always take that. If you scroll down you can see the comment from MF on this.

KM: And so I guess the question is—we're also unclear whether we can switch the ones that do have a `get` to not use a `get` internally. Keep going down a bit further, to the last comment from MF. Up a little bit, up a little bit. Yeah.

KM: So my understanding of optional is that you have to pick one way or the other; it is not implementation-defined such that you can pick either side any time you want. And so there is a question of whether we should loosen this: we're unclear whether we will be able to remove the `get` from the cases that do use it, and it certainly appears we cannot add `get` to the missing cases.

KM: And then there's also another question that was brought up in the thread that we should discuss, about polyfilling, since this makes polyfill authors' lives harder. We don't necessarily have to discuss that now.

MM: I have a question. I'm already on the queue. I didn't understand: what is optional? What are the two choices?

KM: On the name property—I guess I will just clarify again. The name property on a getter function object has to have the `get` prefix. That is not optional, and that's what everybody implements, as far as I'm aware. Whereas when you toString a built-in function—one provided by us, or by W3C, or some other spec—the result can be `function` with a `get` in there before the name that the spec originally gave it, or not. That `get` is optional in the spec, in the intrinsic spot that adds the name property. But we are wishy-washy about it: it depends on implementation details.
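
(Editorial aside, for concreteness—a rough sketch of the divergence being discussed; the snippet is illustrative, and the meeting did not identify the specific accessors involved.)

```js
// The *name* of a built-in getter must carry the "get " prefix:
const desc = Object.getOwnPropertyDescriptor(Set.prototype, "size");
console.log(desc.get.name); // "get size" — required by the spec

// Function.prototype.toString on the same getter is where the spec
// leaves the prefix optional; engines produce one of:
console.log(desc.get.toString());
//   "function get size() { [native code] }"  (with the prefix)
//   "function size() { [native code] }"      (without the prefix)
```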

KM: Our concern—we know it is not possible to add the `get` to the ones that are missing it, but we haven't done an experiment to remove it from the ones that do have it, and we're worried that's also going to be web-incompatible.

MM: So, if something is web-incompatible, then the other browsers should run into it as well. What do the other browsers do? And are they experiencing the same incompatibility?

KM: I'm fairly sure—maybe someone else can correct me if I'm wrong—I'm pretty sure other browsers always put the `get`. The problem is that the sites in question don't do behavioral testing; they do "if Safari, expect this behavior".

MM: Oh. Oh. That's terrible.

KM: It is the worst form of checking: check the browser, and then expect the corresponding behavior.

MM: Oh. Do you know why they are doing that?

KM: We don't. We haven't done a huge amount of investigation into why. We ran into one site—something like a New York State tax form—and we were able to reach out to them and get them to fix their check to be behavioral rather than browser-based. But then later, after we landed the change, we ran into issues with Russian consulate websites, and we don't know how many more sites we will run into that do that. We're taking a "fool me once, shame on you; fool me twice, shame on me" approach, and have decided maybe it will not be compatible.

MM: Okay. So it might very well be that, if they are switching on "do I seem to be on Safari", then any change we make to the spec will be incompatible with the other side of that conditional?

KM: Correct. So the question I'm proposing, I suppose, is: should we change that optional for adding the `get` to the toString into something implementation-defined? Rather than an optional—which, if I understand correctly, means an implementation must pick one way they are going with that optional any time that abstract operation is executed, and cannot vary it. But perhaps I'm wrong on that.

MM: What specific built-in accessor did you—first of all, did you encounter this on more than one built-in accessor?

KM: I don't know. I think they just bisected the change.

MM: Okay. Do you know which built-in accessor you were running into?

KM: We don't, because no one did the work to pin it down specifically. It's unclear offhand. There are many, many, many.

MM: Okay. So I will just state—my—

KM: It's probably about a 50/50 split between the number of built-ins that have the `get` today and the ones that don't.

MM: I think we need more information before we as a committee can come to a conclusion on this.

KM: Okay. That's a fair point. I guess—well, what information? Do you want to know specifically which accessor is the one in question? Because it could be multiple. It could be that they take a handful of them and do some kind of attempt at security by checking them per browser. I guess I don't know what information to come back to the committee with at this point that would meaningfully affect our decision.

MM: Yeah. I mean, with, for example, non-extensible-applies-to-private, SYG from Google came to the committee with all sorts of implementation investigation to give us a very concrete sense of what the nature of the incompatibility was. And the way the committee dealt with web incompatibilities definitely depended on the concrete nature of the incompatibility they ran into.
The fact that these sites were doing something conditional on Safari is really fascinating. I'm glad you led with that. Let me see if I understand your proposal. Your proposal is basically to leave the choice optional, but to have the optionality be finer-grained: it doesn't have to go the same way for all built-in accessors. You allow each implementation to choose which ones have the `get` or `set` prefix and which ones do not, on their own basis. Or conceivably, I would guess, even on the basis of—well, I shouldn't read more into it than you said. So yeah, I would just—

KM: It is possible—I would have to reread the spec to be 100% sure—that we still wouldn't be meeting the spec if all built-in getters did not have `get` but JS-created ones did have it, because that would be splitting on that optional. But beyond that, I'm still asking: even if we decided on that split, is it okay to have a split within the implementation-provided ones, between those that have the `get` and those that don't?

MM: Yeah, that's what I was trying to restate as what I take you to be proposing—that each implementation gets to choose which particular accessors have it, rather than having a single optional choice that is blanket across all built-in accessors. Am I getting your proposal correct?

KM: That's right. Yeah. But I think even if we just did the first version—built-ins versus non-built-ins—that would still be a spec change from what it is today.

RPR: We do have plenty of other people on the queue to talk about this.

JHD: Yeah. I was just adding some color to what MM was suggesting. If there is a less consistent form of optionality that would address the Safari incompatibility, we could put that in but mark it as legacy to discourage people from doing that. The indication would be that this would allow Safari—or any other web engine in the same boat—to avoid a willful violation, but still encourage people to implement things in a consistent manner.

KM: Another option would be to enumerate the ones in this state today and have an Annex B exception for those getters—with all the interested parties collectively adding their legacy ones too—to say that those are exempt from this requirement.

MF: So my reading of this step that says to do it optionally is that you can make this choice every time that you evaluate this AO. I do see how you could read it as the implementation having to make the same decision every time. We can clarify that editorially. The intent here is that you can make the choice independently every time. The way we do it with legacy features that are optional, you do not get the choice every time—the choice is made once. Though those are presented in a different way than this step is. So I'm happy to do something editorially here to clarify that.

KM: I see. The reason I assumed it is not that way—perhaps I'm wrong—is that a valid implementation could then flip a coin every time it executes this statement in the spec, and pick. That would still be valid? That seems like it would be undesirable, which is why I assumed it was not that way. But perhaps I misread it.

MF: I think that is permitted.

KM: Okay. That is good to know.

DLM: Yeah, just to talk a little bit about the status in SpiderMonkey. We have a bug on file, but we never did implementation work here.
I believe, since it is optional—and I don't think we're setting the prefixed name anyway—we might already be compatible with this, but I would need to understand it a little further. I also want to say thanks to Safari for investigating this. It is never great to break the web for your users. I understand the desire for empirical evidence, but our users do have to take priority over gathering evidence.

KM: To be clear, it is not easy to gather this kind of evidence even when it is possible: we didn't find out until six months after the code change landed, and we just haven't shipped yet, unfortunately. But to clarify: what do you do today? Do you put the `get` on JS, like user-provided, getters? Or is it just built-in ones?

DLM: I'm not sure. I would have to investigate that.

KM: Okay. I was just curious for my own edification. Thank you.

MM: Yeah. So I agree with MF's interpretation of what the language meant when we were deciding on this. As for the dynamic coin-toss scenario, I very much agree with KM that that would be an unpleasant thing for anybody to do. So as we clarify this, I would not like language that suggests that a dynamic, nondeterministic choice is a reasonable interpretation. And maybe even—if that becomes a practical issue, which I doubt it will, but if it were to—I would like to come back to the committee asking that there be some way to state that it is a static choice, not a dynamic choice, so it doesn't introduce runtime nondeterminism. Like, for any given platform, these choices are fixed once a program starts running. Something like that.

KM: Yeah, you could certainly say it is fixed for a given set of inputs. That may be an easy way to say it. Is that accurate, MM?

MM: Except that I don't know what "inputs" means in that case.

KM: Like in this SetFunctionName here, right? For a given function name and prefix, the optionality should resolve the same way every time.

MM: It should be the same every time for a given implementation.

KM: Right. Correct.

MM: Right. Yes. Since we're talking about revising the spec at least to clarify, I would like to revise the spec to be specific on that. Introducing runtime nondeterminism is something I want to stay very clearly away from. But with that caveat, and with MF's interpretation—if we make that piecemeal, or, I'm not sure what the right terminology is, fine-grained optionality, not a blanket choice—if we make that clear, does that address all of your concerns, KM?

KM: I believe so, yeah. If the definition of optionality is that, for a given set of inputs to an abstract operation, the optional behavior is fixed—not that for step 10.2.9 5.a.i you always have to go one way or the other.

MM: Right. Right.

KM: That would work.

MM: Good. Good. The reason I agree about how to interpret even the current spec language is that we've introduced this term "normative optional", and we have gathered together the things that are normative optional, which are the ones where you either do it or you don't.

KM: Right. Right. That makes sense to me.

RPR: So, if you want to summarize—we have a little bit of flexibility if you want to open up any more discussion. But—

### Speaker's Summary of Key Points

There was some confusion as to whether an optional behavior was picked once for a given line in the spec.
There was some discussion around what "normative optional" means, and there seemed to be agreement around the idea that optional behavior should be a fixed function of the abstract operation's inputs.

### Conclusion

The resolution reached was that picking once per spec line was not the intended meaning of normative optional. Given this, there's no change to the spec needed to accommodate JSC's behavior here. Additionally, we may want to revisit this in the future, enumerate all existing legacy getters, and exempt them in an Annex B addition.

RPR: And MM says good, I withdraw my ask for more information.

KM: Great. All right. Thanks, everybody.

## Spec/implementations divergence on module evaluation promises settlement order

Presenter: Nicolò Ribaudo (NRO)

* [slides](https://docs.google.com/presentation/d/1g3JGIazNuA1Tuk35t_M4qxJD8ajNsYiTcWpTrXQnEns)

NRO: Okay. So I was writing tests for some uncovered module behavior, and I found a case where most implementations don't actually match what the spec requires—and, in fact, the browsers differ among themselves. Here I'm talking about the promises you get when you dynamically import a module. The module has some await; at some point the imported module finishes executing and the promise settles. The three major browsers have different behaviors, and different orders, for the fulfillment and rejection cases.

NRO: I have this test case here; you can check it in the repo. There are two more test cases; however, those two additional cases rely on behavior that is not defined by 262—it is host-defined. So for the purposes of this presentation I'm only going to go through the one that is under our control.

NRO: This is it. I have two modules, A and B. A imports B, and B has to await something external. First there is a dynamic import of B. It goes to the host; at some point the host comes back to 262 with the module. There is some back and forth between the host and 262 to load the dependencies, if any, and then the module starts evaluating. There are ways within 262 to tell exactly when a module starts evaluating, by looking at its dependencies. After B starts evaluating, we do a second dynamic import, of A. Again there is some back and forth between the host and 262, and we can tell when A starts evaluating.

NRO: What's important here is that P1 was not settled yet when we started importing A. So at this point B is waiting on P1 to settle, and A is also waiting on the same promise, because A went through its dependencies and saw that one of them is currently pending. And then we settle P1: we either fulfill it or reject it. When that happens—if we fulfill P1, then B will finish evaluating and we move to A. In this case A contains no code of its own, so A will also fulfill. And if we reject P1, both A and B will be rejected immediately: you cannot try/catch an import, so there is no further evaluation happening in A. The test case here is checking in which order the two promises are settled.

NRO: So this is the spec, in the fulfillment case. When everything goes well, first the promise fulfills and B finishes executing; we call AsyncModuleExecutionFulfilled, which fulfills A and B. The first thing this does when invoked with module B is to fulfill the promise relative to B. Then, in step 12, we recurse through the modules depending on B—in this case A—and execute them.
And those executions will call AsyncModuleExecutionFulfilled and resolve the promises corresponding to those modules. In the rejection case, we do the opposite: first, in step 9, we recurse through the parent records, and only after that, in step 10, do we reject the promise corresponding to B itself. It happens the other way around.

NRO: I tested different platforms: browsers, server runtimes, and the CLIs of various engines. These are the results we have. In the test case as explained, with the difference between the fulfillment and rejection cases: when P1 is fulfilled, ECMA-262 requires that we first fulfill the promise corresponding to B and then the one corresponding to A; when P1 is rejected, it is first the promise for A and then the one for B. Firefox and SpiderMonkey match the spec behavior. Chrome, Deno, and XS, in both the fulfillment and rejection cases, first settle the promise for B and then the one for A. And Safari, Bun, and JSC do the opposite: they always settle the promise for A and then the promise for B. So we have a case in which, well, everybody is doing something different. My opinion here is that the current fulfillment behavior makes sense, because evaluation continues bottom-up: when the bottom module is done evaluating, the parent starts evaluating, so it just makes sense that fulfillment follows the same direction.

NRO: For errors, the error propagates at the same time across the whole graph; there is no observable user code running between the module erroring and the parent receiving the error, so either order is fine. However, it would be great if fulfillment and rejection behaved the same way, so I have a slight preference for changing the rejection behavior to always settle the child first and then the parent. The proposed change updates the spec to do what Chrome, Deno, and XS already do; technically, the way this would happen is to just swap two steps in AsyncModuleExecutionRejected.

NRO: There is an alternative approach. I added this slide just a couple of days ago, after talking with RPR. He pointed out that what we do here doesn’t really matter much, because it is very difficult to actually write a test case where the order is guaranteed by 262: it is very much tied to I/O and modules and host-dependent behavior, so it is very difficult for anybody to actually rely on this order. So we could also update the spec to allow these promises to settle in any order, which basically means accepting that all browsers behave differently and just saying so in the spec, with something like the optionality wording discussed earlier in the meeting, although in this case an implementation could actually toss a coin every time. So yeah. Whatever behavior we decide on, I will make sure that we have test262 tests for this, and maybe web platform tests, because there is a lot of divergence between what engines do and what the browsers that embed those engines do. Again, this behavior is very much at the edge between the host spec and 262, rather than being entirely internal to 262, so the exact split in actual implementations might put the API layer in a slightly different place. If we decide that we want to align on a behavior, then please make sure to actually update your browser to do it, and I will open a pull request for the spec.
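For reference, a minimal sketch of the kind of test case being described (the module names, the external promise, and the logging helper are illustrative, not the actual test262 test):

```js
// b.mjs: suspends on an await until the test settles externalPromise
await externalPromise;

// a.mjs: imports B and contains no other code
import "./b.mjs";

// test.mjs
const pb = import("./b.mjs");
// ...once b.mjs has started evaluating and is suspended on its await:
const pa = import("./a.mjs");
pb.then(() => log("B fulfilled"), () => log("B rejected"));
pa.then(() => log("A fulfilled"), () => log("A rejected"));
// After externalPromise settles, the relative order of the "A" and "B"
// logs is what differs between engines, and, per the current spec text,
// between the fulfillment and rejection cases.
```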
NRO: I am ready to do that once we decide whether to keep the current behavior or to move to the proposed deterministic behavior. If we want nondeterministic behavior, I need to do more research on how we would write the spec for that. So yeah, this is it. Is there anything in the queue?

DLM: For obvious reasons I would say that we have a preference for keeping the spec as is, since we’re already compliant with it. It doesn’t sound like we would hit web compatibility problems, since every browser is doing something different here. But my weak preference would be to keep things as is.

MM: Yeah, I just—I would really object to anything nondeterministic, or to introducing optionality here. My preference is the same as NRO’s, which is the bottom-up order, but I can see from the table why it might be harder to get implementer consensus on that. I’m okay with the status quo as far as consensus is concerned, even though it creates some implementer burden. That’s it.

OFR: I just wanted to add a data point. I looked into this in Chrome, and actually it is not that we got the spec wrong; there is a specific bug that causes this behavior. The intention was to follow the spec, and given the nature of the bug I wouldn’t be surprised if you could write another test case where the resolution actually follows the spec. So I would not bet that we’re always doing this one thing. We could actually fix it on our side and follow what the spec says.

NRO: Yeah. Actually I had two other test cases not included in the presentation, because they rely on HTML-defined behavior, and I found cases where Chrome actually does not follow the spec, but in a different way. It is possible that, depending on the exact order of things going on, Chrome has different behaviors. But yeah, I will skip that slide here, because it relies on HTML behavior. So, okay. It sounds like the nondeterministic approach is not going to go anywhere. I heard from Firefox; from the Chrome point of view, it sounds like you would also prefer the current behavior because it is what you actually implement, except for the bug. Is that correct?

OFR: I don’t think we have a preference in that sense. I just wanted to say: don’t take the behavior you observed as meaning we always do it this way, and thus deduce that it would be easy to switch to the other way of evaluation. That’s all I wanted. We don’t have a strong position either way.

NRO: Okay. So, given that there are no strong positions either way: I actually had a slide for a temp check here, about making the change where we align the rejection and the fulfillment order. I would like to run that temp check. If the results are overwhelmingly positive, say 80% in favor of changing, I would ask for consensus to change. Otherwise I would be happy with keeping things as they are; it is less effort, and one browser already matches the spec.

CDA: Okay. So, when we do temp checks we have kind of hardcoded values for what each reaction is supposed to mean. Do you want to take a look at those real quick and see if they are suitable? If not, you can redefine them.

NRO: Yeah, I think they are fine, given that I’m not trying to do some sort of vote here. Let’s just look at the results, and then we will see about consensus, using the descriptions that are already on the poll.

CDA: Just a reminder, unless they fixed this in the new fork of TCQ, which I don’t think they have: if you don’t have TCQ open already and then open it, you will not see the temp check.
So does anyone need to load TCQ to chime in on this who doesn’t already have it loaded? I’m not hearing anything. All right, I guess we will give it a minute for people to—

EAO: Sorry. Could you quickly clarify what “positive” means in this case?

NRO: If I see many positive reactions, I will ask for consensus on changing the spec to basically do what I proposed.

EAO: Got it. So positive is for changing the spec.

![Temperature Check | Strong Positive ❤️: 7, Positive 👍: 6, Following 👀: 2, Confused ❓: 0, Indifferent 🤷: 2, Unconvinced 😕: 1][image2]

NRO: Okay. So it seems like the result is somewhere between strongly positive and positive, so I will now ask for consensus on the change. But I’m very happy, if anybody finds a problem or something more complex than expected, to come back to this in the future. So yeah. Do we have consensus on, let me find the slide with the spec change, on swapping steps 9 and 10 here in AsyncModuleExecutionRejected, to align the rejection order with the fulfillment order? I see that the poll is gone. Did anyone take a screenshot for the notes?

CDA: Yes, yes, that’s been pasted in the notes as well. MM has a +1. Nothing else on the queue.

NRO: Okay. Thank you.

CDA: There’s also a +1 from DJM. And a +1 from WH.

NRO: Okay. Thank you. I will go to 262 and open a pull request. So, to clarify, the pull request is not open yet, but I will not need to come back for consensus after opening it.

RPR: Do you want to summarize?

### Speaker's Summary of Key Points

The three browsers have three different behaviors when it comes to the order of async module evaluation promise settlement. The spec has two different orders for fulfillment and rejection. I presented three options: the first is to just keep the spec as is; the second is to update the spec to change the rejection order to match the existing fulfillment order; and the third is to update the spec to allow both behaviors.

### Conclusion

We’ve heard strong pushback on the third option. Given that SpiderMonkey has the current spec behavior, they would prefer not to change, but they would be okay with it if the committee as a whole decides to. We had a temperature check with large support for the second approach, i.e. changing the rejection order to match the existing fulfillment order, and the conclusion is that we have consensus on aligning the rejection order with the fulfillment order.

## TypedArray copyWithin inconsistently responds to midstream shrinking

Presenter: Richard Gibson (RGN)

* [proposal](https://github.com/tc39/ecma262/pull/3619)
* no slides

RGN: (presenting copyWithin spec text) I’m going to walk through first the spec, then describe the problem, and then the proposed solution. It pertains to TypedArray copyWithin, which, to refresh everyone’s memory, is designed for copying a range of elements from a TypedArray to another index in the same TypedArray by means of manipulating its backing `ArrayBuffer`. We’ve got an algorithm that starts on step 3 by capturing the current length, and then processes arguments, which, as you can see, might run user code, including a resize of the `ArrayBuffer` backing this TypedArray. It then uses the initially-read length to clamp the count of values to be copied, based on the requested range and the earlier-captured length. On step 17, when that clamped count is greater than zero, we enter the block that actually does work. It is independent of any further user code.
But because the preceding steps might have modified length, step 17.e updates length based on the current state of the backing buffer, and then the copying itself follows. We get the bufferByteLimit (the index in the buffer corresponding with the end of the TypedArray) based on the current length. Then fromByteIndex is derived from startIndex, countBytes is derived from count, and steps 17.l and 17.m set the value of direction, where 17.m defaults it to 1 for forward copying. The exception is the case where fromIndex is less than toIndex and toIndex is less than fromIndex plus count: the scenario where we’re copying a range forward such that it overlaps itself. In that case, we have to be very careful to avoid having the byte-by-byte copying of step 17.n pick up values previously written by the loop itself, rather than from the original contents. We want to copy in such a way that we’re not going to get any feedback from prior copying, and because copying is specified byte-by-byte, the best way to do that, and the way the spec chooses, is this direction value. So the ordinary direction will be 1, copying forward; but in this overlap case, direction is -1, copying backward after updating both fromByteIndex and toByteIndex to reference the end of the respective “from” and “to” ranges. And then in step 17.n, as long as countBytes is nonzero, we attempt a single-byte copy. But when either fromByteIndex or toByteIndex is not less than bufferByteLimit, the loop is broken by setting countBytes to 0—and remember that step 17.h defined bufferByteLimit from the current length. So when we are iterating forward, such a break happens only after copying the longest possible prefix. But when the direction is negative, we actually encounter the condition in the very first iteration and copy nothing at all. That means that for any implementation conforming with this spec, there will be an observable difference between copying forward vs. copying backward in those scenarios where user code shrinks the array buffer into either the source or destination range.

RGN: And now, moving on to the reported issue #3618, we see precisely that, but only in one implementation. What we’ve got here is precisely that midstream resize: we call copyWithin with an end value whose valueOf method shrinks the backing buffer. With forward iteration, implementations always agree with the spec, taking the largest prefix that remains in both ranges.

RGN: When copying by reverse iteration, web implementations still take the largest prefix that remains in both ranges, but the spec calls for aborting entirely when that prefix is incomplete, and LibJS stands alone in conforming to that behavior.

RGN: I believe that the dominant implementation reality is the better behavior: the detail of whether iteration is forward or reverse shouldn't result in observable effects. So the pull request that I have up, #3619, specifies precisely that. After getting the new length of the array buffer, when we’re committed to doing some copying, we reclamp the count value based on that updated length. And then, rather than breaking out of iteration when we’re at or beyond bufferByteLimit, we instead trust that count value and always continue until it reaches zero. The reclamping ensures that the ranges fully exist in the array buffer, and no user code can execute in this block, so it can’t change out from underneath us again.
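A minimal sketch of that midstream-shrink scenario (the values are illustrative, not those of the actual test; requires resizable `ArrayBuffer` support):

```js
const rab = new ArrayBuffer(8, { maxByteLength: 8 });
const ta = new Uint8Array(rab);
ta.set([0, 1, 2, 3, 4, 5, 6, 7]);

const end = {
  valueOf() {
    rab.resize(4); // shrink the buffer while copyWithin is coercing arguments
    return 8;
  },
};

// fromIndex = 1, toIndex = 3, count = 5: the ranges overlap, so copying
// runs backward. Per the current spec this copies nothing at all; the
// major web engines instead copy the longest prefix surviving the shrink.
ta.copyWithin(3, 1, end);
```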
RGN: Essentially, we use that knowledge to determine the longest prefix, initialize everything from that, and then enter the loop with identical behavior regardless of whether we are moving forward or backward. With that, I’m ready to hit the queue and then request consensus on the change itself.

MM: I wish I had noticed this earlier. For normal `ArrayBuffer`s, I like your analysis and agree with everything. My question is: does this issue arise when you have a typed array on a shared array buffer?

RGN: A shared array buffer can only grow. It can never shrink.

MM: It can never shrink. Okay. Can there still be an issue about what is observable? I mean, algorithms that an implementation might choose, which are observably equivalent on an array buffer and accomplish our intention here, might be observably nonequivalent on shared array buffers, exactly because of the issue you mentioned about no user code being able to run between this step and that step. With a shared array buffer, arbitrary user code in another agent can run between anything.

RGN: Yeah. Absolutely. And I believe that the spec relating to shared array buffers is clear about their nondeterminism.

MM: Okay. So you don’t think there is an issue that this normative change needs to worry about?

RGN: Oh, right. Yeah, with respect to this algorithm, the details of shared array buffer nondeterminism are encompassed within GetValueFromBuffer and SetValueInBuffer, which I don’t think need to change, because this is still specified byte-by-byte. But the sequentiality of operations is not guaranteed for shared array buffers.

MM: That’s right. Yes. Okay. I’m glad you pointed that out; that is the part I was missing. Okay. Great. I’m in favor.

WH: Just to reply to MM’s point, there is no change in operations in the successful case here—there is no change to the order in which it does things.

MM: Okay. So the effects on the buffer are the same; the only difference is that, you know, the bookkeeping variables in the algorithm are private to each agent?

WH: The normative change is that the current spec gives up in some cases, and we don’t want to have it give up. If it doesn’t give up, there’s no change in the order in which it mutates the array buffer.

RGN: Right.

MM: Okay. Great.

RPR: On the queue, we have support from DLM, from KM (Keith, did you want to speak?), and from OFR.

KM: I’m sorry, I should have put EOM in mine.

RPR: All right. Lots of support. Are there any objections to making this normative change? No? There are no objections. So congratulations, RGN, you have consensus.

RGN: Thank you. Just a final piece of commentary: I did not get a chance to verify test262 coverage before the meeting, but I will do so as a follow-up and make sure that we are protected against regressions.

RPR: Okay. Thank you. Would you like to briefly summarize what we have gone through?

### Speaker's Summary of Key Points

RGN: We reviewed issue #3618, in which the implementation detail of TypedArray copyWithin iteration direction was required to be observable by the spec, but the supermajority of implementations use a different and preferable behavior.

### Conclusion

RGN: We agreed to the normative change of #3619, which establishes copying the maximal requested prefix in TypedArray copyWithin regardless of iteration direction.
## Missing name property for `%IntlSegmentsPrototype%[%Symbol.iterator%]`

Presenter: Ujjwal Sharma (USA)

* [PR](https://github.com/tc39/ecma402/pull/1015)
* [slides](https://notes.igalia.com/p/Nouz41CnU)

USA: Hello. I’m here again to talk to you about ecma402 PR #1015. This is the new normative PR we wanted to bring to you, and it is the most straightforward one I have presented to the committee. Here it goes. The PR’s name, while quite a mouthful, explains pretty much everything it does: it is adding the missing name property for the `%IntlSegmentsPrototype%[%Symbol.iterator%]` object. This is the `[Symbol.iterator]` method on the prototype of the Segments object you get from an `Intl.Segmenter`; it is what makes Segments iterable.

USA: Basically, this is a single-line change, however normative. It says that the value of the name property of this function is "[Symbol.iterator]". That is the name this change adds. One thing to note: it doesn’t change web reality, but here web reality becomes the spec reality, as SFC says. So yeah, the name is "[Symbol.iterator]", and that is it for this PR. Let’s see if there are any comments whatsoever, and if not, then I’d like to ask for consensus on this change.

CDA: DLM is on the queue with support.

USA: Thank you, DLM.

CDA: Do we have any other feedback for, or support for, this change? We generally like to see at least two voices of explicit support for normative changes. We have a +1 from DJM.

USA: Thank you, DJM.

CDA: Are there any objections or dissenting opinions? Not seeing anything. All right! We do have consensus.

### Summary

Due to an oversight, we were missing a name property for `%IntlSegmentsPrototype%[%Symbol.iterator%]`. Nothing breaks without it, but it should have the name it is supposed to have.

### Conclusion

We achieved consensus to add this property to match web reality, with two explicit votes of approval. Thank you, everyone.

## Freezing the Array Iterator

Presenter: Kevin Gibbons (KG)

* [slides](https://docs.google.com/presentation/d/1SHqRJDEujG3OVOxs9c3CEnXpHdkzyVrz9qUFyaXrQWA/)

KG: Okay. Let me find the tab. Yes. All right. So, something slightly different. This is not quite the same sort of proposal as we usually get. It is a proposal for a normative change to the language, but it is not exactly what I would call a new feature. So I’m going to propose a specific change, but I’m hoping it also opens up a slight reconsideration of how we handle the same kinds of features in the future.

KG: So my thesis statement, which I suspect everyone here can agree on, is that it is currently hard to reason about JavaScript, in large part because users can mess with built-ins. In fact, for my code and the code of many people I’m familiar with, we just punt. If you’re an engine you don’t get to punt, but when writing any other static analysis tool it is fairly common to say: I’m going to assume that no one has replaced `Array.prototype.map`, or that no one has put a numeric getter on `Array.prototype`, or whatever. Once you try to account for that kind of thing, insanity results. Of course, if you’re an engine, you don’t get to do that. So insanity results: engines have a lot of code that exists only for dealing with edge cases and sort of ludicrous situations that no well-behaved or reasonable code would ever run into. My favorite example of this is Safari’s HaveABadTime.
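For illustration, the sort of tampering that has to be assumed possible (hypothetical, and deliberately pathological):

```js
// Any code that runs earlier on the page can do this, so engines and
// analysis tools cannot take the default behaviors for granted:
Array.prototype.map = function () { return "not a map"; };
Object.defineProperty(Array.prototype, 0, {
  get() { return Math.random(); }, // a numeric getter on the prototype
});
```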
KG: When things are sufficiently messed up by users messing with built-ins, the engine is "having a bad time" and various optimizations are disabled. We can’t in general do anything about this. The fact that users can replace built-ins is a core part of the language, necessary for polyfills, and nice to rely on in some cases. However, I don’t think mutability is always the right trade-off. It is a reasonable default for things to be mutable, but I think we should reconsider some specific cases where that trade-off comes out the other way.

KG: And the place that I would like to propose starting, for the committee’s consideration, is: what if we made it so that `Array.prototype[Symbol.iterator]` was not replaceable, and similarly made `%ArrayIterator%.next` not replaceable? I think this would have the result of not breaking much of anything. There might be some web compat issues, which I will get to in a moment, but I don’t think any reasonable code would be broken by this change. In particular, I think basically every user of the language assumes it is frozen already, if you’re not a JS engine developer or one of the people I mentioned earlier. If you are trying to reason about some code, you are going to assume the array iterator is intact, and that spreading or destructuring an array never does anything other than the default behavior. So I think this change would be pretty much totally transparent to users, or in fact bring the language into better alignment with how users think about it. And it would make a lot of code a lot easier to optimize. The example that I have at the bottom of the slide is one of my favorites: at parse time, if you assume that `Array.prototype[Symbol.iterator]` is not replaceable and `%ArrayIterator%.next` is not replaceable, this snippet here (`…(foo ?? [])`) can be replaced with an operation like "spread if not null". You never need to actually manifest the array, even at the baseline tier, because you know that the only things this could possibly be invoking are the actual `Array.prototype[Symbol.iterator]` and the actual array iterator next, and you know exactly what those do for an empty array. If you were able to assume these things were not replaceable, you could optimize out this additional array at parse time. As it is, you can’t: baseline tiers have to manifest this, and even at the higher tiers you usually manifest the array. You can have checks that say, oh, I know everything is intact, and do something a little bit more efficient, but you are almost certainly going to be hitting garbage collection for an array for this snippet if foo is null.

KG: So, I'm proposing to make this part inconsistent with the rest of the language. Is that okay? In my opinion, yes. It is inconsistent, but, what’s that quote, "a foolish consistency is the hobgoblin of little minds", and so on. In this particular case the trade-off does not come out in favor of consistency; the inconsistency is worth it.

KG: I do want to talk about the web compatibility risk. The most serious risk is not people actually replacing `Array.prototype[Symbol.iterator]`; it is more likely that someone replaces `Symbol.iterator` on a specific array. In that case we run into the override mistake, where the frozen property on the prototype prevents you from assigning that property on something derived from the prototype. I have heard inklings that the override mistake might be fixable, which I would be delighted by.
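A quick illustration of the override mistake (hypothetical: today this assignment succeeds, because the property is writable):

```js
"use strict";
const arr = [1, 2, 3];
// If Array.prototype[Symbol.iterator] were a non-writable data property,
// this ordinary assignment on an instance would throw a TypeError, even
// though arr itself is an ordinary mutable object:
arr[Symbol.iterator] = function* () { yield* [4, 5, 6]; };
// Object.defineProperty(arr, Symbol.iterator, { value: ... }) would still
// succeed; defineProperty is not subject to the override mistake.
```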
KG: Anyway, I kind of doubt this comes up very much, but I wanted to mention the possibility. And of course, anyone who is actually going to mess with the array iterator would be broken by this; that might be the case for polyfills. I haven’t actually confirmed this is web compatible, partly because I wanted to bring up the general idea first: even if we can’t do this specific change, I think that on a case-by-case basis we should reconsider existing parts of the language and, more importantly, parts of the language going forward, where the trade-off of having everything always be mutable just isn’t worth it.

KG: So, the first point on this slide is important to keep in mind, especially for new features: anything that is frozen in this way cannot be polyfilled, and further changes to it cannot be polyfilled. If we added an additional parameter to `Array.prototype[Symbol.iterator]`, or more likely to `%ArrayIterator%.next`, the only way a polyfill could implement this on a browser that has not shipped it would be to replace those functions, so this does limit which things can be polyfilled. In this case, it is unlikely we'll make further changes, and so, again, in my opinion the trade-off comes out in favor of making this change, but it is something to consider going forward. I haven’t made a proposal repository yet, because I think it is just a new idea that I wanted to cast out into the world. If people are interested in pursuing this change, I will make a more serious investigation of web compatibility, and a proposal, and spec text, and so on. But I wanted to see if this would get me thrown out of the room before I went to all of that work.

KG: For this specific case, it is possible that there are other changes we could make which would be similarly valuable but take a different form, so that they are more likely to be web compatible. For example, we could have the syntax or the built-ins check IsArray and then skip the iteration protocol entirely, which would be equivalent to having the iterator frozen, except for manual iteration. So this proposal, if it becomes a proposal, will not necessarily take exactly the form of freezing `Array.prototype[Symbol.iterator]`; we might do other things in that direction. But at this phase, this is not a proposal, just a direction we might take in future designs. That’s all I got. Let’s get to the queue, I guess.

MM: First of all, let me express my appreciation for bringing to the committee an exploratory topic where you don’t have a particular thing you’re trying to advance right now. The issue this particular exploratory topic raises is exactly the kind of issue, central to JavaScript, that we need to be discussing and figuring out a way forward on. All that said, I think that the inconsistency costs are high here. And I think that the mutability costs, on implementations and on static analysis, are both, as you say, very high, but they are high across the existing language, everywhere. High-speed engines already do an optimization where they check whether various things are the original bindings and, if so, do something higher-speed; that involves a cache-invalidation check.
What they could do, and I don’t know if any engines actually do it, is a further check of whether a property is a non-writable, non-configurable data property, in which case they know that its value is the original value, at which point they can enable the optimizations without a cache-invalidation check. And in fact, the XS engine, in the standard configuration where they’re targeting embedded systems, where ROM is much, much cheaper than RAM, ships in the hardened JavaScript configuration in which all of the primordials are already frozen, and therefore both the code and the implementation can take advantage of what they know cannot be changed. Likewise, at Agoric, we built in userland the primitives harden, which is a transitive freeze by property walk, and lockdown, which hardens all of the primordials to create the mode in which we’re operating. That mode switch, if moved into the language, could also bundle in fixing the override mistake, if we can’t get the override mistake simply fixed across the whole language. And then all of these optimizations would follow. Furthermore, at Agoric we make a lot of use of the fact that polyfills and customizations can happen before lockdown, so there’s this sort of two-phase execution, which is also mirrored in a somewhat different way in XS, where you can run polyfills and customizations and then lock the environment down. That gets much of the best of both worlds. So I’m really unhappy with the idea of doing ad-hoc fixes on a case-by-case basis, because you’re creating a least-surprise nightmare for the programmer: they just cannot remember which ones are this way and which ones are not.

KG: So you brought up a bunch of points, and I have a bunch of things to say about them. One of the first things you said: the costs of mutability are high, or immutability?

MM: No, the costs of mutability are high.

KG: Oh, okay.

MM: As you state. I think we’re agreed on that. My point is that we’re paying them everywhere else in the existing language.

KG: Yes. So you brought up the fact that engines already optimize similar things. That’s true. You also mentioned the possibility that they could optimize the case where these properties were frozen; I happen to have read the optimizations in a couple of these engines, and they don’t, presumably because no one is freezing them. I perhaps should have been clearer about who this is for. The benefits of this proposal do not accrue particularly to developers; they accrue to users of the web. Because engines, while they do go to some heroic lengths to do some optimizations there, the optimizations are themselves not free, and often are not practical to do at baseline tiers of the engine. So right now, the people who are paying the cost of the mutability are every user of every web page, and the people who are benefiting are the very few web developers who are actually relying on mutability here. So while I agree that the inconsistency is surprising to the small number of developers who are even aware that these things are mutable at all and who want to be able to rely on that, my thesis is that the benefit to users of the web outweighs that by several orders of magnitude.

MM: So I disagree with a lot of that. Let me just zero in on one point. What we repeatedly found is that, for those who would like to freeze the primordials, the thing that prevents them is the override mistake.
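For reference, a minimal sketch of the harden-style transitive freeze MM described above (illustrative only; not Agoric's actual implementation):

```js
// Transitively freeze an object graph by walking own properties and prototypes.
function harden(root) {
  const seen = new Set();
  const stack = [root];
  while (stack.length > 0) {
    const value = stack.pop();
    if (Object(value) !== value || seen.has(value)) continue; // primitives, revisits
    seen.add(value);
    Object.freeze(value);
    stack.push(Object.getPrototypeOf(value));
    for (const key of Reflect.ownKeys(value)) {
      const desc = Object.getOwnPropertyDescriptor(value, key);
      if ("value" in desc) stack.push(desc.value); // data property
      else stack.push(desc.get, desc.set); // accessor functions
    }
  }
  return root;
}
```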
MM: Something that globally froze the primordials in a way that evaded the override mistake, perhaps because it brings a new mode for objects frozen in that way, would also remove the deterrent that has prevented people from freezing. So this is one of the chicken-and-egg things: the override mistake prevents people from freezing, so people don’t freeze; because people don’t freeze, people who probe the web don’t see the pain of not freezing, and they don’t see the pain of the override mistake, because people avoid freezing in order to evade it. So if we fix both, I think that would give many people a much more pleasant JavaScript to program in, as well as a much more pleasant JavaScript for implementers to optimize.

KG: I definitely agree that would be much more pleasant. I think even in a world where we have that functionality available, people wouldn’t use it. Not no one; you would use it. But I wouldn’t use it: my code has to integrate with other code on web pages, so I will never be able to do that kind of thing. That is the case for almost all websites, certainly most websites of any size. There is code on the page that is owned by perhaps a dozen, perhaps several dozen, different teams. For that code to interoperate, none of it can really be taking responsibility for global stuff.

MM: Yeah, I’m not sure how much... I see there is a queue. I’ll just—

CDA: There’s a lot on the queue.

MM: Okay. Okay. So, I will just say, I’ve got a response for that, but for the sake of the queue I will yield.

JHD: My response: the override mistake has pretty much nothing to do with why people don’t freeze. Most people either don’t know or don’t care. I make a lot of engineering decisions in my packages for the purpose of staying robust against built-in modification, and a very vocal group of people thinks I’m a terrible person for caring about that. To them it is fine that applications can break when that happens. These are the types of people who, whenever I freeze anything, say: “if any part of your program is doing that, I don’t care anymore; it is fine that it is broken”. I don’t agree with that stance, but that is a stance a lot of people seem to hold when they learn about this topic.

MM: Okay. I think I need to respond briefly. Node.js introduced a global flag for freezing primordials, and I specifically saw people try to use it and back off because of the override mistake. What we found in our own system, where we mitigate the override mistake for particular cases, is that when we try to use other libraries, the most common incompatibility has been the override mistake. The way you would use an operation to freeze primordials is that the program as a whole, the main effectively, would be making the decision about what kind of environment the rest of the program is in, and then the libraries that it loads would ideally be written so they could work whether the primordials were frozen or not. And except for the override mistake, almost all libraries that we’ve encountered, other than polyfills themselves, do in fact work whether or not the primordials are frozen. It has been a long-recognized best practice not to monkeypatch the primordials, except for shims and polyfills.

KG: Okay. Thanks, Mark. I will put together another presentation on this topic. We have a lot to say to each other that is relevant to the design of the language here.

CDA: We have about 10 minutes left.
MAH is next on the queue.

MAH: Yeah, really quick. My concern here is that we can’t really know what future language evolution will bring, and whether something will require a polyfill in these places or not. A particular example: maybe in the future we will introduce iterators that can go in both directions, and maybe a polyfill might need to replace `Array.prototype[Symbol.iterator]` to that end. Maybe there are other ways, but that is an approach a polyfill might need to take; that is just one example off the top of my head where a polyfill might need to replace what you are suggesting freezing here. So I’m concerned we are basically preventing future language evolution, or making it harder for polyfills to exist, for these cases.

KG: Yep. My response is: yes, that’s correct, I agree. This would limit the future direction of the language. I think it’s still overwhelmingly worth it, because making every page load two microseconds faster at the cost of limiting the design of the language is just, like, overwhelmingly worth it. Even though, yeah, it would limit the design of the language in the future.

JHD: I will just sort of combine mine, to MAH and KG’s point. I’m actually on KG's side in this sense. I’m a very prolific polyfill author, and we don’t limit language design based on polyfillability. And if we can decide that we are forever not going to make changes to a thing, awesome—we ensured there will not be a need to polyfill (until somebody implements it wrong, and polyfills need to fix it, and can’t). An alternative that just occurred to me: it is not really arrays generically causing this, it’s the spread syntax used with arrays.

KG: It is both. Definitely a lot of it is the spread syntax; actually, the most common use of arrays with the iteration protocol is probably destructuring, followed by the spread syntax. But there are a lot of other uses of array iteration: `new Set`, all sorts of things.

JHD: Yes, but we could go for a different kind of inconsistency and address the performance desires without creating mutability questions, by finding places like that. In the same way that `Promise.resolve` has a fast path if the value is actually a Promise: if it’s an Array, don’t bother looking up these methods, you just do the hard-coded thing. Sure, it will still go through the slower path if people are passing a Set into a Set, or spreading a Set, or whatever, but using Arrays like most people always do, it would have the fast path. Is that an approach that has been considered?

KG: Yes. If for whatever reason we can’t proceed with the freezing-the-array-iterator stuff, that would be a good direction to explore.

DLM: When we discussed this, we were definitely favorable about it. We think it is worth investigating, as long as it is web compatible. Though I don’t think we would want to be the first ones to investigate web compatibility here.

JRL: In general, total agreement; we should do this. I want to confirm, before we start pursuing this: are we sure that freezing the prototype will actually lead to performance improvements? There are optimizations we can do in the situations you highlighted here, with the spread syntax and defaulting, but are there other cases where this will vastly improve iteration, in particular for-of, or `Array.from`, or other cases?

KG: That’s a good question, and I can’t know for sure without being able to speak for an engine.
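A rough sketch of the fast-path idea under discussion, in the spirit of JHD's `Promise.resolve` analogy (everything here is hypothetical; real engines implement this natively, with invalidation bits rather than per-call checks):

```js
// Stand-ins for the engine's internal notion of "nobody touched array iteration".
const originalArrayIterator = Array.prototype[Symbol.iterator];
const originalNext = Object.getPrototypeOf([][Symbol.iterator]()).next;

function arrayIterationIsIntact() {
  return (
    Array.prototype[Symbol.iterator] === originalArrayIterator &&
    Object.getPrototypeOf([][Symbol.iterator]()).next === originalNext
  );
}

// Hypothetical self-hosted helper for spreading an iterable into a new list.
function spreadToList(iterable) {
  if (Array.isArray(iterable) && arrayIterationIsIntact()) {
    // Fast path: copy index-by-index, skipping the iterator machinery.
    const result = [];
    for (let i = 0; i < iterable.length; i++) {
      result.push(iterable[i]);
    }
    return result;
  }
  // Slow path: the generic, user-observable iteration protocol.
  const result = [];
  for (const item of iterable) {
    result.push(item);
  }
  return result;
}
```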
KG: But I have poked around the fast paths in engines for iteration, and most of them do have a fast path for when you’re touching an array and everything is intact. I’m confident that this change would make those fast paths both simpler and more reliable, something that could be used at earlier optimization tiers too, because those optimizations aren’t always enabled at the baseline tiers. So yeah, I’m confident it would make a difference in the cases the engines are not currently optimizing. Also, I didn’t emphasize this, but it is worth calling out: most static analysis tools make a point of respecting the fact that built-ins are mutable, and if the language said that these built-ins weren’t mutable, there would be more opportunities for tools like bundlers to optimize around built-ins. Now, that’s kind of a thorny topic; in principle they could just have a switch that says "assume everything is immutable", and it doesn’t need to be a change in the language. But I do want to call out that I think designing the language to be easier to reason about is beneficial not just to engines, but to tools as well.

MAH: There’s something here, I think on one of your slides, about which kinds of optimizations this enables. The parse-time optimization seems to me to be the only such case: if those properties are not born frozen, then you wouldn’t be able to do the parse-time optimizations, but you can only do parse-time optimizations if you know for a fact that the value is an array, and that can only happen if you have an array literal. So is there—

KG: You have a lot of arrays.

MAH: Yeah, you have a lot of array literals. But it really feels like spreading an array literal is roughly the only case where the parse-time optimization would work. Sorry: if these optimizations are part of higher-tier optimizations, why do we have to rely on the properties being frozen from realm-inception time here?

KG: Yeah. So you’re right that, in principle, an engine could have an optimization that improves codegen if these things are frozen. As a descriptive quality of the world, no one implements such an optimization, partly because freezing things is basically never done. From my point of view, that loses almost all of the value of this change, because, for a variety of reasons which I’ll get into in a subsequent presentation, I don’t think almost any website would be doing that. They certainly aren't doing so now. Even if we fixed the override mistake, I don’t think basically any website would actually make that change. So even if the engines did feel motivated to implement the optimizations, it would not benefit the users of the web, because it is not happening. I will get into that in a subsequent presentation.

MAH: If there is no current performance drawback to freezing these, could we socialize with major frameworks and major platforms: please go ahead and eagerly freeze these things, since there is a potential that future engines might want to implement the optimization?

KG: There is a drawback: freezing things is a significant slowdown. In fact, I think freezing things might turn off the optimizations that we talked about. Usually the form of the optimizations is, like, there’s a bit that is "has anyone ever touched ArrayIterator? If someone has touched ArrayIterator, turn this optimization off".
And of course, freezing ArrayIterator is touching it. So there is pretty likely a significant performance hit from doing this right now. And also, separately, web pages are a thing that already exists. Frameworks try to be good citizens by not making global changes. I don’t think we could reasonably ask them to make a change such that, when the framework is included on a page, all of the other code on the page behaves differently, even if it doesn’t matter much in practice.

MAH: Just quickly, to your point that there is a performance penalty: I think there used to be, but I thought engines had fixed that performance penalty recently.

KG: Two separate things. Freezing things was a performance penalty; it might still be. But also, the optimizations rely on no one touching stuff. I’m not actually sure, but I believe that freezing the array iterator would count as touching it, such that it would invalidate the optimizations that are currently present.

CDA: Okay. We are past time.

### Comments in queue

* New topic (no verbal comment needed): +1 from me, and Chrome was good with it in pre-meeting. (Tab Atkins-Bittner, Google)
* New topic: From an IDE/editor point of view this doesn't provide optimizations for static analysis, nor for code completion, because if a user overrides anything, it's possible to add type definitions (.d.ts) for a project or a node module. (Dmitry Makhnev, JetBrains)

KG: Okay. That’s all I had. Thanks for your thoughts, all. I suspect I will have at least an hour-long talk about this at the next meeting, because, Mark, thank you, there is a lot of good meat to dive into here.

CDA: All right. There were a couple of comments in the queue that I copy/pasted very unceremoniously into the notes doc. If folks working on the notes can clean those up quickly, I would appreciate it.

KG: Okay. And then, I don’t have a conclusion, because I wasn’t asking for anything. I will go back and add a summary of what I talked about. Mark, if you could add a brief summary of your points, that would be helpful; I don’t want to mischaracterize you.

MM: Sure, thanks. I’m looking forward to continuing the discussion. Besides the next plenary, feel free to bring this to TG3; I’m sure everyone in TG3 is very interested in discussing it.

KG: Yeah, that’s a good call. I will.

### Summary

KG: Making certain parts of the language frozen might be worth it to make code easier to reason about and optimize. This would make the language less consistent, but it is in my view worth doing. One place to start would be with ArrayIterator, which is very commonly used and rarely replaced, and as such speeding it up would be very valuable. There's also the possibility of making other changes to make array iteration faster without necessarily freezing ArrayIterator specifically.

MM:

* The same performance problem exists all over the language. High-speed engines already implement many “if this value is the original value, then optimize” checks. The existing engine optimization strategies would do as well here.
* Object to “solving” it on an ad-hoc piecemeal basis, as the inconsistencies create an unsolvable least-surprise problem.
* The only further perf benefit would follow from engines additionally testing “isFrozen” or “is a non-configurable, non-writable data property with the original value”, which a) doesn’t require any language change or inconsistency, and b) could be applied everywhere it would help, not just here.
* Chicken-and-egg trap: people don’t freeze prototypes because of the override mistake; fixing the override mistake wouldn’t help because nobody freezes prototypes. (Note disagreement on this point.)
* Fixing the override mistake for the language itself might be web-incompatible. A new operation that implies freezing, like harden or lockdown, could also bundle in fixing the override mistake, breaking the logjam for those who’d like either better defensiveness and/or a promise of higher performance. (It would still be better to fix it for the full language if possible.)
* Aside from the override mistake and polyfills, most code is already compatible with freezing all primordials. (The long-recognized best practice is “don’t monkey-patch primordials”.)
* For embedded systems, where ROM is much cheaper than RAM, XS already freezes all primordials. Likewise, for defensiveness and isolation, Hardened JS already freezes all primordials.
* Empirical question: what is the perf benefit of this proposal vs. testing whether the property in question is “frozen”? Either would require new implementation work.

### Conclusion

No conclusion

## Proposed code of conduct addition: "write your own comments"

Presenter: Kevin Gibbons (KG)

* [slides](https://docs.google.com/presentation/d/1bGwg5fEYa_q65-o-qr8nbSZs5FuDHEeawTf_xGVwk4w/edit)

KG: So this is not a proposed change to the language, but a proposed change to the code of conduct, or potentially some other document. I presented this as a change to the code of conduct, which is where I personally believe it goes, but there might be other opinions; this might be a change somewhere else.

KG: All right. So what’s this about? I think we can generally agree that submitting a comment or a post or a contribution to any forum under your name which you didn’t write would be considered bad conduct. I think this is broadly agreeable. I hold that this is the case even if the comment was written by an LLM, which is to say by a tool you’re using, and not by you.

KG: So, as someone who participates in a lot of our forums, including notably the discourse: people have been writing a lot of LLM-authored comments. This is understandable: as the tools get more coherent, people who are enthusiastic about them, and who are maybe a little bit less confident in their own writing, are increasingly inclined to just let the LLM write a comment and have it make the case for something they would like to argue for. And this isn’t currently something that we forbid in the code of conduct. So, while I have been telling people on an individual basis, "hey, I think this is kind of rude, please write your own comments", nothing we currently have covers this. I personally don’t actually want to read comments written by an LLM in basically any context. But I should say, I’m talking only about prose here; I think code is its own thing. I don’t want to read comments that an LLM wrote, and I think that people who are doing this are pretty much wasting your time. To a first approximation, all outputs of an LLM are generated by a human typing their actual idea into a tool, after which the human copy/pastes the tool’s output into some other forum, and I would prefer that the human just put the thing they typed into the LLM into that forum directly. If the thing they were writing is not sufficiently developed to put into the forum, then I don’t think it is sufficiently developed after running through an LLM either.
I think that people who are submitting these comments, even if they have a disclaimer saying an LLM wrote it, which they almost never do, are pretty much wasting people’s time.

KG: And I think that, in general, bad conduct should be forbidden by the code of conduct; I think that is what it is for. More generally, I think this is about how we interact with each other, for the broad value of "we" that includes all members of the JavaScript community. It is not really about what tools you’re using. I think using an LLM to talk something through or to prototype something is pretty reasonable; they are wonderful rubber ducks, and sometimes they can even come up with ideas you would not have thought of. I’m not an LLM hater; I quite like LLMs. So this is not supposed to govern what tools you’re using; this is about how we are interacting with each other.

KG: And so I have a concrete proposal for an addition to the code of conduct, which is the two paragraphs you see here. I don’t like reading off of slides, but because I think this is important, I'm going to read them out loud right now.

KG: "Any contributions or comments must be your own writing, not the product of LLMs or other tools. Do not prompt an LLM to expand on or explain your idea and then post the output; just provide the idea itself. Machine translation is permissible, including translation by an LLM, but your use of translation should not introduce any new content." End of quote.

KG: There are a lot of people who don’t speak English and only interact through translation tools. While there is something to be said for having people post in their native language and asking readers to do the translation, and in fact there are several things one can say in favor of that, I think in practice having the forums be in the one language that most readers are participating in is really valuable, in part just for things like being able to search for topics.

KG: Yeah, so this is my proposal. I have tried to write it in a way that doesn’t denigrate the use of LLMs; as I say, I actually quite like them myself. This governs, you know, what you are actually posting for other people to read.

KG: Before I get to the request for consensus, it is worth mentioning the policies some other standards bodies have taken. I think the ISO policy is probably the most relevant, since it possibly already governs us. It has a bunch of text, but a lot of it is about ways to productively use LLMs, which I don’t think we necessarily need to cover. The most relevant bit for us is this snippet: "don’t use images or text created by generative AI in any ISO content, either internally or externally". I don’t know if "ISO content" covers things like discourse. But because our standards go through the ISO fast track, it is possible this basically governs us already. That said, even if it does, I think it is worth writing down ourselves. And then the ACM also has a policy, which says it is permitted but must be disclosed, and then says a bunch of other stuff. Personally I don’t think that goes far enough; like I said, I don’t want to read LLM outputs even if they are labeled as such. So yeah, this is my suggestion for an addition to the code of conduct.

JHD: Yeah. So I am super on board with establishing norms that people shouldn’t use generative AI for their contributions and comments.
And I say "generative" in the sense that if you’re just using it for understanding or explanation, go nuts; they are great tools for that stuff. I’m solidly in your camp: I also don’t want to read LLM output pretty much ever, unless I’m talking to the LLM myself. But I have said this before a few times: I think the code of conduct is the wrong place for it. The word "conduct" has a broad meaning, but that’s not really the purpose of the document. I think the part where someone is knowingly or callously or intentionally wasting our time is already covered by the code of conduct, regardless of whether they are using an LLM or not. And the extent to which they are unaware of the collateral damage caused by their tooling choices is not really, to me, a code of conduct violation, but it is something we still want to discourage. Separately, I suspect 100% of the people the policy is for will not actually read it until we link to it by way of explaining why we hid their comment or something. And even translations can change the meanings of things; there’s a point on the queue later about that. So I certainly agree that we shouldn’t be discriminating against people because they don’t speak English or any other language, but I think it would also be fine to say: just post in your native language and we will translate on our end with all of the tools we have available. I have in fact communicated in the JS Chinese interest group; I post English comments and I translate on my end, since I do not understand Mandarin. There are a lot of approaches; I think the logistics can be discussed separately.

KG: I would be interested to hear more about what you think makes something appropriate for the code of conduct or not. It feels to me like this really falls within what I think of as the code of conduct.

JHD: I know this is going in the notes, so, you know, please don’t hold me to this; I have not thought it through in detail. To me the CoC is more about behavior and the social contract. LLM usage is a new enough thing that there is not yet wide agreement about it. There are a lot of things that are widely accepted to be rude or unseemly or unfriendly, and codes of conduct generally try to cover those things (perhaps with definitions expanded from historical ones to be more inclusive). In this case, if we hadn’t wanted somebody to use Google Translate back in the early days, when it didn’t do that great a job, that wouldn’t have been a CoC thing: they are trying to translate their idea, and it’s just “this tool causes problems, please don’t use this tool”. You’re not behaving badly by using the tool, but we want you to use a better one so our time is not wasted. That applies here with LLMs. Maybe in a few years, LLM-generated content will not be distinguishable from human-generated content; at that point, it would not be bad conduct to use the tool, since the readers’ time is not being wasted. Right? In other words, the CoC is about “are you being a jerk?”, not “are you doing something I don’t like?”.

KG: I guess, yeah. I agree that it should broadly be, like, are you being a jerk. But I think that’s a super subjective thing, which is why we have a CoC at all. There is a lot of behavior which many people consider a reasonable way of interacting with each other which is forbidden in the code of conduct, precisely because—the whole reason we have to write it down is that not everyone agrees with it. That is okay. It is okay that people have different conceptions of what is rude or not.
But personally, I think posting LLM output, especially LLM output that isn’t labeled, but even if it is, should be considered rude.

JHD: I agree with you, but there is nowhere near a societal consensus, at most regional scales, that that is the case yet.

KG: Okay. So for you the issue is more that this wouldn’t be writing down an existing norm.

JHD: Yeah. The dust hasn’t settled on the social contract around LLM usage yet. I’m in the same bucket as you when I read LLM content from other people, but I think it is premature to put it in the CoC. But let’s put it somewhere, and link to that place, and tell people not to do this.

KG: Okay. Sounds good. Let’s skip to the queue.

TAB: Yeah. I think this is really good in the code of conduct, actually, specifically using the argument that Jordan was just talking about. The behavior we’re trying to legislate against in a code of conduct is being a jerk in various ways, and there is not a single person posting LLM content who believes they are being a jerk. They think they are doing something meaningful and useful to contribute here. They are extremely wrong; there is nothing to get out of those comments that is worth our time to read. That is part of why we need to make sure this is captured well: this is considered antisocial behavior by the committee. We don’t have to call it being a jerk, but it is antisocial. It is doing things that do not help the committee’s work, and in fact hamper it by causing us to waste time looking at it, in exactly the same way that spam does: more stuff that nobody needs to be reading or should be looking at, and that we want to discourage people from posting, even if they don’t believe or understand at the moment why that would be the case. Just touching on social norms for a bit: things like this are part of what establishes the broad social norms for everybody else. I think it is perfectly fine to, you know, take a bold step sometimes; we can always change our minds in the future if things change. We don’t have to be neutral. Right now, in this moment, we know LLM output is garbage for technical discussion, and we should take that stance, because it is a valuable guard to put in place and it doesn’t stop anyone from usefully contributing: if they want to contribute and have ideas, I would like to hear their ill-posed ideas; many of us got better at this over time. It is perfectly acceptable to read a beginner’s idea and iterate on it to make it good if necessary. I do have a little bit of a bias against LLM translation, just because LLM-based translations are more likely to mutate meaning than other forms of machine translation do, but it is a small enough issue that I don’t care very much about it; I’m happy to leave that out. But the LLM-generated-text part sounds very good. How you phrased things in your proposal sounds pretty great to me; I would be happy with that.

NRO: Yes. I don’t know whether this should be in the code of conduct or not, but we need to have this. I’m not happy to hear people say we need to have this but then not put it anywhere. We need to have it in a place that lets us point to it as motivation. Any delegate who moderates their own proposal repository needs to be able to point to the document and say: hey, I’m hiding your comment because this document gives me the power to do so.
And it doesn’t matter whether it is in the code of conduct or in an "ai-policy.md" file. But it needs to be somewhere explicit that we can rely on.

CDA: I’m next on the queue. And I agree with those comments. My topic is: decouple the guidance from where it lives. I don’t want to get hung up on whether it belongs in the CoC or not. It is a question we need to answer, don’t get me wrong. We’re looking at a slide of a proposed addition, so it can be in its own document, or maybe it goes in the CoC, I don’t know. But if we focus on what we want the content to be, what we want the permissible use of generative AI to be, we can come up with that; and then, once we have that language exactly how we like it, we can decide what is the best place for it to live. Keith?

KM: Yeah. So I guess my question is: I haven’t really tried an LLM for translation, but I assume it does potentially change the meaning of what you’re saying. Maybe this is wrong, so I just want to make sure that we’re okay with that. And I didn’t put this as my topic, but I also suppose: have we considered something like "don’t prompt an LLM to explain your idea and post the output; you are required to go back through the generated output and refine it and distill it"? So it shouldn’t just be the raw output. I can imagine using it to edit your thing; that is a distinction worth making. And if that leaves an artifact that reads like an LLM, is this person going to get banned because they had it check their grammar? Because based on what is stated here, it seems like that would not be permitted under the code of conduct. I don’t know the exact way to phrase it, but that’s the general idea.

KG: Yeah, let me talk about both of those. So for translation, they’re pretty good. They are on par with a not very good human translator. They’re not approaching the level of a skilled human translator, but they’re pretty good. The thing I mainly mean to capture here is: if you just ask it to translate your thing, that’s fine. If you ask it to translate and expand your thing, that’s not fine. I’m not super worried about trying to nit-pick whether something came out of an LLM or not. I think that’s generally not possible. Humans and LLMs have a sufficiently large overlap in their style that you can’t just look for em dashes or whatever. I’m more concerned, and this is part of why I wanted it to go in the code of conduct, with governing what people do: I want people to feel comfortable asking for a translation and not feel comfortable asking for a translation and also refinement.

KG: As to the question of using it to refine your grammar, or whether we say that you’re allowed to post the output if you go through it yourself and clean it up: I don’t like those things. I really don’t like the idea of allowing people to sign off on the LLM’s output and say “I participated, therefore it is good”, because my experience is that, especially people who aren’t super experienced with an area—not all people, but many people—have an absurdly high degree of trust in the LLM’s output. They sort of ramble into it, and the LLM will generate an output, and then they, as a non-expert, will look at the output: “yes, yes, that’s what I was getting at, that is great!”. And it is incoherent in a deeper sense. I really don’t want people to do that. And I don't want the guidelines to suggest that kind of thing is okay.
I do want to capture that it is okay to talk to an LLM. I just don’t want people to post the LLM's output, even if they have gone through and cleaned it up themselves.

KM: Quick thing, sorry. So, do we have thoughts on maybe extending the second section to say that machine translation is permissible and also machine proofreading is permissible? It sounds like that was a consensus thing, so calling it out might be useful. I can imagine reading the other part and concluding: I cannot put my text through an LLM except to translate it.

KG: Yeah. So I’m okay with something in that direction. But what I’m worried about is people taking their text, putting it in an LLM, and saying, you know, “hey, please clean this up for presentation to TC39”. If you do that, it will rewrite it pretty substantially and introduce a bunch of supporting arguments and stuff that you didn’t make. So I really, really want to make sure that we don’t encourage people to do that. Maybe copy-editing—yeah. It is possible that we can find some way of making it clear that specifically having it check your grammar and punctuation is okay, but not having it reword sentences or something.

CDA: Waldemar?

WH: I have a few items here. One is, how would we enforce this? Here’s a potential scenario: I’m one of those people who have been using em dashes in writing for decades. I’m afraid that somebody might report me to the code-of-conduct committee because I used lots of em dashes in a submission and therefore they think I used an AI. How are we enforcing this?

KG: That’s an excellent point. There is not any objective way to do this. The cases that I have encountered have largely been pretty obvious, and also people aren’t usually going out of their way to hide it. I don’t think many people are being malicious about this; they are generally not aware this is something some people consider rude. So what I have been doing is asking people not to post LLM outputs in cases where I’m pretty confident it is what they’re doing. And the responses I tend to get are either “oh, sorry”, or “why are you a hater, LLMs are great”, or whatever—not “I’m not using an LLM”. Perhaps that is an argument it should not be in the code of conduct, just so that enforcement is not something we have to worry about as much. Yeah, I agree this is an issue. I think it is a more serious issue for something like academia, where there are more serious consequences. Here, I think we would just be hiding people’s comments, pretty much.

WH: Yes. My other item is that views differ on what kinds of LLM usage are okay. Cases that frustrate me include when LLMs expand a small amount of text (the prompt) to lots of bland text. I don’t want people to use LLMs as a text multiplier.

WH: Note that, in order to discourage such text multiplication, we should avoid requiring folks to write boilerplate. Lawyers are dealing with that—there is a lot of boilerplate in legal briefings, which encourages everybody to use LLMs. Nowadays lawyers get into trouble for it almost every day. In fact, I saw last week that even U.S. judges are getting into trouble for writing their decisions using LLMs and hallucinating things.

WH: I don’t have as much of an issue with using LLMs for things like grammar cleanup or for things which do not expand the size of the text. So if you want to use an LLM to take a large rambling letter and compress it into a small one—be my guest, that’s fine with me.
I don’t know if we all agree on that. But I want to point out that I think one of the main issues is just having to wade through a lot of bland text rather than text which expresses ideas succinctly.

KG: Yeah, I definitely agree that is the core thing. And maybe something in that direction can be useful for the thing we were talking about on the previous point, about how we clarify that it is okay to check your grammar. Maybe I can come up with something like: it is okay to have it check your grammar, but the output should be basically the same size or smaller than the input; if the output is notably larger than your input, you’re not just having it check your grammar. Or something like that. And as for the specific example of having it compress larger text, I think I still want to rule that out, just because I’m really worried in that case that the LLM will still introduce its own ideas that the human didn’t have. But probably that is a lot less likely in that specific case. So maybe that would be okay.

CDA: I just wanted to reply to the comment about not wanting to be reported for a CoC violation because you’re using em dashes and the like. We’re not going to enforce, you know, prohibitions on usage of certain Unicode characters, and certainly that alone is not enough to suggest somebody has used generative AI for their contributions. And if somebody is submitting dubious violation reports about somebody just because they’re doing that, I mean, that action in and of itself is a code of conduct violation. So I wouldn’t worry too much about the em dashes Word is inserting when you type double dashes manually, or anything of that nature. Mark?

KG: Can we get a time check?

CDA: Yes. We have like a minute and 10 seconds.

KG: Okay. So, we’ve got a couple of things in the queue. Mark says he appreciates the acceptance of translations. I think it sounded like people are broadly supportive of this text existing somewhere. I heard that Jordan doesn’t think it is a good idea for it to be in the code of conduct, and several people didn’t have strong opinions. Personally I lean towards having it in the code of conduct, but it is not super important that it be there precisely. We heard ideas for additional things or clarifications we might add; those could come back at a later meeting, or we could discuss them on GitHub. But I wanted to see if we can get consensus on something we can land now and iterate on further. So a proposal for the committee: take the text that is currently on your screen, put it in a new file called AI_policy.md in the TC39 how-we-work repository, and in the code of conduct have a section that says "see also our policy on use of AI tools", linking to that separate document. That also gives us a good place to expand, to put in clarifications for proofreading and so forth. So I would like to call for consensus for that specific change, unless someone wants to bikeshed it.

JHD: That sounds good to me. I wanted to add also that, for example, GitHub Copilot will look for a document in the .github folder (that will probably cover the whole org), and we can include instructions there to embody the spirit of this. And other AIs may have similar mechanisms that can guide them towards the output we want, or warn the user that this policy applies, linking them to it or something. So I love the idea of putting it in a separate file for now. If one of the future iterations puts it in the CoC, so be it, but we can talk about that separately.
WH: Before it goes final I would like to iterate on the text of it, to make sure using AIs for copy editing and such is permissible.

KG: Okay. Well, I’ll open a PR to how-we-work, and I’ll put the link in the delegates channel when I have it. And maybe we can work on something there. And are people okay with landing it once everyone active in that thread is happy with it, assuming that the only meaningful change between now and then is clarification on explicitly permitting use of LLMs for checking grammar and so forth?

KM (on queue): ok on this and circling back with future feedback (eom)

CDA: My suggestion would be to create a PR with the text, and then the folks from that thread, and everybody here who is interested now or in the future, can iterate on what the text should be with review comments and all of that. Does that seem reasonable?

KG: Yep. Mostly I don’t want to have to come back to committee with a second thing.

WH: It would be nice to let all of us know what we decided on.

KG: Okay. That’s a good point. I will do that. So, some bikeshedding, but we are in favor of the idea.

### Summary

KG argues that posting LLM outputs for other people to read is something which should be forbidden by the code of conduct, on the grounds that this is mostly a waste of the reader's time. ISO policy may or may not govern, and generally forbids use of LLM outputs in ISO materials. JHD agrees with the sentiment but does not want this in the code of conduct specifically. TAB thinks it should be in the code of conduct. WH wants to ensure that use of LLMs for checking grammar is acceptable, and raises the point that the main concern is when LLMs expand a small amount of text into a large amount of text.

### Conclusion

Consensus for having this included, but not in the code of conduct, at least for now; tentatively in an AI_policy.md which is linked from the code of conduct. There’s some bikeshedding around the precise wording—in particular, people want to make sure that use of LLMs for cleaning up grammar and so forth is permitted.

## `Math.sumPrecise` for Stage 4

Presenter: Kevin Gibbons (KG)

* [proposal](https://github.com/tc39/proposal-math-sum/)
* [spec PR](https://github.com/tc39/ecma262/pull/3654)

KG: So, `sumPrecise`, stage 4. I don’t have slides for this proposal because at stage 4 everything is done; this is not a request for further feedback. But just a reminder: it is a proposal to include a method called `Math.sumPrecise`. You give it numbers and it gives you the sum—the most precise sum possible, given that both inputs and outputs are floating point numbers. It is specified as if you were doing arithmetic on real numbers, but obviously implementations don’t work that way.

KG: Speaking of implementations, there are two, in stable Safari and stable Firefox. At least I think it is stable. V8 has not shipped; I don’t think they have implemented. But the requirement is two implementations.

KG: There is a PR to the specification, which the editors haven’t all signed off on. I’m one of them, and I’m happy with it. It is pretty straightforward. It is the same text that was there previously at the earlier stages.

KG: So, I believe the requirements for stage 4 have been met, or the requirements as we typically interpret them. So I would like to ask for stage 4 for this proposal.

WH: I support stage 4.

CDA: We have a plus one from WH. Also from DLM. Also from Dimitri. Keith is on the queue.
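(For the notes, an illustrative sketch of the behavior KG describes; the values are invented:)

```js
// Pairwise addition accumulates floating-point rounding error:
new Array(10).fill(0.1).reduce((a, b) => a + b);
// → 0.9999999999999999

// Math.sumPrecise computes the sum as if over the reals,
// rounding only once at the end:
Math.sumPrecise(new Array(10).fill(0.1));
// → 1
```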
KM: I think technically we have not shipped this in a stable release. But I’m fairly sure—it could be that the notes were wrong? It might have been a mistake in the notes; I think that was just the implementation of it rather than actually shipping it, which probably should have been clarified. So, our bad. The patch to enable it by default landed on July 24th. So—

KG: Yeah. Okay.

KM: But we have not shipped it yet. But yeah, it is implemented. So I think it is fine.

KG: I don’t think we expect any compatibility issues.

CDA: On the queue, enthusiastic support of plus four.

KG: Stage four, I take it.

CDA: That’s an "EOM". So we may never know what the true interpretation is supposed to be.

KG: Okay. Is anyone from Chrome capable of speaking on this? Since they are the ones who have not yet implemented?

OFR: Yeah, I can. Certainly no objections. It is on our backlog, basically.

KG: Yeah. Okay. Sounds like we have consensus then. Thanks very much.

CDA: Thank you.

### Speaker's Summary of Key Points

* The `Math.sumPrecise` proposal is shipping (or almost shipping) in Safari and Firefox and has an open PR to the specification, and as such meets the requirements for stage 4. Chrome does not object to it going to stage 4 prior to their implementation.

### Conclusion

Proposal has stage 4.

## [Temporal](https://github.com/tc39/proposal-temporal) normative PR

Presenter: Nicolò Ribaudo (NRO)

* [proposal](https://github.com/tc39/proposal-temporal)
* [slides](https://docs.google.com/presentation/d/1xaHux5EvR9zXQWnnr0r76ffCdDejmy383XrkrmSPStk/edit?slide=id.p#slide=id.p)

NRO: Hi. Yeah, I’m presenting for PFC, because he couldn’t be here at this meeting.

NRO: So, a Temporal normative change. There is one open pull request that we’re asking consensus for. It was opened based on some feedback from V8, which is implementing the proposal. You may know that V8 is separating the actual Temporal logic from the JavaScript interface by using a Rust implementation for the logic, and this causes difficulties with the way the proposal is currently specified. Specifically, some Temporal methods—for example, all of the `until` methods and, I believe, `Duration.prototype.toString`, but you can check the pull request—currently read options from an options bag. The way it works: in alphabetical order, we get each property from the object and cast it to the proper type. By cast, I mean cast to a JavaScript primitive, for example casting to a string. But some of these options have enum-like values; for example, there are a bunch of options that accept a unit string like "seconds" or "days". And as part of casting we also check which enum value corresponds to the string, and do some validation of the string.

NRO: The pull request changes this get-cast-validate pattern: first do all of the gets and casts, and then do the validation at the end. Note that there is already some validation happening at the end—validation that requires putting together multiple option values to check whether they are correct. The observable effect of this change is that in some cases an error might be delayed. So you may get a different error, or you may see a getter triggered that previously would not have been triggered, because the error would have been thrown first.

NRO: Here is a concrete example of the behavior before and after the pull request.
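(A hedged reconstruction of the kind of example being described; the exact slide code is not in the notes:)

```js
const d1 = Temporal.PlainDate.from("2025-07-28");
const d2 = Temporal.PlainDate.from("2025-07-29");

d1.until(d2, {
  get largestUnit() { console.log("get largestUnit"); return "seconds"; },
  get smallestUnit() { console.log("get smallestUnit"); return "days"; },
});
// Before the change: "get largestUnit" logs, then a RangeError is thrown
// ("seconds" is not valid for PlainDate) before smallestUnit is read.
// After the change: both getters run, and the RangeError is thrown only
// during the validation step at the end.
```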
NRO: We have this PlainDate API, and we pass a `largestUnit` option. Before the change, what it was doing is getting `largestUnit`, casting it to the unit enum, and then checking the unit and throwing at that point, because "seconds" is not a valid unit for PlainDate; PlainDate only works with date units. With the change, we would get `largestUnit` and cast it to a unit ("seconds" is a valid unit in general), then do the same for the next option, and only then check that the units are valid for this specific method. This is one example; in the pull request there are a couple of methods with this change. Any questions? I see the queue is empty.

CDA: Keith?

KM: Yeah. So I guess, if we’re going to change the order—I haven’t looked at Temporal too much, so maybe I’m just not understanding something—why not do the casts after all of the gets? Do all of your gets, load the properties, and then cast them to the units. Or do the casts of the unit invoke other JavaScript, like on the string?

NRO: No—casting, for example, invokes `ToString`.

KM: And then when it is casting, checking it is a valid unit, it is presumably checking against some list.

NRO: Yes. That is correct.

KM: Yeah, I don’t really care too much; I guess I’m trying to see whether the "is this a valid unit" check could be in the validation half, rather than the casting half. Is it because it is already broken down that way, and this is a smaller PR?

NRO: If SFC is on the call, feel free to jump in, because you were involved in the pull request. But as I understood it, there are two reasons here. One is that this is just the much narrower change, and requires fewer changes to the proposal than moving all of the validation to a different place. When this issue was first proposed, the champion group understood the proposal to be to move all of the validation, and there was skepticism because of how large that change would be. And the second, more practical reason: this request came specifically from the V8 implementation. What is happening there is that there is the Rust library, which has a unit type, and the JavaScript interface layer has to get the options, cast them to the unit type, and then pass that into the library, with the library checking that it is a valid unit—rather than passing a generic string, so that it doesn’t have to convert the JavaScript string into whatever string type Rust uses; it just passes the unit.

KM: Got you. Okay. Yeah, that’s fine. It makes sense, I guess.

CDA: Okay. No one else on the queue?

NRO: Hey. So if there is nothing else, yeah, I would like to call for consensus for this pull request.

CDA: Support from DLM.

NRO: Okay.

CDA: Any other voices of support for the normative change? Plus one from OFR from V8. All right. You have consensus.

KG: Sorry. Sorry. I should have gotten on the queue just before this.

NRO: Yeah.

KG: I think this is potentially relevant to other proposals that use option bags. I’m about to present one of them. I don’t want to present the whole thing right now. I just—yeah. Keep this topic in mind for the next topic, I guess.

### Speaker's Summary of Key Points

NRO: Okay. Thank you. I have a summary here. We discussed a normative change to the Temporal proposal which slightly changes the order in which some options validation happens, to better align with V8’s implementation needs.

### Conclusion

We have consensus for the change.
## [Uint8Array base64+hex](https://github.com/tc39/proposal-arraybuffer-base64) for Stage 4

Presenter: Kevin Gibbons (KG)

* [proposal](https://github.com/tc39/proposal-arraybuffer-base64)
* [spec PR](https://github.com/tc39/ecma262/pull/3655)

KG: So, similar to the last item, I have another stage 3 proposal that is implemented in JavaScriptCore and in SpiderMonkey. I believe it is shipping, and has been for a minute, in Firefox, although I am not 100% certain. I guess probably not in Safari. And again, not shipping, but in this case it has been implemented in V8, and I know they are planning on shipping to stable in a little over a month. In addition, the specification PR is open, and all of the tests and everything are done.

KG: But before I ask for stage 4, I wanted to talk about the previous topic—or the relationship of this proposal to the previous topic. So this proposal is one of the first things that we would have in Ecma-262 that has an options bag. It depends on whether you count error cause, but that one wasn’t relevant to this topic of how we handle the order of reading properties versus validation. In particular, this proposal now has a `lastChunkHandling` option and an `alphabet` option: `lastChunkHandling` is for decoding, and `alphabet` is for both. The way that I have this specified right now is that we read properties, and then—in this case we are not casting, we are just confirming they are of a specified type; we don’t cast to string in this proposal. We read the `alphabet` option from the bag, then check if it is valid and throw if not, and then read the `lastChunkHandling`. This is how it has been specified for the lifetime of the proposal, in keeping with the general philosophy of reading and validating options interleaved. Also, not in this proposal, but in some other proposals like the `Iterator.zip` proposal, there are some cases where we wouldn’t even read an option unless some other option was present—for `zip` that is the padding: you don’t read the padding unless you’re in the mode that uses the padding.

KG: So, I’m asking for consensus for the proposal as it is, which has some amount of validation—you could consider it casting if you would like—some amount of validation/casting for options interleaved with reading those options. And in light of the topic that we just discussed, I want to make sure that is still what we’re planning to do for this kind of thing going forward. Yeah. So I would like to talk about the options case and then do the stage 4 thing.

NRO: Yeah. So when I first learned about the Temporal change, my first reaction was like, well, we should just have this presentation for setting precedent about how we handle option bags. But then I realized that the Temporal change is much more narrow than a change to how option bags work in general would be. Even with the change, Temporal is still doing a get and throwing if the value is not on the list of units at all; the difference is that in Temporal there are multiple versions of the same function. Like, for the `.until` example I had, there is `PlainDate.until` and `ZonedDateTime.until`. So all of these `until` functions share the same get-and-check-against-the-list validation. And the difference is in the throwing of the error in the case where the unit was not correct: after doing all of this shared validation, there is per-function-specific logic saying that, for this specific version of the `until` function, those units are not valid, and it throws a RangeError about the involved unit.
But the base logic is still the same as what you’re showing here on-screen.

KG: Okay. Great. I’m glad to hear that. I’m comfortable going forward with this as is, in that case. But I guess, yeah, I’ll see.

JHD: Yeah. So if I understood NRO’s comment correctly, then maybe the thing I want is already the case. I’d like all options bags to be the same, in the sense that I want there to be a single abstract operation that everything with options bags uses, except for a finite, never-to-grow list of exceptions. It is fine if error cause is different—it needs to differentiate absent and undefined; it is fine if there is legacy stuff we cannot change, and so on. But whatever logic goes in for base64, I want everything using options bags after this to use that logic if possible—and I say that not knowing what the use cases are. Maybe I will change my opinion on an ad hoc basis, but that is based on all of the proposals I know of coming with options bags. Is that the precedent that you’re intending to set?

KG: To the extent possible, yes. So in particular, there is the `GetOptionsObject` AO, which was previously in Temporal and 402, which we use, and then we do the lookups using Get. And that’s all the same. The difference, I think, is that different proposals will mean a different thing by validation, and Temporal means a particularly complex thing, where there are values that are coherent and values which are incoherent, and separately values that, despite being coherent, are out of range for this particular method—which isn’t something that is relevant here. So I think the broad strokes pattern is: you get the options object, then do a get and check that the value is coherent, then do another get and check that the value is coherent, and so on.

JHD: Right. And validate them as much as possible in that moment, basically. And if you need to do further validation later, that’s fine.

KG: Yeah, basically.

JHD: Okay. Cool, that sounds great to me. Yeah, I would love to see this continue to stage 4, and am still fine with the Temporal change. Awesome.

CDA: Steven?

SHS: Yeah, I’m a little bit less clear about the details of all of this. But no one has brought up the discussion from two months ago about WebIDL, which may be relevant, so I wanted to see if it is.

KG: Yeah. I think if we are doing any sort of WebIDL that specifies types, we are going to have to be careful to ensure that the way types are handled in the IDL matches up with how we are using them in the language. But I think that we want to design the language first and then design the IDL to suit our needs, rather than the other way around.

KM: I think this would fit into the same model: you can think of the valid inhabitants of the types—types loosely speaking here—as enumerations of possible values. Your IDL would say these are the possible values; in the Temporal case it would have had all of the units. You narrow down to the units in the generic way the IDL spits out, and do the method-specific checks afterwards. And here the inhabitants are the base64 alphabets. You’re doing kind of the same thing—an enumeration.

KG: Yeah. That sounds right.

DLM: Yeah, if we’re done talking about options bags, I’ll throw in my support for stage 4.

CDA: Great. We also have support for stage 4 from KM on the queue, as well as MM with a +1,000—that is above our previous magnitude of +4.

KG: Okay.
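(For the notes, a hedged sketch of the options bag under discussion; the option names and values are from the proposal, the input string is invented:)

```js
// alphabet is read first, then validated; then lastChunkHandling is
// read, then validated: get-validate-get-validate, as described above.
Uint8Array.fromBase64("SGVsbG8=", {
  alphabet: "base64",          // or "base64url"
  lastChunkHandling: "strict", // or "loose" or "stop-before-partial"
});
```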
KG: Well, hearing explicit support from several groups and no objections—oh, sorry. I meant to ask, again: I want to confirm that Chrome is okay with this. I assume they are, since they are almost shipping. If someone from Chrome is able to say that they are happy with it?

RMH: Yes. Yes. Yes. We are okay with that. Thank you.

KG: Okay. Thanks very much. All right. Going to take that as stage 4 then. Thanks very much.

### Speaker's Summary of Key Points

* The base64 proposal is in Firefox and Safari and implemented in Chrome, and has an open PR to the spec. The proposal as currently specified does get-validate-get-validate (as opposed to get-get-validate-validate) for options read from options bags; Temporal does something similar but also does more complex validation (that the options are in range, not just coherent) after reading all the options. The committee is happy with the proposal as specified and intends that we follow the pattern in this proposal for future additions which use options bags.

### Conclusion

* Proposal reaches stage 4.

## Iterator Sequencing for Stage 3

Presenter: Michael Ficarra (MF)

* [proposal](https://github.com/tc39/proposal-iterator-sequencing)
* [slides](https://docs.google.com/presentation/d/12zBgwq7qSpf-GXySU0q6t46vTdYAyFxaKW26J8D5IiE)

MF: So, this is iterator sequencing. Remember that iterator sequencing is a stage 2.7 proposal. It defines one new method on the Iterator constructor called `concat` that takes 0 or more iterables and yields all of the things they yield, in order.

MF: The big change since last time: remember, we discussed last time that Mozilla made a couple of requests for trying to make the behavior match `yield*`. We had made a couple of changes in that direction and identified more, and we realized that even if we made more changes, it still wouldn’t allow an implementer to implement this in self-hosted JavaScript using `yield*`. So we decided at the last meeting to back out those changes that were going in the direction of `yield*`, because overall they were negative for the proposal, other than the perceived benefit that we might be able to self-host using `yield*` in JavaScript.

MF: So that’s what we did. We made this pull request, number 26, that backed out the previous pull requests. It simplified some things. Also, the tests kind of landed and unlanded—they shouldn’t have landed, but they were backed out. Now they are the way they should be, so the tests are up to date. I think we are finally ready for stage 3. That’s my whole presentation.

CDA: DLM?

DLM: We support stage 3.

CDA: Do we have any other voices of support for iterator sequencing for stage 3?

MF: It would be great to hear at least a second voice of support.

KG: I support.

CDA: Support from KG.

NRO: I also support.

CDA: Okay.

MM: Support.

CDA: From NRO and MM as well, I heard. Not seeing anything on the queue in the form of objection or dissenting opinions. I believe we have stage 3.

MF: All right. Thank you.

### Speaker's Summary of Key Points

* Proposal is no longer trying to match `yield*` for ease of implementation, as it had undesirable effects on the proposal and also did not succeed in easing implementation.

### Conclusion

* Proposal advanced to Stage 3.

## Upsert for Stage 3

Presenter: Daniel Minor (DLM)

* [proposal](https://github.com/tc39/proposal-upsert)
* [slides](https://docs.google.com/presentation/d/15J_tgYqrh-aPat0klS78BcDVGYJ88HOCC_SBsYkR4QY/)

DLM: Okay. All right.
I guess I’m ready to go, so, presenting on upsert, hopefully for stage 3. A brief reminder of the motivation, or the problem we’re trying to solve: basically, it is when you have a map and you’re not sure if the key is already present, so you’re writing an if statement to pick which behavior to use depending on whether the key is present or not. The proposal’s solution is to add two methods to Map and WeakMap. One is `getOrInsert`, which will search for the key in the map and, if found, return the value; otherwise it will insert the given value into the map and return that. And we have `getOrInsertComputed`, which does more or less the same thing, but it will call a callback function and use the result of calling that callback function.

DLM: So the last time I presented this was in April. Work since then has largely been around testing. We actually started with a nice number of tests that were written by students at the University of Bergen. They were exported to and cleaned up in test262. We then went through the test plan, and new tests were written. These were moved from staging—where we landed the original SpiderMonkey version—into built-ins. So now all of the tests are in place. And with that, I would like to ask for consensus for stage 3.

CDA: Okay. I would just like to state for the record, I’m very much enjoying the… capybara riding the alligator?

WH: It is probably a caiman.

CDA: May the record reflect that it is a caiman.

CDA: We have a plus one from MF. Support from KM. Support from Dimitri. Now, we have a question from Mark.

MM: Yeah, so in looking at the—can you put the API back on the screen?

DLM: Yeah.

MM: So `getOrInsert` clearly would not be a meaningful thing to add to sets. But `getOrInsertComputed` does seem like it would be meaningful. I’m not actually requesting that you add it to Set; I’m just wondering if that was considered and rejected for any particular reason?

(overlapping)

MM: Yeah, could not hear you. There is audio collision.

DLM: Go ahead, Keith.

KM: How would you get the key without building it first? So like, for Set—

MM: You would have to—right. The callback function would simply be a conditional execution, and that answers my question. That is clearly very far from the intent of this. I withdraw the question.

DLM: Okay. Thank you. So, back to the capybara picture: it sounds like we have support. Are there any objections to stage 3?

CDA: Nothing on the queue. I believe you have—sorry, go ahead.

DLM: I was just going to say that, unfortunately, I didn’t prepare my summary or conclusions in advance, but I will put those in the notes right now.

### Summary

Presented the work accomplished since the last presentation in April 2025, which involved merging existing tests into Test262 as well as writing some new tests according to the test plan. Asked for Stage 3.

### Conclusion

Proposal advanced to Stage 3.

## AsyncContext web integration update

Presenter: Andreu Botella (ABO)

* [proposal](https://github.com/tc39/proposal-async-context/)
* [slides](https://docs.google.com/presentation/d/1d64udRyuYplXajTCMQkGrVh2jff464_7D75dZ6pSxiE/edit?usp=sharing)

ABO: So this is a web integration update on AsyncContext. The last time that we covered AsyncContext was in May. As we’ve seen over the course of this whole saga, the web integration of AsyncContext touches a number of web APIs, and they are not trivial.
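(For the notes, a hedged sketch of the core API whose propagation is at issue; `AsyncContext.Variable` and its `run`/`get` methods are from the proposal, and the rest is illustrative:)

```js
const requestId = new AsyncContext.Variable();

requestId.run("req-1", () => {
  // The web-integration question is whether values like this one
  // propagate through web APIs' async callbacks, such as timers:
  setTimeout(() => {
    console.log(requestId.get()); // "req-1", if the context propagates
  }, 0);
});
```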
ABO: And Mozilla’s DOM team opposed the proposal because of the complexity of the web integration, and the possibility that it would introduce memory leaks.

ABO: In the last plenary in May in A Coruña, we brainstormed the web integration, and we came to the conclusion that it should propagate the context as long as it is feasible to integrate in the browser. (I should clarify what I mean by feasible. Something we know is not feasible is ResizeObserver, since being able to propagate the context would require a huge refactor of the browser engine.) And Mozilla did not participate in the discussion, because the relevant folks in the DOM team are not delegates.

ABO: However, the week after the plenary, Igalia organized, also in A Coruña, the Web Engines Hackfest, which some of you attended. This is an event that brings together people working on browser and JS engines, and also server-side runtimes. And this means that the relevant folks from the Mozilla DOM team were at that event. So we had an AsyncContext session with them in the room.

ABO: By the end of the session, they were still concerned about the complexity and the memory leaks of the web integration, but they understood that having such an extensive web integration enables a lot of the use cases identified for the proposal. In particular, reducing the web integration to something minimal would not make sense: it would significantly reduce the use cases of the proposal, such that it would not add a lot of value.

ABO: So now, it is time to analyze the implementation complexity. Some of it comes from APIs that can send messages both to destinations that are in the same agent and cross-agent. In particular, sometimes the sender doesn’t know whether the receiver is in the same agent, and making this safe to send without leaking is not always trivial. So they asked us to investigate the following APIs and how we would implement them: `MessageChannel`, `BroadcastChannel`, `IndexedDB` and `LocalStorage`.

ABO: And we are currently working on a prototype of the implementation of AsyncContext and those APIs in Gecko. It is a simplified version of AsyncContext, because implementing the whole proposal—and, for example, `await`—is explicitly not part of this investigation. We will be publishing a design doc for this to show that this can be implemented safely and without leaks.

ABO: In terms of the memory leak concerns, we have also refined the memory management document that we have in the proposal repo, with better recommendations for the trade-off between memory usage and GC performance—like, should the context be a WeakMap?

ABO: And so, the next steps that we have from here are: to work together with Mozilla to resolve concerns they may have from the prototype and design doc once we have it ready (we were hoping that we could have it before the plenary, but that was not the case); then discuss the shape of the changes to the web specifications with the HTML editors and other stakeholders; then open the relevant PRs to web specifications; and after that, ask for stage 2.7.

ABO: I don’t have a slide with a summary because I was hoping to do that tomorrow. I can probably dictate it. So, well—is there anything on the queue? Or—

MM: Sure, I just want to clarify one thing. Obviously, in doing all of this, you might find surprises.
But right now, the stated expectation, I just want to confirm, is that the thing you will be asking TC39 for stage 2.7 on is the current JavaScript spec text, and that all of these web integration issues are purely on the web standards side; they do not affect the JavaScript standards side at all.

ABO: Yeah. So I guess there is, like I mentioned, the memory management document and the question of whether the AsyncContext map should be a WeakMap. We will be adding a note to the spec to that effect.

MM: Okay.

ABO: There is one open issue, I believe, on the JavaScript spec, which is whether generators should preserve the context across `next` calls. Yeah. But for the rest, it would be the same as the current spec text.

MM: Great. Thank you.

CDA: Next?

CWU: Yeah. I think it is worth mentioning that the HTML integration is not going to end up in the proposal specification text. But I think it is worth having it reviewed together with the proposal spec text when advancing to 2.7, because that would be helpful for assessing the use cases of the proposal and of the web integration.

ABO: I guess I should point out that it is possible the spec has to be updated to add things that the web specs can use from the proposal. It would not be changing any of the behavior on the Ecma-262 side of things; it is just adding algorithms for the web specs.

CWU: Yeah. Sure.

CDA: All right. Nothing else on the queue.

### Speaker's Summary of Key Points

* Since the last plenary, the Mozilla DOM team asked us to investigate the behavior of certain APIs that send messages to both same-agent and cross-agent destinations.
* We are currently implementing it in Gecko, and documenting it to show that it can be implemented safely and without leaks.
* The result of this investigation will not affect the JS behavior of AsyncContext, only the behavior of web APIs when used with it, but it is a prerequisite for Mozilla to not block the stage 2.7 advancement.
* We hope this investigation will resolve Mozilla’s objections, leading them to stop blocking the proposal.

### Conclusion

This was a short update on what changed on the web integration side since the May plenary. No decisions were made.

[image2]:

diff --git a/meetings/2025-07/july-29.md b/meetings/2025-07/july-29.md
new file mode 100644
index 0000000..0a5c201
--- /dev/null
+++ b/meetings/2025-07/july-29.md
@@ -0,0 +1,1214 @@

# 109th TC39 Meeting

Day Two—29 July 2025

**Attendees:**

| Name | Abbreviation | Organization |
|------------------------|--------------|--------------------|
| Waldemar Horwat | WH | Invited Expert |
| Chris de Almeida | CDA | IBM |
| Jesse Alama | JMN | Igalia |
| Dmitry Makhnev | DJM | JetBrains |
| Michael Saboff | MLS | Observer |
| J. S. Choi | JSC | Invited Expert |
| Daniel Minor | DLM | Mozilla |
| Samina Husain | SHN | Ecma International |
| Aki Rose Braun | AKI | Ecma International |
| Shane F Carr | SFC | Google |
| Olivier Flückiger | OFR | Google |
| Jordan Harband | JHD | HeroDevs |
| Zbyszek Tenerowicz | ZTZ | Consensys |
| Eemeli Aro | EAO | Mozilla |
| Tab Atkins-Bittner | TAB | Google |
| Istvan Sebestyen | IS | Ecma International |
| Sergey Rubanov | SRV | Invited Expert |
| Daniel Rosenwasser | DRR | Microsoft |
| Rezvan Mahdavi Hezaveh | RMH | Google |

## How to make thenables safer?
Presenter: Matthew Gaudet (MAG)

* [proposal](https://github.com/tc39/proposal-thenable-curtailment)
* [slides](https://docs.google.com/presentation/d/1_RCnI7dzyA1COgi_Ib7GkmHioV1nVSEMmZvy0bvJ8Kk/edit?slide=id.p#slide=id.p)

MAG: So I’m here to talk about thenables again. Basically, this is not a particularly complicated conversation, because I come to the committee for your wisdom: I wish for you to bestow upon me which option you think we should pursue, because I’m not going to be able to decide which is the better option here myself, and I know people have strong opinions. So let’s try to see what is the direction to pursue.

MAG: So, a reminder of what we’re talking about. An object with a "then" property is considered to be a thenable, and treated specially in promise resolution code. The problem is that it’s too easy for security issues to happen. We walk the whole prototype chain to look for "then", and there is code (particularly in browsers) that resolves a promise without realizing it could actually be executing synchronous user code at that time. I had in my previous presentation a list of security bugs, and in the time since I last presented, at least in Firefox, we have had 1½ more. So this is a recurring problem, and I would like to make some forward progress in trying to see if there is anything we can do.

MAG: All right. So, originally, you know, I didn’t want to come with any particular concrete proposal, but I also didn’t want to come with nothing. So what I brought was a set of possible answers. One: `Object.prototype` becomes exotic and excludes "then" properties. This helps for sure with the problems we have; it would be nicer if it addressed more prototypes than just `Object.prototype`. Two: we add some option for promises to not respect thenables—so if you pass a thenable, it will not act like a Promise—and we can potentially expose that to HTML and put it into the web platform code.

MAG: The third option I brought was the idea of having a new internal slot, which we call `[[InternalProto]]`. We then change the lookup for the "then" property inside of resolution code to use some sort of new specification abstract operation—call it get-non-internal—which walks the prototype chain, but stops when it hits a prototype with that slot. The idea here is that anything that’s basically defined by the system would have the `[[InternalProto]]` slot, so if you put a "then" on such a prototype, it just doesn’t work as a thenable for objects that have it as their prototype. In my mind, this is one of the less invasive choices and would probably match most user expectations, mostly because we don’t have a lot of evidence of people putting "then"s on standard prototypes. I presented a little bit of telemetry last time, and the numbers are relatively low, though not zero; I have since gotten telemetry for `Object.prototype`, and to a first approximation there are zero uses of "then" on `Object.prototype`. So when we do see it, probably someone is playing around with trying to write an exploit.

MAG: So after I presented, MAH opened issue three on the repo and raised a couple of things. His analysis is good; I would encourage you to take the time to read it. I am going to summarize very briefly and badly. Number one, he points out I missed a lookup: we also look up the "constructor" property. So basically, while I kept blathering on about "then" as the property of concern:
"constructor" is actually in a similar boat. We do look up the constructor property, and if you were to change constructor to a getter, you would be in the same situation. And then he suggests: what if we could solve a larger scope of problem, which is that there are other consumers of JavaScript who want to be able to resolve promises safely.

MAG: The end idea here is something along the lines of: when you resolve a promise, it happens synchronously if there are no dangerous thenable lookups; but if it does have to look something up, then we delay it by one microtick / microtask turn. And that kind of solves a lot of our problems from the browser engineering perspective.

MAG: Because it means that you resolve a promise and, ah-hah, if it could do something dangerous, then the promise is just not resolved yet: a new job is enqueued and it will resolve later. And for the most part this is going to be acceptable. We think. That was the basic thrust: you start the resolve process, it stops the current process and instead enqueues a new job. This is kind of a familiar thing. Back when they were doing early work on promises, we actually—or I shouldn’t say we, DD actually—added a microtask turn to solve a very similar kind of problem, like 11 years ago. However, the faster microtasks proposal from Justin is trying to get rid of this tick in the general case. So we’re kind of working at cross purposes here, adding and subtracting ticks.

MAG: After I wrote these slides, apparently they made the rounds on Bluesky, the social network I have not signed up for. And we actually got early feedback from Rich Harris, creator of Svelte. This could break some cleverness they have done: they want their reactive framework to handle `await`, so they said, hey, what if "then" is a getter? If `then` is a getter, they can hook it up to their reactivity system properly. In that case, if we were to add the safe promise behavior and use it everywhere, it would break their code. Now, we can still pursue this—and Justin pointed this out in the issue—if we only deploy it for internal algorithms in HTML and other web specifications and things like that. It is less general, but given we have a clear existing consumer of the same lookup, we probably can’t do this willy-nilly. It’s the best we can do without going back to just saying, ah, this only applies to internal prototypes or something like that.

MAG: Now, the actual ask here is basically: what does the committee think about the extra ticks approach? And do you prefer it over the `[[InternalProto]]` story? Myself, I kind of prefer the `[[InternalProto]]` story, but there are polyfill issues there. And there is the awkward thing that, in order to make it polyfillable, you probably want the ability to set the internal slot, and that’s very awkward to me. I don’t know what that really looks like in a way that makes any sense to any JavaScript programmer other than the three people who will polyfill this. So that’s the end of my presentation, and now we can go to the queue and have a discussion.

USA: All right. Thank you. First on the queue we have Kevin.

KG: Yeah, so you mentioned those two observable lookups: there is `.constructor` and then `.then`. I want to mention that I think the `.constructor` one is probably fixable separately, since that lookup is only performed for things which pass the `IsPromise` check.
The point of that part of the spec is to have a fast path when something passes `IsPromise` and its `.constructor` is the intrinsic Promise. I think we can probably change that to: something passes `IsPromise` and its `__proto__` is the intrinsic `Promise.prototype`. Because it can’t be a proxy if it passes the `IsPromise` check, this would mean that there are no observable steps there. I think that is probably web compatible; I can’t guarantee it, but it is probably web compatible, and it gets rid of that particular observable lookup. It doesn’t do anything about `.then`; that we still need to address. But yeah.

MAG: Okay. That’s good to hear. That’s a nice simplification of the problem space.

USA: Right. Next is MM, who says: prefer extra tick if possible, end of message.

MAH: Sorry, I want to reply to Kevin’s analysis. So looking at this, I agree constructor is easier to deal with than "then".

USA: All right, going on through the queue then.

MAG: Can you repeat what MM said? I didn’t hear what MM’s message was, and I don’t have the queue open right now.

MM: I can say it. You asked "do you prefer the extra tick approach?", and since you were asking for reactions, I just stated that I prefer the extra tick approach if it is possible. It sounds like there is a lot of complexity around whether it is possible, but you sounded very hopeful that it is.

MAG: Okay. Thank you. I just hadn’t quite heard the repetition of it from the queue.

USA: Next up, we have a reply by WH.

WH: I thought I just heard during the presentation that people are deliberately using getters on `then` in existing frameworks. So how is the extra tick approach possible?

MAG: So this is new information, that they are adding thenables; I get the impression from Svelte this is something they added very, very recently. But we could conceivably do the delay in other cases. For example, the specific case that Svelte cares about is when you await a promise, so we could divide the behavior between promise resolution and await. Similarly, we could say that we add extra ticks when a lookup finds a user-defined "then" up the prototype chain—or, sorry, we could do a hybrid approach, where essentially we add extra ticks when you hit internal prototypes, and define that in some way. So you could conceivably keep chasing the idea of extra ticks, but we would want to try not to break what Svelte has built: they have essentially just built a dependency on doing a synchronous lookup when they await something. I think that is possible, but it does introduce complexity and challenge. You know, MM’s preference is the extra ticks, but I think it becomes a little bit more of a delicate design thing, and the end result might still be kind of kludgy and ugly. But my general thing is: I only really care about standard prototypes, because those are the ones that end up causing problems for us. Any random class having a `then` doesn’t really cause any problems on the web platform. What causes problems is when some specification-defined type becomes a thenable and you didn’t expect that. And it becomes a thenable almost always because someone put "then" on `Object.prototype`. And that’s where it gets very awkward.
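(An illustrative sketch of the class of problem just described; the getter and logging are invented for illustration:)

```js
// A getter for "then" on Object.prototype runs synchronously during the
// engine's lookup of "then", in the middle of promise resolution: the
// point where internal code does not expect user code to execute.
Object.defineProperty(Object.prototype, "then", {
  configurable: true,
  get() {
    console.log("user code runs during resolution");
    return undefined; // returning a function would hijack the resolution
  },
});

Promise.resolve({}).then(v => console.log("resolved with", v));
// Logs "user code runs during resolution" during the resolve step itself.
```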
MAG: And, you know, right now we have to really catch this in review, unfortunately, and the web platform has extra tests in order to catch this case: there are special ReadableStream tests along the lines of, hey, what if someone does this and you have a ReadableStream. It would be kind of nice if we could get rid of this whole class of problem and be able, as a web platform engineer, to say: if you resolve a Promise with a specification-provided type, you know it is not going to execute synchronous code, because nobody can put a "then" on it, or doing so has no practical effect.

WH: Okay. So what I’m hearing is that the pure extra ticks approach does not work, whereas the pure “not recognize thenables on built-ins” approach might work. Given that, I would rather work on not looking up thenables on built-in prototypes.

MAG: Me too.

USA: There are a number of replies that have stacked up on the queue, so I do recommend moving through it quicker if possible. Next we have Matthew.

MAH: Yeah. Really quick. So the feedback from Svelte makes it seem that an implicit extra tick on the resolve operation is not compatible. However, we can still imagine an extra tick for explicit `Promise.resolve` being called. But I don’t consider that a sufficient defensive mechanism, because it would only apply to the explicit places where people call `Promise.resolve`, which is almost never, unless people start being defensive. That gets to another point I have later on the queue. So—

USA: All right. Next on the queue we have Kevin.

KG: Yeah. We talked about how Svelte only just started doing this, but I bet other people are doing this too. I’m confident we can’t do the extra ticks in generality.

USA: All right. Moving on. Next we have JRL.

JRL: Okay. So to explain the extra ticks solution here fully: the implication is that every time you `Promise.resolve` with an object, or `new Promise` and use that resolve function with an object, or call `.then` and then return an object in the callback, you’re going to incur an extra tick. I do not consider this acceptable, because it essentially undoes all of the work that we did for faster native await. The fact is that promises are mostly single reads: you create one promise and await it a single time in one location. We would be making it so that creating that promise with any object is now one tick slower. Which means creating the promise requires one tick and reading the promise requires another tick, so we have two ticks to natively await the value, undoing the work that we did. There is a potential solution we could do here: we actually look at whether there is a "then" property in the prototype chain, or a proxy in the prototype chain, and only wait in those cases. But in discussion with MAH and MM, that wasn’t acceptable, because it gives you a way to detect whether something is actually wrapped in a proxy. So there is no way to safely tell whether there is a `.then` property anywhere in your prototype chain, which means we would need to defer execution before the access. So the implication here is: every time you create a promise and pass in an object, it all gets slower. That’s an extremely common use case; not every promise is going to be holding a primitive. I don’t think making all of this code slower is a good solution to any of this. I would much rather we pursue hardening the very few spots in our spec that require actual protection, and only make those slower, because there are security vulnerabilities there.

USA: All right.
If you want to respond to that.

RGN: I’d like to object to the characterization of extra ticks as slower. That’s not mandated by the specification. The concept of a tick comes into play immediately after the stack gets drained. And what happens now versus immediately before any asynchronous IO can happen is not fundamentally slower, although implementation choices in some cases can make it so. And when counting ticks, it’s not fundamentally a performance consideration; it is about observable sequencing. And with promises we tend to discourage anything that relies on observable sequencing. So I don’t think this ought to rule anything out. And in particular, I would like to object to the motivation of keeping tick counts as low as possible. I think we should instead value more highly that code is as predictable as possible.

JRL: Why did we do all of the work for native promise await, or try to compete with the Bluebird library, which was considerably faster for IO ops in Node? It's because Bluebird doesn’t have all of these extra ticks. There are so many promises—millions of promises—created in an application. If we make them all slower, the application will be slower. I absolutely disagree. I think promise performance is as important as the ordering. And the ordering of promise resolution is still guaranteed.

USA: Next on the queue, a reply by KG.

KG: This is a response to RGN. It is true that a smart engine could make these things just as fast. I think it is easy to underestimate just how much work a sufficiently smart engine would have to undergo to do that. And they’re just not that smart, and I don’t think they are going to be. I don’t think we should reason that, just because it is possible in principle that a sufficiently heroic amount of work could make this just as fast, we don’t have to care about performance. The heroic work has not been done and is not going to be done. Stack switching is always expensive without work that no one shows any signs of undertaking. I don’t think we should punt to sufficiently smart engines here. They're just not.

RGN: To be clear, that’s not what I’m doing. And a lot of the performance implications, the slowdowns that we see now, aren’t due to stack switching. They are due to the creation of new promises and new promise resolvers. And that could totally be eliminated for the scenarios where nothing is actually listening.

KG: I agree that a sufficiently smart engine could make these things equally performant. My thesis is that they haven’t and aren’t going to, and we shouldn’t reason like we’re in the world where they have.

RGN: Sure.

USA: Let’s move on—oh, no, we have another reply to this topic. JSL, you are next.

JSL: Yeah, just want to agree with the comments on adding these additional ticks. Yes, the predictable behavior is good, but adding ticks does have a very real performance cost in aggregate, in Node and other runtimes. We have to be really careful here. The prior work that was done to optimize the ticks out of await yielded major, major performance improvements in the ecosystem, and regressing on this a little is going to cause a lot of headache.

USA: Thanks for that. Sorry, I didn’t see that in the message. We have three more topics left and under 10 minutes, and this is mostly just discussion, but I’d like to urge everyone to go faster. Next we have Matthew.

MAH: Right.
So if you could go back to your proposed solutions. I believe that number two, if I remember, can be somewhat generalized and merged with my suggestion, which is to basically have an internal operation, say a safe-promise-resolve or get-safe-promise-capability, that hosts or spec places could use where, basically, if they encounter an object in a place that is potentially unsafe and that would trigger user code, it would delay by an extra tick. That would require all of the places that handle user objects in the spec and in the host to start using these instead of the regular promise operations. But that would be one possibility. I would be sad about that, because that basically requires defensive programming on the part of the host and the spec. And that would also not give us something that users would be able to do easily. In which case, I would also like us to consider something like `Promise.isPromise`, so that similar defensive programming can be done by user libraries. And `Promise.isPromise` would simply check the branding of the object to see if it is a native promise.

USA: Okay.

KG: Okay. I think that that probably does solve the problem in one sense. But we mostly care about built-in objects which are not expected to have `then` properties. These are spec algorithms that create an object themselves and then resolve a Promise with that object. It seems like it would be a shame to make that operation slower for all of the spec stuff just because we have to be defensive against the possibility that someone has put `.then` on `Object.prototype`. We are not handling a user object. We are handling a spec object, but the spec object inherits from something the user could have touched. I don’t love that solution.

MAH: Yeah. So my understanding is that there are at least two different cases. One is that the spec or the host creates an object and uses it internally through the promise machinery and is not expecting to have any interference through the built-in prototypes, and the other one is that the spec is handling some user provided object; but you’re saying in those cases it would be rare for the spec to expect no reentrancy in the first place.

KG: My understanding is that the main concern of this proposal is the first case, not the second case.

MAH: Yeah. I have encountered a lot of cases in the second case. So that’s, yeah. In userland.

MAG: Yeah. That’s a little bit of a divergence between you and me; we’re not solving 100% the same problem. I’m more focused on the first case, which is newborn objects that happen to inherit from `Object.prototype`.

USA: Next we have a new topic by KG.

KG: Yeah. So I’m coming around to your `[[InternalProto]]` suggestion. There are definitely details that need to be worked out. But before we go down this route, this would be adding an extra bit to every object. How do we feel about that, or how do engines feel about that, I should say? That seems naively like it would be kind of expensive.

MAG: I think so. I can’t be sure right now, but it is not that costly to store. And even if we wanted it to be an extremely low cost lookup, I think we could accomplish it with relative ease. And I, of course, say this knowing full well that I could be bitten when I come to implement this. But I do think it is implementable at a sufficiently low cost that I would be willing to go ahead and try to build it.
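(As an aside, a rough sketch of the userland defense MAH floated above. `Promise.isPromise` is hypothetical; it does not exist today.)

```js
// Hypothetical: with a Promise.isPromise brand check, defensive code could
// pass native promises through untouched and delay the thenable lookup on
// everything else by one explicit tick.
function defensiveResolve(value) {
  if (Promise.isPromise(value)) return value; // hypothetical brand check
  return new Promise((resolve) => {
    // The `then` lookup on `value` now happens one tick later, after the
    // caller's stack has unwound.
    queueMicrotask(() => resolve(value));
  });
}
```

KG: Okay.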
I guess, given all of the foregoing discussion, that would be my preferred resolution. I think we talked about how to go about making this opt-in for objects; my preference would be to have a way of creating an object that is of this kind. Not a way of changing an arbitrary object to have this slot. Like just an `Object.createNonThennable` or something. And then, not have to worry about someone setting this bit on an existing object that I handed them, that sort of thing. Also, there’s the question of does this pierce proxies; I think probably it would. Which means that, you know, you could proxy one of these things in the prototype chain, and because you would be checking this bit before you did any operation on the proxy, you would still avoid user code. But—yeah. Details would still need to be worked out.

USA: We’re almost on time. But let’s try to get through the rest of the—or, well, no, I don’t think we can get through the queue, unfortunately. Would you like a continuation? Or do you think this is sufficient?

MAG: I think if we could have a continuation, I would try to make it—yeah, we should try to have a continuation if possible.

USA: For like how long? Do you think?

CDA: I’ll work it out with them as well.

USA: Okay. Great. Let’s continue with the queue then. Next we have Keith.

KM: Sorry, are we continuing now or are we just—

USA: Good question. Yeah. Let’s schedule it for later then, because we have the fourth day. And today we have a full day planned. Great. I’ll capture the queue then unless somebody already has. All right.

MAG: Thanks, everyone.

### Speaker's Summary of Key Points

* While some delegates are in favour of extra ticks, some delegates object strenuously, particularly for performance reasons. We have multiple delegates indicating the performance of extra ticks may be unacceptable.
* There’s some discussion about the “constructor property”—it is claimed that a spec refactoring can likely remove the constructor check.
* In a sense, there are two different problems being put under one putative solution banner: the ability to safely resolve promises coming from arbitrary user code, and protecting code operating on newborn objects that are unexpectedly thenables due to prototypes.
  * MAG is more interested in the latter, MAH the former.
* Conversation didn’t conclude in this section, but we’ll continue in overflow.

### Conclusion

(conversation continues)

## Error.captureStackTrace

Presenter: Dan Minor (DLM)

* [proposal](https://github.com/tc39/proposal-error-capturestacktrace)
* [slides](https://docs.google.com/presentation/d/1RGNDcJee_6N2II0SeGyMVglvjhJbuqDX02LLzBqyCWI/)

DLM: I would like to talk about `Error.captureStackTrace`. This has existed in Chrome for a long time, and it has now been shipped by the other browsers. What it does is capture stack trace information on a provided object. Here is a grab from MDN: this is calling captureStackTrace and putting the stack on a custom error object. Brief bit of history. So, in the depths of time, Chrome shipped `Error.captureStackTrace`; there is evidence that goes back to 2015. More recently, Safari also shipped this method, and we did as well, because we started to encounter web compatibility problems. Matthew presented this for stage one in February. So, work in stage one: because everyone has shipped, the design space is very small.
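(For readers unfamiliar with the API, typical usage looks roughly like the MDN example mentioned above.)

```js
// Attach a `stack` to a custom error, trimming frames from MyError's
// constructor upward so they don't clutter the trace.
class MyError extends Error {
  constructor(message) {
    super(message);
    if (Error.captureStackTrace) {
      Error.captureStackTrace(this, MyError);
    }
  }
}
```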
DLM: Some of the feedback we got from the last time this was presented is that there is a preference to install a property rather than using an accessor, and that we should try to key any special behavior off objects that are errors, like those that have an error data slot. One little chunk of work I did was to investigate whether or not we could require the constructor argument to be callable. About 1% of calls would be broken; given the fact this has existed for such a long time, that doesn’t seem to be worth the risk.

DLM: So some proposed specification text here, which is more or less based upon what we have in SpiderMonkey, but I believe this is also what Safari has implemented. If the argument is not an object, we throw a TypeError. If the constructor is callable, we create an implementation-defined string with the stack trace and remove the top frames, up through the constructor, from the stack trace. This is to hide implementation detail that is not useful to the user. If the constructor is not there or not callable, we create a new string that represents the current stack trace and install it as a property.

DLM: Recently, the V8 team opened this issue. [Number eight (tc39/proposal-error-capturestacktrace#8)](https://github.com/tc39/proposal-error-capturestacktrace/issues/8). So there is interaction with another API, prepareStackTrace, that allows customization of the stack. Basically, right now they are using a getter and are able to compute the stack lazily; they don’t have to do the work involved in preparing the stack trace unless someone is actually going to look at it. As currently specified with the property, we start computing this stuff eagerly, which is potentially expensive and would change the ordering, depending, of course, on the implementation of prepareStackTrace. If you look at that issue, it is interesting because they kind of delved into the history a little bit, and at one point it actually was a property in V8 as well; they made the change to a getter first because of the cost of computing the stack trace, and second because they didn’t want the property read to have side effects. So, my hope for today is basically to discuss issue eight and get feedback from the committee. From our perspective, SpiderMonkey’s perspective, what we want to do is see this get standardized, since all three browsers are shipping, and I believe our implementation aligns with what Safari has. But we are still doing two different things. I believe both points in the design space, whether property or getter, are web compatible, because people are shipping different things. So basically, I would like to get advice from the committee as to what we think the best approach here is, and to see if all of the browsers can agree so that we can standardize this. From that point, yeah, I would like to see if there is any feedback on the queue.

MM: Yeah. This is more of a question than feedback. The `Error.stack` proposal would place a stack accessor property on `Error.prototype`. V8 currently has an own accessor property per error instance, but all of those have the same actual getter and setter, and therefore, clearly, they must be accessing an internal slot per instance. So Google seemed receptive to the idea of moving that accessor up to the prototype. Because you’re using a setter, this makes a difference: if it’s an own accessor, then a set that ignores prototype properties would invoke the setter; whereas if it is an inherited accessor, then a set that ignores prototype properties would ignore the accessor and create a new own data property.

DLM: Okay.
USA: All right. If you’d like to move on with the queue. Next we have OFR.

OFR: Yeah. So thanks for bringing this proposal forward and thanks for already including our comments. So that makes it easy to talk about them. Just wanted to give a bit of background. So indeed, it is a concern for us when these effects from prepareStackTrace are actually happening. Unfortunately, I don’t have data on how often this would happen. I have found interesting instances where it does happen. For example, there’s a Node package that downloads source maps to format the stack trace in the source language. That’s one thing that I found. So for us there is a concern that if we did this eagerly, it could potentially cause issues. The other thing is, I looked into the history of this accessor in V8, and it actually turned eager at some point, as you mentioned, and then we changed it back to lazy. And the reason was not performance issues with prepareStackTrace, but really performance issues with captureStackTrace itself. Even just computing the default string representation of the stack is something that is expensive. And so, if a library decides to just augment all of its custom error objects with this API, and some client program uses that library and never ever looks at the stack property, then you would still have to pay the cost for that. So, I believe this was sometime in 2017, when we actually changed this back to be lazy. And decided to go with this accessor solution. So, yeah. Overall, we would have a preference to be able somehow to keep this lazy. To avoid having to do this round again where we find out, actually, we will get complaints if we make it eager. I saw in the notes from last time there was opposition against the accessors. But I didn’t actually figure out what the opposition was. So I guess my goal here is to also try to figure out what the things are that people are concerned with.

USA: All right then. Moving on with the queue. We have a reply by MAH.

MAH: Yeah. So I’m not a fan of the complication of specifying accessors for this case. But I described in the GitHub repo how it could be done in a way we consider safe. We have problems with V8’s current implementation of accessors for error stack. But we have a description of how we can avoid that problem there. I’m actually going to step a little bit on KM’s point here, because I wanted to bring it up. Similarly, you can still have lazy behavior by specifying a data property and keeping the laziness as an implementation detail. The only reason this is observable right now is that V8 has a prepareStackTrace user hook, so the laziness becomes observable. However, that is not part of the spec. If we were to ever specify a mechanism similar in principle to prepareStackTrace, for userland to be able to customize or do some filtering on how the stack contents string is created, then this would become observable; and if we ever want to do that in the future, that means we cannot specify this as a data property today. We would have to specify it as an accessor, so that we don’t have to specify exotic behavior on data property access.

OFR: So maybe a direct reply to that. This is actually how the first implementation we ever had worked. This caused a security issue where we exactly didn’t handle the case where user behavior was involved when accessing this data property. So we would also like to avoid making this mistake again.
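(A rough sketch of the lazy-accessor shape OFR describes. This is illustrative, not spec text; `computeStackString` stands in for the engine’s internal work.)

```js
// Defer the expensive stack-string computation until the first read,
// then cache the result.
function captureLazily(obj) {
  let cached;
  Object.defineProperty(obj, "stack", {
    configurable: true,
    get() {
      if (cached === undefined) {
        cached = computeStackString(); // stand-in for engine internals
      }
      return cached;
    },
    set(value) {
      // Assignment replaces the accessor with a plain data property.
      Object.defineProperty(obj, "stack", {
        value,
        writable: true,
        configurable: true,
      });
    },
  });
}
```

USA: Sorry.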
Next on the queue, we have a reply.

KM: Yes, this was covered already. Basically the same point here: I do agree that if you were going to prepareStackTrace, having the value getter as an implementation detail is bizarre, because you’re having a callback in something that is supposed to be just a nonreentrant operation. So I mean, I’m open to either; I don’t have too much concern one way or the other. If we have to change our implementation to be a getter, it doesn’t really matter. I do wonder, if we were to add prepareStackTrace, I expect we would not use the same API, for reasons; like, this prepareStackTrace API would add something new, I guess. Because it is a global property that gets looked up, I think that is generally frowned upon for these types of things these days. So I don’t know if we should necessarily be planning around prepareStackTrace, but obviously we have to support V8’s existing implementation of it. Right? So, just sort of a trade-off there, I think.

USA: All right. OFR, I assume this was your topic? That was on the queue. So in that case, let’s move on to Jordan.

JHD: Yeah, I mean, so in a previous presentation for captureStackTrace, it was explained that this proposal is needed for web compatibility and interoperability, and to make sure no one makes it even worse. We talked about all of that. So fine, we need to specify captureStackTrace, but it shouldn’t exist. It exists because one browser implemented it and then a benchmark made another browser do it. So given that it shouldn’t exist, I think we should be really careful not to do bad or weird or inconsistent things to make it exist. Including prepareStackTrace related stuff. We should opt for whatever the most straightforward and consistent approach we can find is, assuming that it is web compatible to make that happen. I don’t have a concrete opinion on what that path should be. But let’s not bend over backwards so that something that shouldn’t have existed in the first place can land.

USA: And finally, we have MM on the queue.

MM: Yeah. Since this involves interaction with two proposals, let’s consider this a question. Both for DLM and for JHD. The error stack proposal, as I mentioned, just puts an accessor on `Error.prototype`. Which thereby gets inherited by anything inheriting from `Error.prototype`. What the accessor does, for error objects, is access the internal slot. If captureStackTrace was specified in terms of changing that internal slot, it would be not surprising for the consequence of that to be lazy, because nothing’s observable until the getter is invoked. The cost of that is that captureStackTrace would only be usable on error objects. Whereas the existing captureStackTrace is something you can do to other objects. I don’t know if that’s accidental or intentional, and if intentional, how important it is to be able to captureStackTrace on non-Error objects.

DLM: Yeah. I guess I will start; that is very valid. It is not obvious to me whether it would be web compatible or not to restrict this to things which are error objects. If we were to pursue that path, we would have to do research first.
And, just looking at it—when I looked at telemetry to see if we could require the constructor to be callable or not, there were enough people doing weird things with that, that it makes me think it would be difficult to restrict things to just being error instances. I’m not sure of the history, whether that was an intentional design decision or not. It is quite possible it was.

MAH: My understanding is that this is used extensively on non-Error objects. So captureStackTrace would have to install an internal slot in this case.

MM: I would not be in favor of having captureStackTrace install an internal slot on non-Error objects, so it would be adding an internal slot. On the other hand, I will point out that given non-extensible applies to private, the proposal I will be speaking to next, non-extensibility would prevent a property, a data property, any own property being added, whether it is a data property or, you know, or an internal—well, actually, that’s not correct. It would prevent the private field from being added; the proposal is silent on adding internal slots. But generally we don’t add internal slots to objects that are already created. I would like to continue to not do that.

MAH: Yeah, to be clear, that’s the only way an accessor-based approach would work if we want to support arbitrary objects: to add an internal slot, because there is effectively no other way to add internal private information to an object instance; that is the only information the spec has. It would be better if it behaved as a private field, so that it would be stamped and subject to the—(indiscernible) proposal. Maybe we can make the private slot stamping similar in this case. But an accessor would have to, by definition, access something internal to the object that gets added to it.

USA: And that was the queue.

DLM: So can I take it then, MM, you would be opposed to an accessor?

MM: I think it is complicated. There are certainly ways to do an accessor that I would be opposed to. But I’m not ruling out that there are ways to do an accessor that would fit all my constraints. I’ll make my standard suggestion for proposals touching on these issues: we can continue this discussion in TG3 between plenaries; the TG3 audience would be very interested in this.

DLM: Yes, I think that would probably be the end result of this. I get the feeling that the V8 team would be opposed to using a property, which means we would have to figure out the details of an accessor. I guess my other question, before we do the work: is anyone completely opposed to an accessor? It would be nice to hear that now, rather than come up with a design that meets most people’s concerns, but then come back to plenary and find out that other people are just not comfortable with that. There’s more queue. KM, would you like to speak?

KM: Yeah, I guess just as an implementation detail. Having it be a private field would be awkward, because our objects are sort of stamped, if they have private fields, when they are created. It is probably possible to change that, but it would be possibly nontrivial. So the—an internal slot in the spec is probably easier for us to do. Simply because that’s easy; we can just put a slot somewhere and just make it invisible.

ZTZ (on queue): Doing `let a = {}; Error.captureStackTrace(a); a.stack.split('\n').forEach( …)` in Node.js is not unheard of.

USA: Next on the queue—oh, well, that’s code. But it is not unheard of, this thing, we can put it in the—oh!
And Mathieu replied: but anyhow, it is not unheard of in Node.js packages. End of message. And: you need to support private fields due to write error. End of message. Next we have OFR.

OFR: Yeah. I guess this is more of a question. I’m actually not entirely sure how we would deal with this extra prepareStackTrace behavior that is out of scope of this proposal. It is kind of weird, because the captureStackTrace proposal is very literal about when the string property is being constructed. So given that wording, I kind of see no wiggle room for us to say, like, we’re just going to compute that string at some other point. But in the end prepareStackTrace is not spec’d at all. So it is kind of hard to even know what would be the most spec compliant way of implementing it.

USA: Right. And then, on the queue—ZTZ?

ZTZ: I wanted to double check whether it is worth considering, because I know it would work, although it is a bit weird, to have a getter exist on the error prototype both for captureStackTrace and for purposes of reading the stack. So if you read the stack it would reach to the prototype and invoke a getter, and then the getter, other than returning the stack, would put an own property on the error. So it is a one-time getter that overshadows itself. And that would eliminate at least the trouble of having an own property getter, which I think is the biggest issue. Maybe not the only one. But is this an improvement? Is this acceptable?

MAH: ZTZ, I think that is somewhat the behavior I’m proposing. This is adding an accessor to the prototype; captureStackTrace is not adding to the prototype. The behavior you’re describing is the `Error.prototype.stack` accessor. Which is a different proposal.

ZTZ: I mean, it would work transparently for the case when you’re triggering the accessor. You could run the same exact implementation in the background when you call captureStackTrace. And it would be the one implementation that exists on the `Error` prototype. And then you bind it to whatever object captureStackTrace is being invoked on.

MAH: My suggestion earlier was actually that if we go the route of an accessor and an internal slot, we could actually have the exact same accessor functions being shared with `Error.prototype.stack`, and those would be added to the objects as an own property. They would not just share the same information; they can share the same accessor functionality.

ZTZ: Yeah, but then the conversation went into discussing the necessity of having the internal slot on regular objects which are also being passed to captureStackTrace. And if we implemented captureStackTrace by invoking the `Error.prototype.stack` getter, but bound to whatever object is being passed as an argument of captureStackTrace, it would behave the same way and seamlessly work for regular objects, too.

MAH: As long as you add the private field. Yes. There’s no way around adding a private field, or a private slot, or a slot, to regular objects if they’re used as a target of `captureStackTrace`.

KG: Yes there is.

MAH: Okay. I’m interested.

KG: You don’t use the same function value for the objects. You just install a fresh closure on the object.

MAH: Okay. I see what you mean. Okay. You have to create a closure for every getter then. The setter could be shared by just, yeah, anyway. Okay. Got it.
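(A sketch of the self-overshadowing getter idea ZTZ describes above; illustrative only, with `computeStackString` again standing in for engine internals.)

```js
// A one-time getter on the prototype: the first read computes the stack,
// installs it as an own data property, and thereby replaces itself.
Object.defineProperty(Error.prototype, "stack", {
  configurable: true,
  get() {
    const value = computeStackString(this); // stand-in for engine internals
    Object.defineProperty(this, "stack", {
      value,
      writable: true,
      configurable: true,
    });
    return value;
  },
});
```

USA: Then there’s around 5 minutes remaining.

DLM: I’m happy to follow MM’s suggestion and take this to TG3.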
Assuming we can get the right people involved in this discussion there, and have more time. I do have one broader question, if I could ask that before I end: does the committee think this is actually worth pursuing, or should we leave this as a regrettable thing that everyone shipped but never agreed upon? Like, I don’t know, I would just like to hear: do people see value in actually standardizing this? Or is this taking time that could be used for other proposals that have a better impact? And I—silence—

MM: I think I will just go on the audio track rather than trying to write something in TCQ. I would prefer that captureStackTrace also not exist, but I don’t have a hard objection to it, if we can solve all of these messy problems. If it’s going to be part of the de facto standard of JavaScript, in the sense that over time all implementations feel obligated to implement it, then I would feel strongly that we should standardize it. The same reason why once dunder proto started becoming universal, even though I hate it tremendously, I advocated that we standardize it. Because everything that is universal is better off having a well-thought-out and agreed semantics, which is the job of the standards committee.

DLM: Okay. Yeah, there’s a few agreements on the queue that I can see from ZTZ and MS(?). I will take that as pursuing this. And yeah. I think I’m happy to bring this to TG3 and hammer out details.

USA: And also support by KKL. All right. WH, would you like to speak to that. There is no more than two minutes left.

WH: I have a question for MM: What should we do if we have existing practice in which some implementations choose to do it as an accessor and some as a property?

MM: So the mandate of don’t break the web is only the multi-browser web. So if existing browser implementations disagree, and we’ve done this in the past, then we can—you know, that gives us some freedom. We can’t break functionality that works across browsers. But in order to standardize, and with the veto of the browser members as a possibility at TC39, anything that we can get agreement on that doesn’t break the cross-browser web is something that we have gone forward with. And I think that’s the right stance.

WH: Okay. If that is the case, is it also okay to let an implementation choose to do it one way or the other?

MM: So, we occasionally leave things to the implementation, but that is always, I think, a terrible last resort. The purpose of meeting as a standards committee is to reduce gratuitous differences between browsers. So if the browsers continuing to diverge is based on anything other than inertia, then I would hope we could get to agreement. But inertia’s powerful. And the browsers are represented on the committee. So they can say that they object to coming to an agreement.

WH: Okay. I also had another item on the queue for quite a while which got deleted: If it is an accessor, if you call it multiple times is it guaranteed to produce the same result?

MM: So for the spec-mandated accessor, we’re free to specify that. And yes, I would certainly hope that we would specify that the answer is yes.

WH: Okay.

### Speaker's Summary of Key Points

* Discussed various options for using an accessor (shipped by V8) or a property (shipped by JSC and SpiderMonkey), and the constraints on the behaviour of an accessor if that was the option chosen.
* Did not come to a decision about accessor vs. property.
### Conclusion

* Will bring this discussion to TG3.

USA: All right. That is all we have for this time. Thank you, Dan.

DLM: Thanks.

## Non-extensible Applies to Private

Presenter: Mark Miller (MM)

* [proposal](https://github.com/tc39/proposal-nonextensible-applies-to-private)
* [slides](https://github.com/tc39/proposal-nonextensible-applies-to-private/blob/main/no-stamping-talks/non-extensible-applies-to-private-update.pdf)

MM: So previously, we had brought non-extensible applies to private to the committee and we rapidly got from no stage at all to stage 2.7, and all of us really appreciate having been able to do that. Today I was going to ask for stage three, but reality interfered. So this is just a 2.7 status update to bring up some issues that I hope we can discuss, so that in a future meeting I can bring this back and ask for stage three. So to recap, the two green highlighted lines are the entirety of the proposal itself. Which is to ensure that if an object is non-extensible, then in the same way that you cannot add public properties to a non-extensible object, we are here proposing that you cannot add private fields to a non-extensible object.

MM: And to recap motivation, here is a little abstraction called tagger that uses return override in order to add a little private tag to any arbitrary object. SYG and I are cosponsors of this; SYG’s motivation primarily has to do with wanting high-speed implementations of structs, with optimization depending on structs having a fixed shape. And within the current spec, it’s not possible to implement structs with fixed shape, because there’s nothing to prevent one from using the tagger abstraction to add a tag to a struct instance. That would, in realistic high-speed implementations, cause its internal shape to change. So with this proposal, because struct instances are born sealed, the attempt to add a private field to one, which happens right over here, would throw a TypeError. And you get the same safety that you would have gotten if you were adding a public property.
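(A reconstruction of the tagger pattern for the notes; the slides’ exact code was not transcribed, but the return-override technique it relies on looks like this.)

```js
// Return override: the base class returns an arbitrary object, so the
// subclass's private field gets stamped onto that object.
class Tagger extends (class { constructor(obj) { return obj; } }) {
  #tag;
  constructor(obj, tag) {
    super(obj);      // `this` is now `obj`; #tag is stamped onto it
    this.#tag = tag;
  }
  static getTag(obj) {
    return #tag in obj ? obj.#tag : undefined;
  }
}

const target = Object.freeze({});
// Today this succeeds despite the freeze; under the proposal it throws a
// TypeError, matching the behavior of adding a public property.
new Tagger(target, "secret");
```

MM: By the way, I should say, I’m still hopeful that structs, not shared structs, but just normal structs, normal unshared structs—I’m very hopeful that those do go forward, so I do share this motivation. But for me, this is the primary motivation. It’s the exact same tagger abstraction. We, Agoric, implement a virtual object memory system in JavaScript where there is a particular class of objects, we’ll call them virtual objects, where you can, for example, have a virtual collection that is larger than your JavaScript heap, where the virtual collection is keyed by virtual objects and it spills out to disk. When the virtual objects are referenced, they get, so to speak, paged back in from disk, creating a representative which is a JavaScript object. So, weakness aside, each time a virtual object gets faulted back in, it is represented by a different JavaScript representative; and because no two representatives of the same virtual object are in the same heap at the same time, aside from weakness it is undetectable that it has a different JavaScript object identity. To deal with weakness, what we do is we replace the global WeakMap and WeakSet constructors, and we could also replace the global WeakRef and FinalizationRegistry.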
And by replacing them with things that are aware of our virtual object system, we’re able to ensure that the change of identity of the representative at the JavaScript level is not observable. The hole in that story is demonstrated using the exact same tagging thing. That is because return override, with the current behavior, means that there’s effectively a primitive weak map, reachable by syntax in the language, which is the only way to account for private fields with return override. So over here, if you tag representative one with A, and you get it while it is still in memory, while you are still holding onto representative one, you get back A. That is fine so far. But then if representative one is garbage collected and the virtual object system is trying to transparently revive it, creating a new object, representative two, then because it has a different JavaScript object identity, this is the one place where that difference of identity remains observable, which breaks the virtual object memory illusion.

MM: So with this proposal, and given that one of the constraints we already have in our virtual object system for other reasons is that the representatives are always frozen, the tagger immediately fails to tag a representative, closing the one hole that we could not fix in the virtual object memory illusion.

MM: Okay. Recap plus-plus: last time we brought this to the committee, we were showing this graph as of April 1st. So I redid the slide going to the current stats on the Google web page where Google is accumulating these stats. And as we see, this graphic seems to go down, which isn’t necessarily good news. But at least it is not bad news. I say it is not necessarily good news because it’s probably within the error tolerance, but obviously the linear best fit continuing to go up is noise we can ignore with regard to this piece of statistics. The other piece of statistics on the Google website is more alarming, and I would like to work with somebody at Google to understand what it means. But it’s showing an increase in June over where we were in April and May. And then a decrease from June to July. But still a notable increase in July over May or April. But all of these numbers are still very tiny. And I don’t know of anybody who has reported any source of these anomalies aside from the ones that SYG reported as of our last meeting. So SYG reported two anomalies. One of which was that there was exactly—whoops, sorry, I clicked—exactly this pattern being seen on one website, and it was mysterious to us at the time why anybody would write code like that.

MM: And then on the issue number one thread on this proposal, James pointed out that the anomaly we were seeing was exactly what would be generated by Babel. So now, taking that same observation, but throwing at it a slightly more challenging test case, which is the code on the left: from this, Babel today generates the code in the middle, and the code in the middle is code that does break—you know, once our proposal becomes standard, our proposal would break the code in the middle. After a lot of back and forth, especially with thanks to NRO and RGN, as of this morning, just shortly before my talk, NRO proposed the code on the right as an alternate translation that seems to be fully spec compliant with the current spec and would remain spec compliant following this proposal becoming part of the language.
So thanks again to NRO and RGN for that.

MM: And that is it! So I will now stop recording. And take questions. I have now stopped recording.

USA: Great, first on the queue we have JHD.

JHD: Yeah, I just wanted to confirm—it just occurred to me while looking at that slide—that this won’t actually explain why you can’t add private fields to the window. Right? It’s just a whole new class of objects that you can’t add private fields to.

MM: That’s right. That’s right.

JHD: Thank you.

MM: Window is a called-out exception, or specifically the browser global object is a called-out exception, only for browsers. And where this proposal started was new integrity levels, where a new integrity level that was not non-extensible, but purely independent of extensibility, could have retroactively rationalized the browsers as having a new integrity level. But SYG and I extracted this proposal so it depends on the existing integrity level, which means intentionally giving up on rationalizing the browser behavior.

NRO: Yeah, I checked other tools. It looks like SWC has the same output as Babel. So I submitted an issue to them. I don’t think any other tool has the same problem, because the other tools don’t support class static blocks without also transpiling static fields. So these are the tools, and the tools are aware. And fixing will be done hopefully soon. Finished.

MM: Okay.

USA: Right next. WH.

WH: The Babel translation seems really nasty.

MM: You wouldn’t believe the Babel translations before that one. There were compatible, but much nastier, translations.

WH: It creates extra private fields which you didn’t have before. And—

MM: No it does not, let me show it again. Hold on. Sorry, I—I—

NRO: The old one does create extra fields, which was observable; the new one does not.

MM: Yeah. The new one, the one on the right here—the current translation creates the private field; NRO’s proposed translation gets rid of the extra private fields. So we’re currently living with this. I don’t find this significantly nastier than this. It certainly is a little bit harder to read if you’re manually looking at the code. And it kind of reminds me of the Piaget conservation thing: this is the wide short water, and this is the tall thin water. But yeah. I think, in fact, you know, NRO’s the Babel guy, and he’s happy with the translation he’s proposing. Do you find the translation on the right significantly less acceptable than the translation in the middle?

WH: No, it’s the other way around. I don’t like the middle one.

MM: Oh, in that case, great! In that case, you would be happy for us to go forward with this proposal and, in coordination with that, or in anticipation of that, Babel switching from the middle translation to the right translation.

WH: Yes.

MM: Great! Okay.

USA: Awesome, next on the queue, oh—NRO?

NRO: Yeah, there is still one case where Babel injects a new private field, which is when there are no other static or private fields. So we inject one just to have correct code. We know we can do it because it transpiles the code in the class body. So, it might be ugly, but it is not observable. So it is fine.

USA: All right. Finally, we have OFR.

OFR: Yeah, just a quick question, if I understood this correctly. Basically your plan forward is looking into why the numbers increased, and also looking for help in figuring this out.

MM: Yeah, I’m looking in particular for help from Google.
Oh, I should have mentioned this note over here. This note mentions an increase in July and December 2018, which is before any of this was being sampled. So my interpretation is that this note is not relevant to this graph, but because it mentioned July, I just wanted to double check. This note has no bearing on interpreting this graph?

OFR: Yeah, I don’t know. That would be my interpretation of that.

MM: Okay. So in particular, because these stats are coming from Google, I would like any help I can get from Google understanding what the stats mean.

OFR: Okay. Sounds good.

MAH: The note at the bottom is static; it is the same for any stat. It is not relevant in this case.

MM: Okay. Great.

USA: All right. That was it for the queue.

MM: Okay. Okay! Great! So, it doesn’t sound like anybody’s objecting to this, you know, to Babel making the change in anticipation of this proposal. So as far as I know, the only thing that we need to do before asking for stage three next meeting is to have the test262 tests. Since the entirety of the proposal is this, that shouldn’t be too burdensome. We took a look at the checklist—does anybody have any other thing they would want us to examine, besides clarifying the stats with Google, before we ask for stage three?

USA: Nothing on the queue still.

MM: Okay! Oh, go ahead.

MAH (on queue): go forward, I want to see this happen. End of message.

MM: Okay. Great! I am done.

### Speaker's Summary of Key Points

* Recapped previous explanations (somewhat improved)
* New Google stats might indicate new concerns, but numbers still tiny
* Turns out, Google’s first (of two) reported breakages was due to Babel translation
  * Showed Nicolò’s new future-proof Babel translation.
  * Works with full fidelity both before and after this proposal is in the standard.
  * Waldemar is happier with it over the status quo, even apart from this proposal.
  * No one said they were less happy with it.

### Conclusion

To ask for Stage 3 next plenary, we need to

* work with Google to understand what their new stats mean, hopefully not seeing new alarms
* write and submit new test262 test plan and tests
* get those approved and merged

We asked, and no one raised any other things we need to do before asking for stage 3

## Immutable `ArrayBuffer` for stage 3

Presenter: Richard Gibson (RGN)

* [proposal](https://github.com/tc39/proposal-immutable-arraybuffer)
* [slides](https://docs.google.com/presentation/d/18JnyoJsovfw7Y_HGa0cZOOUCHY2MTO4_M72zKj7PUWo/edit?usp=sharing)

RGN: It looks like I will be able to give some time back in this presentation. The current status is that we’ve got test262 pretty much ready to go. The testing plan I went over in the previous meeting has been fleshed out rather well. I opened a number of pull requests over the past couple of weeks to implement it, although the test262 reviewers haven’t yet approved them. I’m still very satisfied with the thorough scope of coverage, and it has uncovered nonconformances in every implementation tested so far. Which is primarily just XS. Although, SpiderMonkey just recently shared that it is in their nightly builds as well.

RGN: So looking at our stage three progress, we’re down to just needing approvals on the tests. The testing plan itself, as I went over last time, is quite thorough.
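(For orientation, a minimal sketch of the core API under test, using names from the proposal.)

```js
const buf = new ArrayBuffer(8);
const frozen = buf.transferToImmutable(); // detaches `buf`

buf.detached;     // true: the source buffer is now detached
frozen.immutable; // true: the proposal's new getter

// TypedArray index properties on an immutable buffer are non-writable,
// so writes fail (throwing a TypeError in strict mode):
// new Uint8Array(frozen)[0] = 1;
```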
This slide is basically an overview of the full surface area represented by the proposal… existing `ArrayBuffer` prototype properties, and new ones; a lot of interaction with TypedArray, both static properties and prototype properties, including the new setFromBase64 and setFromHex methods that are specifically on Uint8Array. And also, interaction with the TypedArray internal methods, because a TypedArray backed by an immutable buffer has nonconfigurable and nonwritable index properties, so that needs to be represented with property descriptors: getting them and also setting them or defining them. DataView has a number of typed set methods, so those also throw a TypeError when backed by an immutable `ArrayBuffer`. And finally, Atomics has some interaction, due to its mutating operations. All of that is covered in our testing plan, and in the new test262 pull requests.

RGN: And basically, it looks to me like we’re ready for stage three! So, I am officially asking for consensus to advance.

USA: First now is DLM.

DLM: Yeah. Sure, the SpiderMonkey team supports this for stage three. And there are other people inside of Mozilla that are quite excited to see this advance.

USA: Great. Next we have NRO. Oh, sorry, NRO, I’m sorry. We had OFR first.

OFR: Yes. So this is a bit late. But just an implementer’s note on the proposal that we noticed. I was wondering how it is with other engines, but we noticed that the proposal would probably lead to more detached `ArrayBuffer`s overall. And actually, detaching an array buffer is something that basically slows down `ArrayBuffer` access in V8, because there is a global flag for whether an array buffer was ever detached anywhere; as soon as you do that the first time, access to `ArrayBuffer`s gets slower, because we have to check every time if the buffer is actually detached at this point. So this is something we noticed when reading the proposal this time.

RGN: That is interesting. Do you have plans to address it, or are you just going to deal with the slowdown?

OFR: So we actually don’t have a plan to address it. Because the problem is that the `ArrayBuffer` that you detach it from, like, it is not changing shape. Nothing in itself changes. So, we don’t have a good way of tracking which `ArrayBuffer` would not be affected. So currently it is one global property. We don’t have plans to change it.

RGN: Okay. Sure. Just in terms of speculation, we anticipate that the most common pattern is going to be detaching an `ArrayBuffer` that is then never accessed again. And in fact, it might not even have a binding associated with it, as with inline creation and transferToImmutable.

USA: A reply by MM.

MM: Yeah, I just want to make sure that I understand. Right now if an `ArrayBuffer` is sent over postMessage with the appropriate options, saying, please transfer this, it detaches the original. Does that set the same flag that causes all `ArrayBuffer` access to be slower from then on?

OFR: I would assume so in that case, yeah. It is fairly uncommon for `ArrayBuffer`s to be detached currently.

MM: Is it uncommon to send an `ArrayBuffer` through postMessage?

KG: Yes, it is just uncommon to have a worker at all.

KM: Yeah, workers are pretty rare.

MM: Yeah. That’s a good point. So I’ll just point out that a later talk during this plenary is about import bytes, and for those it magically creates an immutable `ArrayBuffer` without having to first create a mutable one and then detach it.
So I don’t know how much, I mean, you know, the performance issue you’re talking about all has to do with what the typical case is. And I’m wondering how well that would address the typical cases of concern, if mostly these things are coming from, you know, resource data so to speak.

USA: We have a reply by KG.

KG: Yeah, I think we’re just going to see more buffers getting detached in the future. It is my hope, and the hope of some other people, to make it easier to have workers in general. Right now, as I just observed, they are quite rare, but I think that has a lot to do with the ergonomics being awful. Like really truly awful. And there is a bunch of stuff in the works for making that better, including module sources and maybe shared structs. If we start making it easier for people to have workers, then inevitably they will transfer stuff to a worker more often and things are going to get detached more. So I think we should just prepare for the world where detaching buffers gets more common, and having accepted that, I don’t see any reason to worry about it for this proposal in particular.

USA: We have a reply by KM.

KM: Yeah. I don’t have any objection, but yeah, we have the same global watchpoint that OFR was mentioning. So I just figured I would mention it, but I don’t think it is necessarily blocking for us at this point.

USA: Okay. Next we have ABO.

ABO: I wanted to answer MM’s point. You can do two things when sending an `ArrayBuffer` over postMessage. You can send it—that copies the entirety of the buffer—or transfer it. It only gets detached when you transfer it.

MM: Yeah, I was trying to be clear on that, I didn’t have the right terminology. Yes, thank you.

USA: All right.

JSL (on queue): Readable stream or writable stream implementations will detach `ArrayBuffer`s

OFR (on queue): Google is nonblocking, that is just a comment for your consideration, end of message as well.

NRO: Yeah. Based on what we discussed, I’m happy to say all of the pull requests—I read them, they look good. I think this is conditional on the requests being merged. But I think that is going to happen soon, and we won’t need to come back to plenary for that.

RGN: Yeah, I would be happy with that outcome.

USA: Okay.

JHD (on queue): plus one for stage three conditional on test approval.

KM: Yeah, I don’t think we have a problem with stage three. The feedback I got from our DOM folks is that we probably won’t ship until any kinks and everything else is worked out on the DOM integration side of this. I don’t fully know what that means; in hindsight I should have asked for clarification before, in case there is something there blocking shipping, but we will probably implement the feature before then.

RGN: I’m glad you brought that up. There is an HTML pull request with positive reviews, basically waiting for stage advancement on our side. That dovetails nicely with the web integration that you just alluded to, KM.

KM: Great. Thanks.

KG: There's the WebIDL issue as well. Which maybe is a thing that we should bring to the committee’s attention more generally. The WebIDL for things which accept `ArrayBuffer`s, or things backed by `ArrayBuffer`s: by default, if you just say that you accept an `ArrayBuffer`, then it will reject any `ArrayBuffer` that has any unusual attribute.
So for example, it will reject resizable `ArrayBuffer`s, and you cannot use resizable `ArrayBuffer`s pretty much anywhere in the web platform, because the default is that you don’t accept resizable `ArrayBuffer`s. That default would happen here as well. I would guess 95% of ArrayBuffer-taking APIs on the web are only reading from the buffer and so could totally operate on an immutable `ArrayBuffer`, but the default in WebIDL is that you don’t accept immutable `ArrayBuffer`s, and no one will go through and update all of the web specs. That doesn’t happen by default, so the web default is that you cannot use immutable `ArrayBuffer`s with APIs. That will be the case for any new kind of `ArrayBuffer` that we introduce.

KM: In that case it sounds like the people on our DOM side would not want us to ship this until that work happens. But I’m not 100% sure on that. I can say that now, but obviously, that doesn’t affect going to stage three.

RGN: KG, is there an existing issue for that in any of the linked repositories? Because if not, I will just add it to the repository for this proposal.

KG: Yeah, there is one on WebIDL: [whatwg/WebIDL#1487](https://github.com/whatwg/WebIDL/issues/1487) From Anba.

RGN: Great, we will take it on for the stage three work of this proposal.

USA: MM, would you like to still speak to your point?

MM: Yeah, I don’t understand KG’s point that it would require a change to all of the specs that are written in WebIDL, rather than just a change to the WebIDL spec, so that if you accept an `ArrayBuffer`, then by default you accept an immutable `ArrayBuffer`. Why would that not be a blanket fix?

KG: That is possible in principle, although that would lead to the subset of APIs that do mutate the buffer being incoherent; they reach in and mutate it directly currently. The default from WebIDL has always been to be conservative about these things precisely for that reason. They prefer to reject things that the spec has not been updated to handle, rather than to accept those things and potentially be incoherent or insecure.

MM: So in the current spec language, if someone says they accept an array buffer and you either pass through a detached `ArrayBuffer`, or pass it and then somehow immediately detach it, they would have the same incoherence, yes?

KG: Detached buffers are a concept which WebIDL handles and has for a long time. If you say you take an `ArrayBuffer` and someone hands you a detached one, I’m pretty sure it just gets rejected at the WebIDL level. Or something. WebIDL specifically handles detached `ArrayBuffer`s somehow, that’s my main point.

MM: Yeah. Thanks for raising this. RGN, do you have more thoughts on this?

RGN: No, it looks like the existing discussions are capturing it pretty well. Actually, KG suggested a couple of weeks ago that a ForbidImmutable attribute would make the most sense, and that corresponds with the blanket fix you were describing.

MM: Okay.

KG: Well, it is not a blanket fix. If you want to introduce that attribute, you have to go through every single spec which uses an `ArrayBuffer`, find the ones that mutate, and update those, because they will be incoherent otherwise.

RGN: Uh-huh.

USA: All right. If you think that’s all—that’s all in the queue, at the very least—would you like to ask for stage three?

RGN: I would.

USA: All right. In that case, to be sure, we are formally requesting consensus.
So let’s give people a minute or two to add themselves to the queue for either support or—oh, okay. So, first of all, CDA is supporting stage three. And a comment, yeah, next up we have NRO.

NRO: As I said before, before approving stage three, we need implementations to actually run the tests. I’m happy to give consensus for this now, but conditional on the tests being landed.

RGN: Yes. To amend my request: I’m asking for consensus on stage three conditional upon the test262 PRs merging.

JHD: We already had myself, I think DLM, and NRO support that.

USA: To clarify, CDA is a plus one for stage three. All right. I hear consensus, including the conditional approvals; in the absence of any complaints or any blocks or concerns, you have stage three. Congratulations.

RGN: Thank you.

### Speaker's Summary of Key Points

* The very thorough Immutable `ArrayBuffer` testing plan has been translated into test262 PRs, which are pending approval and merge.
* There was implementer support, and also an observation that detaching any `ArrayBuffer` is currently rare and slows down all `ArrayBuffer` access in some implementations.
* There is also a need for WebIDL changes, without which web platform APIs will be unable to interact with immutable `ArrayBuffer`s (just like they are unable to interact with resizable `ArrayBuffer`s unless they specifically opt in).

### Conclusion

Immutable `ArrayBuffer` has Stage 3, conditional upon test262 PRs merging

## Iterator Chunking for Stage 2.7

Presenter: Michael Ficarra (MF)

* [proposal](https://github.com/tc39/proposal-iterator-chunking)
* [slides](https://docs.google.com/presentation/d/17qDtY-2Qawt7SeKoY7Rezea-A_hAuwhx2QHJ9MCZ7as)

MF: I’m back with another iterator proposal, again one presented in the last meeting. So a reminder for anyone that missed the last meeting: the problem that this proposal is trying to solve is consuming an iterator in two different ways, either as overlapping subsequences or as non-overlapping subsequences, where the size is passed into the APIs. The solution that we’re working with for consuming non-overlapping subsequences is called chunks. It is `Iterator.prototype.chunks`; it is passed a chunk size, and you can see an example. If we apply it to an iterator of these nine elements, with a size of three, it breaks it into three chunks of size three, and yields three different arrays.

MF: The solution we have for consuming overlapping subsequences is called windows. It is passed a window size, kind of like a chunk size, but instead of yielding non-overlapping subsequences, it yields subsequences that are each offset by one. So on this similar iterator of eight digits it will yield five arrays of size four.

MF: And as context for the rest of this discussion, consider a case where instead of nine digits yielded by the underlying iterator, we yield 10, and still pass `chunks` a parameter of three. We will yield four arrays: the first three of size three, like we requested, and then the one remaining element will be yielded in an array of length one. That’s how chunks works when the number of elements yielded by the underlying iterator is not evenly divisible by the chunk size.
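(The behaviour just described, written out; method names as presented in the proposal.)

```js
// chunks: non-overlapping subsequences of the given size.
Array.from(Iterator.from([1, 2, 3, 4, 5, 6, 7, 8, 9]).chunks(3));
// => [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

// windows: overlapping subsequences, each offset by one.
Array.from(Iterator.from([1, 2, 3, 4, 5, 6, 7, 8]).windows(4));
// => [[1, 2, 3, 4], [2, 3, 4, 5], [3, 4, 5, 6], [4, 5, 6, 7], [5, 6, 7, 8]]

// With ten elements, the final chunk is undersized.
Array.from(Iterator.from([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]).chunks(3));
// => [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10]]
```

MF: Okay.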
So, last time we talked about two possible behaviors for `windows` that we were considering. This is the behavior when the window size is larger than the total number of elements yielded by the underlying iterator, and the underlying iterator yields at least one element. And we concluded that there were significant use cases that we cared about for at least two of the, I think, four possible options provided. So we decided to go with those two. And as of last meeting, we decided to add a new method that has similar behavior to `windows`, but differs in how it handles an iterator not filling a single window. We did not decide on a name for it at that point, but I did request that if anyone had feedback, they put it on the issue tracker.

MF: So, what I went with is this method called `sliding`, which has that difference in behavior. The name comes from Scala, which has this same method with the same behavior. You can see they have a method called `grouped`, which is our `chunks`, and they have this `sliding` method with a parameterizable step size, which we decided not to do early on in this proposal. So that’s where I took the name from.

MF: And remember, this difference is only when the underlying iterator yields at least one thing. When the underlying iterator does not yield anything, we do not yield an undersized window of zero length.

MF: So I have written up the spec text. It is available in the repo. It has been available for over a month, a month and a half or so. I did it right after the last meeting. You can see that all three of these methods are very similar. They are very easy to review. They only differ here in the parts I’ve highlighted in yellow. So if you haven’t reviewed it yet, well, I guess, it is too late :)

MF: I’ve also gotten reviews from the assigned reviewers. So I have reviews from ACE, JHD, and JMN, thank you for that. But I did also receive some very recent feedback I want to go over, from NRO and KG. KG provided feedback a couple days ago, asking that instead of providing two methods for `windows` and `sliding`, we combine them as one windows method that takes a parameter, like a string parameter, that switches between the two behaviors we agreed we wanted to include in the proposal. And NRO opened up this other issue just to bikeshed names.

MF: So I am personally open to continuing to work on this proposal. I think that it’s fine to go forward with the two methods as I have presented today. And I think it is also fine to consider different names or a combined method. I just want committee agreement on any of those directions; I think they all solve the problem just as well.

MF: I have my summary here. I don’t know if this is just for the notes or if you also wanted me to go over it. But these are basically the things I said already.

MF: I would be fine with considering different names, I would be fine with combining the methods. I would just need a definite direction from the committee so that I can have confidence that, you know, when I come back with the solution we agreed upon, we can continue moving this proposal forward. That’s my presentation. I would love to hear any feedback on the queue.

JRL: First up we have KG.

KG: So, sorry I didn’t give this feedback in a more timely fashion. I blame not being awake during the presentation. But I should have given feedback sooner. Sorry about that.
KG: But I really do think having two methods here isn’t the right approach. These methods are really, genuinely, almost identical. For almost all users, there will be literally no distinction. I think it will be quite unusual to be providing a nonempty iterator that is smaller than the window size; that’s just a pretty unusual case. So for almost all users, there’s literally no difference between the methods, but you still have to choose between them. And there’s not really anything to tell you how to choose between them. If one code base ends up choosing one and another uses the other, there is no one to say that one of those is right and the other one is wrong. They both work exactly the same. I think that is a really bad situation to find ourselves in.

KG: So I strongly prefer to have a default behavior and to specify the other behavior with a string parameter. I’m not super bothered about which behavior is the default, but I think an extra parameter here is just a much better experience for developers than two methods. Especially two methods with such similar names; but even with other names, the fact that almost all use cases don’t care which one they use means that we shouldn’t have both.

JRL: Steve has a reply.

SHS: Yeah, strong plus one to what KG is saying. It reminds me a lot of how `substring` and `substr` have a relevant difference, but I could never figure out which is which. There is nothing about `windows` or `sliding` that will serve as a mnemonic to cue anybody in on which is which. I’m definitely in favor of one method with a parameter.

JRL: And NRO?

NRO: Yeah, the same as what SHS just said. Both methods do something you would describe as taking a sliding window over the iterator, and we just picked two words for it. Yes, there is `sliding` in Scala, but Scala only has `sliding` and not `windows`. So I don’t care whether we do one method with a string option or two methods, but if we do two methods, at least one of them should have a name that tells me how it is different from the other. I gave some suggestions in the issue; for example, though it is maybe skewed toward my preference, something like `windows` and `windowsOrShorter`, where we say, okay, `windows` is the one that only gives me windows of exactly the size I picked, and `windowsOrShorter` can, in this edge case, give shorter ones. I gave a bunch of suggestions, but I think this one was the best. It is fine if it is a long name, because people will start typing `windows` and autocomplete will show both, so they can think: okay, do I need the one that is allowed to give me the shorter thing or not?

JRL: We had another reply. I’m sorry, not a reply, a new topic from WH, but you deleted yourself.

WH: Yeah, I already figured out the answer. I had wondered why you’d want to emit nothing if you have some input, but not enough input to fill a window. I convinced myself a use case for that is if you want to do adjacent differences.

JRL: Okay. So, next we have SHS again.

SHS: Yeah, you can make one out of the other, but only in one direction. The natural default to pick is then the one that emits something; at that point, maybe you don’t even need the other built in, since you can have a small userland helper that filters out the undersized window. Just throwing that out there.

NRO: Yeah, sorry, we talked about this last time already.
But if there is only one behavior built in, and the built-in does the wrong thing for my case, I’m likely not to notice it, because it is very much an edge case, and then it will just fail in production because I forgot to think about that case. It is good to have both built in so I actually have to think about which one to use. And again, as in the issue, it is good to have both of them start with the same word, so that when I start typing I see both of them and I am prompted to think about it.

MF: Yeah, that’s a good point.

JRL: Okay. Now, actually, JHD.

JHD: Yeah. I mean, NRO’s points are solid. I still do prefer a parameter here. I don’t think that either of these methods is so intuitive that people are going to be trying to use it without looking up some documentation. So that documentation will either talk about both methods or talk about both parameter values. And autocomplete will show two methods or it will show the parameter. So I don’t think it makes a difference, really, which form it is. And given that I prefer it being one method with a parameter, if we do that, I would like to see it in an options bag. I would like to see us use options bags more generally, so we don’t have to keep tacking on arguments and worrying about, you know, web compat and things like that.

JRL: NRO has a reply.

NRO: Yeah. I agree autocomplete would still be good if it were a single method, since editors usually show the list of expected parameters.

ZTZ (on queue): people are going to get these from Copilot; two methods will make the alternative harder to find (EOM).

JRL: MF?

MF: Yeah. So I was saying earlier that I’m pretty neutral on either of the solutions, as long as we make the things we wanted to do possible and we’re solving the problems, except for the fact that I don’t think we should have an options bag for one option. At least not when there’s one option and there are no plans to add more options in the future. The reason JHD gave for the options bag is future-proofing, but as long as we’re not doing coercion (it is a string, and we don’t coerce to string anymore), we can still future-proof it: we can, in the future, accept either an options bag or a string in this position.

JHD: As a reply to that, we certainly can. That is a typical approach in userland to correct the mistake of putting in a non-options-bag parameter and then realizing, oops, we should have used an options bag in the first place. That is meant for backwards compatibility; it is not good API design or a preferred outcome. You’re correct that that path would remain open to us, but I don’t think it would be a good outcome. Even in a world where we have two option values, I don’t think that would actually be a better API than having an options bag with one option, even if we never add a second option.

JRL: SFC has a reply on the same lines.

SFC: In Temporal we have examples of a function with a single-option options bag. I’m pretty sure I can find more if I look through 402. I don’t see anything necessarily wrong with that. An options bag is basically named arguments that are not, like, the core argument that’s required. So I don’t see anything wrong with that design.

JRL: And NRO?

NRO: Yeah. I think there’s a difference between APIs that are similar to other APIs that already take a large options bag, where for the sake of similarity we have a single option, and APIs where there is nothing similar to this method taking this type of option. It’s, like, a trade-off, like—well, okay.
Exactly what SHS is going to say, so SHS, you can say it. But maybe we can have a temperature check here.

SHS: Yeah, the fact that we have a reasonable way out even if we make the wrong choice here really changes the trade-off; the consideration is how bad it is and how likely it is. I think it is unlikely, though not impossible, that we will add a second option. If we do, we have a way out. It is not pretty, but the trade-offs, to me, say we should probably stick with a string parameter for now.

JRL: And Kevin?

KG: Yeah. I agree that I prefer not to have an options bag unless we actually think there is going to be more than one option here.

JRL: Okay. NRO called for a temperature check. But given the responses here, do we still think that is necessary?

MF: There is a bit of a meta discussion there that I don’t think we need to resolve right here, especially since we have not even decided between the multiple-methods-with-different-names route and the single-method route. I think it’s more important during this time-boxed agenda item to figure that out first.

JRL: Okay. Yep, then NRO, you beat me to it. Let’s get consensus on changing to one method or sticking with two methods first. Is there any opposition to using a single method with some form of parameter?

KG: I support.

NRO: Would anybody here other than me voice support for having the two methods? I personally would be happy with just a single method.

WH: I prefer two methods. But it is a very weak preference. I’m fine with either way.

JRL: Okay. Are there any other opinions?

SFC: Yeah, I put myself on the queue. If the problem we’re trying to solve is autocomplete and such, you could have the two methods share a prefix: having something like `windows` and then `windowsSliding` in camel case would seem more descriptive of what’s actually going on there, without introducing a new namespace. But overall, I don’t have that strong of an opinion.

JRL: Okay. So we have weak support, I think, in both directions: a single method with a parameter, and keeping two methods. Sorry, I know I’m doing the moderating right now, but I actually prefer having a single method, partly because naming things is really difficult: if we have two methods, we have to decide which one is going to do what. I would like a single method with a parameter.

KG: Support for a single parameter sounded a lot stronger to me. I feel pretty strongly there should not be two methods. We had some weak support for two methods, but several people strongly prefer one. We could do a temperature check, I suppose, but unless someone strongly prefers two methods, it sounded to me like one was the preference.

JRL: Shane?

SFC: A point, I think it was NRO that raised it, and I didn’t really hear anyone else talk about it, but it seemed like a reasonable point: the idea of making developers think about how to handle the edge case of an input that is too short. That seems useful to raise. One way to evaluate it would be to look at the use cases for one behavior versus the other, which it sounds like MF did some research on: for each use case, what would happen if the developer chose the wrong option? How bad would the outcome be?

MF: We could go back through the use cases that we covered—I don’t have them handy right now—and do that evaluation.

JRL: So we have just under 10 minutes left.
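
For reference, the call shapes under debate, sketched with hypothetical names (the option name and string values below are invented for illustration):

```js
iterator.windows(4);                         // two methods: full windows only...
iterator.sliding(4);                         // ...or possibly one undersized window

iterator.windows(4, "short");                // one method with a string parameter

iterator.windows(4, { underfull: "short" }); // one method with an options bag
```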
JRL: We can continue trying to pick a name, which might not be successful. We can decide to stick with two methods, in which case we need to bikeshed the naming of the two methods, or decide on a single method and bikeshed whether we are going to use an options bag or a string parameter. Which one do we want to go with here?

MF: I think the responsibility lies with me now as champion. I heard the feedback from the committee. It sounds like nobody is very strongly opposed in either direction, and it sounds like a slight majority of voices have spoken for the single method with a parameter, so I would ask the committee to approve that path.

JRL: Okay. Are there any objections to doing a single method with a parameter?

MF: We do have an item from NRO that might be relevant. Do we want to cover that first?

NRO: If we cannot settle the bikeshed between the options bag and the string, well, the two-methods solution doesn’t require that bikeshedding.

JRL: Yeah.

(?): Then we’re going to bikeshed the names.

JHD: I have a reply about that. Two things. One is, we probably shouldn’t be designing APIs based on how little discussion we want to have. But separately, it was suggested in Matrix by JRL, actually, that if we make the parameter required, that forces developers to think about what they want. That pushes it toward being a string, and not an options bag, in my opinion, for ergonomics; options bags usually work better for optional arguments. And then there really isn’t much bikeshedding left: there are just two enum-like string values. What are people’s thoughts about the ergonomics of that, versus making it optional and picking one default? The ergonomics are better if you want the default behavior, but it could be more confusing if you don’t. What are people’s thoughts there?

JRL: WH has a reply.

WH: This creates more bikeshedding about what the values of the parameter are. My preference is for it to remain true or false—make it a Boolean.

JHD: Sure, it seems like we are going to bikeshed regardless. It is more a question of whether we can address everyone else’s concerns, so that “oh no, we have to bikeshed” is the only thing left.

WH: I think we should just come up with two or three complete proposals and not try to do it one step at a time.

JRL: Okay.

WH: If we do two methods, one viable proposal might be to call them `windows` and `windowsWithRemainder`.

JRL: Okay. MF, do you still want to continue?

MF: Yeah. I think I’m still asking for approval from the committee to go forward with a single method with an option. And if we have time, I would like to see if there’s opposition to doing a string parameter. Basically exactly what KG has in pull request #24.

JRL: Okay. So, is there objection to using a single method with a string parameter?

WH: Depends on what the string parameter values are.

MF: Exactly what Kevin has in #24. So `"discard"` or `"truncate"` for the two behaviors.

MM: Optional or required parameter?

MF: In the pull request it’s optional. KG, can you speak to the default?

KG: I semi-arbitrarily made the default `"discard"`, but having heard SHS’s point that you can build discard out of truncate, but not truncate out of discard, my inclination would be to make `"truncate"` the default, I guess. I don’t have a strong opinion there.

WH: `"truncate"` implies that it returns nothing if you give it a partial window.
`"discard"` implies that it returns nothing if you give it a partial window. They seem to mean the same thing.

KG: Okay. It truncates the window.

WH: This is too confusing, because I would think that `"truncate"` truncates the input.

KG: Neither of them truncates the input.

WH: I meant output. Yeah, this is too confusing.

JRL: Okay. So we have a point of order: there are three minutes remaining now, so we do need to move on at some point. There are replies now for, I’m sorry, next speaker: MAH.

MAH: Yeah, if it is a required string parameter, I would rather have two methods. You basically write it in one place or the other, and I would rather write it in a method name than in a string.

JRL: Okay. And Jordan.

JHD: Yeah, so looking at that PR, there are three choices, right? There is `"discard"`, what was it? `"discard"`, `"truncate"`, or `"throw"`. So why don’t we make the default `"throw"`? And then it is optional unless you have too small of a window.

MF: That was not included in the pull request. It was mentioned in the pull request as something we can add in the future. We discussed `"throw"` at the previous meeting.

JHD: Oh, I’m looking at #25; you’re talking about #24. My mistake.

MF: Yeah, we decided not to include `"throw"`. I made the argument that it is an antipattern and we should not include it.

JHD: Okay.

JRL: Okay. So now we have two minutes remaining. I think MF is still leaning towards a single method with a string parameter, whose values will be bikeshedded, because we’re a committee. Is that still the preference?

MF: Yeah. Given that we’re at the end of the time box, I will do the proposal with, I guess, exactly what KG’s pull request is right now. I know there are some that had issues with that exact state of things. Please participate in discussion on the issue tracker. Hopefully we can resolve all of the issues before the next meeting, and I can come back with something somewhat like what KG is proposing.

KG: WH, my original suggestions for names were `"empty"` or `"short"`, describing the output that you get: you don’t get one, or you get a short one. I’m open to suggestions for names. I’m not strongly attached to any particular names, but if you have names that you like, I would take suggestions. I did want to say, I really don’t like the `remainder` name that you suggested, because that sounds to me like it is talking about the end of the input, even in the cases where the input is larger than the window size. Here we are really only dealing with inputs that are shorter than the window size and what we get in that case. So I don’t think of it as a remainder at all.

WH: We should be able to pick a good name pair. Given what you said, I’m partial to `"discard"` and `"short"`.

JRL: Okay. MF, did you want to quickly, I think you already recapped, but do you mind doing that one more time?

### Speaker's Summary of Key Points

MF: Yeah. We discussed our options for providing the two similar, but different, behaviors for how to handle large window sizes and small underlying iterators.

* recapped problem space
* as discussed last plenary, added a new windows-like method that:
  * yields an undersized window when the underlying iterator cannot fill the first window
  * except yields nothing on an empty underlying iterator
  * is named `sliding` after Scala's equivalent method
  * is otherwise the same as `Iterator.prototype.windows`
* spec text approved by 3 assigned reviewers
* recent feedback from KG and NRO with alternative API designs
  * open to alternative designs but would have appreciated earlier feedback
  * also fine with going to 2.7 as-is

### Conclusion

MF: It sounds like we are leaning as a committee towards a single method with an option of some sort. Undecided yet on what that option is. Hopefully we can resolve this via the issue tracker in the next two months. I expect to see lots and lots of activity on the issue tracker in the near future with everybody’s concerns. And then I can update the proposal to include our resolutions and bring it back for stage 2.7 at the next meeting.

## [Keep trailing zeros in `Intl.NumberFormat` and `Intl.PluralRules`](https://github.com/tc39/proposal-intl-keep-trailing-zeros) for Stage 2 or 2.7

Presenter: Eemeli Aro (EAO)

* [proposal](https://github.com/tc39/proposal-intl-keep-trailing-zeros)
* [slides](https://docs.google.com/presentation/d/1hKJFrDfiGeqPWm51fQFQb4M4CeYm3ultB7Opef1BVuE/edit?usp=sharing)

EAO: Presenting this hopefully for stage two, possibly 2.7 if we reach that far. So first, to recap what we’re talking about: it is really a bug fix in how `Intl.NumberFormat` and `Intl.PluralRules` treat digit string values when given them. Right now, when we construct a `NumberFormat` and ask it to format the string `"1.0"`, it outputs by default just the number one. And similarly, when we ask a plural rules instance to select on the string `"1.0"`, the outcome is the category `"one"`. So we are discarding the trailing zeros here. The whole idea is to retain that information instead of discarding it, and to do so, we change some of the internals of `Intl.NumberFormat` and `Intl.PluralRules` so that this happens.

EAO: It is important to note this is very limited in scope in what it would affect. When you’re trying to format or select on a Number, or when you’re trying to format a BigInt, everything would stay the same; and when you’re using options that affect the display of fraction digits or significant digits, all of that would work just as it does right now. An example of what we’re hoping to achieve with this proposal: if you construct a `NumberFormat` with a minimum of one fraction digit, and you format the digit string `"1"` with it, you would get `"1.0"`. But if you were to ask it to format a digit string with more digits, even if those are trailing zeros, like `"1.00"`, you would have those included, up to the maximum fraction digits, which by default is three.

EAO: This was presented at the last meeting and accepted for stage one. Now I’m going to continue here and present the work done since then, which was effectively to write the spec text, and hopefully get acceptance for having it approved.

EAO: So as you see here, effectively from the diff, there is no change to the external interfaces in any way; all of the changes are internal. The change starts from when we are parsing and understanding the meaning of a string numeric literal.
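
A sketch of the observable change being proposed, per the examples above (the “proposed” results assume this proposal; they are not what engines do today):

```js
// Today, trailing zeros in digit strings are discarded:
new Intl.NumberFormat("en").format("1.0"); // "1"
new Intl.PluralRules("en").select("1.0");  // "one"

// Proposed: the precision of the source string is retained:
new Intl.NumberFormat("en").format("1.0"); // "1.0"
new Intl.PluralRules("en").select("1.0");  // "other" (a visible fraction digit
                                           // changes the plural category in English)

// Numbers and BigInts are unaffected either way:
new Intl.NumberFormat("en").format(1.0);   // "1"
```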
EAO: This is how, in the 262 spec, we define what a numeric literal looks like when we’re parsing input—sorry, parsing syntax, even. We change that definition to keep a count of the number of decimal digits that were there in the source text. This is then used in the construction of the Intl mathematical value, which is currently an extension of a mathematical value as defined in 262. An Intl mathematical value can currently either be a finite mathematical value, or one of the specific identifiers for positive or negative infinity, not-a-number, or negative zero. Going forward, what we would be doing here is changing that definition, which is internal to `Intl.NumberFormat` effectively, to be a record that holds the mathematical value or one of those specific flag values, and then a count of the number of significant digits in the value, but only if it was parsed from a string; otherwise it would be zero.

EAO: This then ends up being passed through a number of functions, and it ends up having an impact when we are actually generating the string based on which the number gets formatted or selected. This happens in `ToRawPrecision`, which is what we use when we are effectively working with significant digits, and `ToRawFixed`, which is what we use when we are dealing with fraction digits. Both of these work in the same sort of way: first we format a number with the maximum fraction digits or maximum significant digits number of digits in the whole string.

EAO: Then, if the last digit is zero, we keep removing it until we hit a digit that is not zero, or we hit the minimum precision or the minimum fraction digits length. Here, we would effectively stop the cutting earlier, at the precision that we identified in the input value. This does what we want.

EAO: So in addition to the spec text, I’ve also implemented effectively a polyfill. I have not published it on npm, but it is an informal FormatJS-based polyfill, just to validate that this spec change works as intended.

EAO: The only really open question here is that it’s plausible that someone could come up with a reason why we would want to keep the current behavior, where, when we’re formatting a string digit value, we forget the trailing zeros. If somebody comes up with a reason to make it possible to keep the current behavior, then the obvious place to make that change would be to add a third option to the existing `trailingZeroDisplay` option, where `"auto"` is effectively the current behavior, which this proposal would be changing, and a new option value `"stripFromString"` would provide the current auto behavior for those who want it.

EAO: This would also be the option to touch in the case that it turns out the change overall here is not web compatible, which would be very, very surprising. If that’s the case, then `trailingZeroDisplay` would also be the place where we would want to add a third, different option to trigger the behavior that we presented here. My expectation and hope is that we don’t need to touch `trailingZeroDisplay` at all, but this is what we would touch in case we can’t apply the bug fix pretty much directly.

EAO: So with that as, effectively, the state of where we are, I’m coming to you here to ask: can I have stage two? Or are there any questions or concerns around this that ought to be addressed? I’m happy to go to the queue.
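
A rough JavaScript sketch of the `ToRawFixed` trimming just described (illustrative only, not spec text; the function and parameter names are invented):

```js
// Format to maximumFractionDigits, then strip trailing zeros, stopping at
// whichever is larger: minimumFractionDigits, or the number of fraction
// digits present in the source string (capped at the maximum).
function rawFixedSketch(value, minFraction, maxFraction, sourceFractionDigits) {
  let [int, frac = ""] = value.toFixed(maxFraction).split(".");
  const keep = Math.max(minFraction, Math.min(sourceFractionDigits, maxFraction));
  while (frac.length > keep && frac.endsWith("0")) frac = frac.slice(0, -1);
  return frac ? `${int}.${frac}` : int;
}

rawFixedSketch(1, 0, 3, 0); // "1"    (Number input: no source precision to keep)
rawFixedSketch(1, 0, 3, 2); // "1.00" (parsed from "1.00": trailing zeros retained)
```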
EAO: I don’t see anything there at the moment.

WH: I see this has the same issue in *ToIntlMathematicalValue* as the next proposal does. So I just want to flag it, and we can discuss it in the next presentation.

EAO: You want to flag it, but discuss it only in the Amount presentation?

WH: Yes.

EAO: Okay. RGN?

RGN: I finished my review of the proposal as it stands earlier today. I think it is going to be good to keep the precision. I support stage 2. And I’m not particularly concerned about maintaining the current behavior for rounding, but if it does prove necessary for compatibility, then I think the plan you outlined is a good one.

EAO: Cool. Shane?

SFC: Yeah, just bringing another plus one for advancing this proposal. I think it’s an important bug fix, and I think it builds on the foundation that we laid when we accepted strings in `NumberFormat.prototype.format` to retain extra precision. So I support.

EAO: Cool.

JRL: We also have support from DLM for stage two. And NRO, I think, also for stage two.

NRO: Yep. Actually, I would support 2.7. But, yes.

EAO: I have to get two first. So—

JRL: Yeah. So let’s call for consensus on stage two. We have gotten explicit support from SFC, NRO, RGN, and DLM. Does anyone object to stage two? Nope. Perfect. All right. Do you want to go for 2.7?

EAO: Before that I need reviewers for 2.7.

JRL: Okay. Perfect, yes. RGN, you said you had already reviewed?

RGN: Yeah. So I’m willing to continue in that role.

SFC: I’m also willing to continue in that role. I’ve been giving feedback since (unintelligible)

JRL: So we have RGN, SFC, and NRO on the queue.

NRO: If RGN and SFC have already reviewed, I don’t need to review. But maybe I can review this tomorrow morning; if we have enough reviewers, I’m happy to not do it, given I have not looked at the spec text yet.

JRL: I should clarify then: RGN and SFC, have you already reviewed the current spec text?

RGN: I have, yes.

SFC: I reviewed earlier drafts of it, but I haven’t reviewed the current draft completely.

JRL: Okay. Do we want to add a time-boxed item for tomorrow so we can give time for the official reviewers to stamp it?

EAO: If the ask is on me, yes, I would like that. But—yeah. There is a queue item from CDA that might be relevant here.

CDA: So to advance to 2.7, you need the assigned reviewers to sign off on the current spec text, and the editor group to sign off on the current spec text. So do you want to allow some time for that and come back for 2.7?

EAO: I’m happy to go with whatever makes this work. If that means I get 2.7 at the next meeting, that’s not horrible. But if this gets 2.7 at this meeting, cool.

CDA: It is not unprecedented, but we would need those reviews completed, and then you would have to come back. To do it this plenary, those reviews would need to be completed before the end of plenary, and then you’d need to come back and formally request 2.7. I see SFC’s reply: RGN is an editor. Yes, absolutely. But the requirement is that the editor group has signed off. So, or am I mistaken, is someone from the editor group missing? We have RGN, USA—

SFC: I believe RGN and USA are the extent of the editor group, because BAN is on leave.

RGN: That is correct.

CDA: Has USA reviewed this?

SFC: Typically with 402 proposals we have been okay with a single editor review in the past.
But if that’s wrong, then, like—

CDA: I’m certainly not trying to challenge the precedent if that is what you folks have been doing. That’s fine.

RGN: We can consider this as approved by the ECMA-402 editor group.

JRL: Okay. Perfect. So all we’re missing here is a second reviewer, a non-editor reviewer, to approve: SFC, who has volunteered, and possibly NRO. So, at the earliest, if SFC is willing to do it tonight or tomorrow, we can set a time-boxed item later in the meeting and ask for formal 2.7 at that point.

NRO: WH has a concrete blocker here.

JRL: Stage three next meeting is a potential. And WH.

SFC: I had a question about stage three next meeting. If we do stage two now and we write the tests, then can we, process-wise, get stage three next meeting, even if we don’t get stage 2.7 today? Like, process-wise, can we skip 2.7 if we have the tests written?

JRL: As long as tests are written satisfactorily for the test262 maintainers, I think we can go from two to three.

CDA: Right; as long as you have met all of the entrance criteria and there is no blocking concern from folks, there is nothing to prevent you from doing that.

JRL: WH has a concern here.

WH: Yes, I have a concern here with *ToIntlMathematicalValue*, which has undesirable consequences. I’m fine with this proposal going to stage 2, but not stage 2.7 in its current form.

JRL: Okay. Given that we can’t go for 2.7 right now anyway, and we’re going to discuss the next proposal as soon as this is done, we might be able to clear that up and still come back by the end of this meeting.

EAO: WH, can you clarify what your concern with *ToIntlMathematicalValue* is with respect to the changes made by this proposal?

WH: We’ll discuss this in the next presentation. I don’t want to do this twice.

EAO: Okay.

JRL: Shane has a reply.

SFC: I mean, I feel like now is a perfectly fine time to discuss it. I read WH’s feedback on the other proposal, and I think I know what he is suggesting. My read is that there’s not a problem, but maybe we should clarify that. So we can talk about it now or later, but now seems like a perfectly good time to do it.

WH: There is a problem, but right now we’re missing the context which is needed to discuss it.

JRL: Okay. Then I think WH is objecting to 2.7, and we need to move on. We have formal support for stage 2; let’s go into the next proposal, the Amount proposal, and discuss WH’s concern there.

EAO: Okay. One thing for the keep trailing zeros topic: I would like to ask for, I don’t know, a five or ten minute continuation for tomorrow or whenever, for the potential advancement to 2.7, presuming we can resolve the issues identified with *ToIntlMathematicalValue* here.

JRL: Okay. I think we have a 15-minute overflow tomorrow morning, and day four, which is basically wide open.

EAO: Cool. Let’s see.

### Speaker's Summary of Key Points

The proposal is a bugfix for `Intl.NumberFormat` and `Intl.PluralRules` which allows trailing zeros in digit strings to be retained. It does not change any public APIs, only the internal behaviour of the number formatter for that specific case. If the current buggy behaviour is shown to have some utility, or if the change proves to be web-incompatible, an option value will be added to the existing `trailingZeroDisplay` option to address the issue.

WH raised an issue within the *ToIntlMathematicalValue* abstract operation, which was identified as being outside the scope of this proposal, and which will be addressed separately.

### Conclusion

The proposal and its specification were presented, and the proposal was accepted for Stage 2. RGN and SFC volunteered to act as its Stage 2.7 reviewers, and completed their reviews during the meeting. In the continuation on Day Three, the proposal was accepted for Stage 2.7.

## [~~Measure~~Amount](https://github.com/tc39/proposal-measure) for Stage 2

Presenter: Eemeli Aro (EAO)

* [proposal](https://github.com/tc39/proposal-measure)
* [slides](https://docs.google.com/presentation/d/1my6X1ODDckzJmtcWcFI9hRF_I06Z4RQwrq81lbo8wPM/edit?usp=sharing)

EAO: So, yeah. Hi, again, still. I continue to be EAO. JMN from Igalia might be jumping in at some point here, as we put this together jointly. The idea with what we are presenting here is to hopefully propose for stage two advancement a reduced set of what was previously accepted for stage one as the “Measure” proposal. As a part of this, we would explicitly like to rename this the “Amount” proposal, given that is the name of the thing we are looking to add. It would be very confusing to keep the Measure name for the proposal but Amount for the object, given that so much of the discussion is about what we call this thing and what we call its related aspects. So, yes.

EAO: Originally, when this got stage one in October last year, in addition to what we are still including here, the motivation and the use cases included compound units, such as introducing explicit support for foot-and-inch and other similar formatted units that are formed from more than one indicated part. It also included unit conversion within measurement systems, like meters to kilometers, but also across them where that is supported, going from, for example, kilometers to miles and so on. Rather than advancing these as part of Amount, we would prefer to leave them to the smart units proposal that has been around for a while, and potentially an upcoming TG2 or Intl proposal for the expanded unit support in particular.

EAO: Then at the last TC39 meeting we discussed this topic quite a bit, including how to formulate Amount and the value that it represents. The conversation has continued from there in the recurring TC39 JavaScript numerics calls as well as in the Matrix channel we’re using, and throughout those conversations we’ve ended up with what we are proposing here as the way to go forward: effectively, we have an Amount which holds a mathematical value that can be tagged with a unit identifier, as well as a precision indicator in terms of fraction digits or significant digits. This we believe is an important thing to include in the language, as it brings in new capabilities: representing, first of all, a number with a precision, which is currently not really possible, in particular when you have trailing zeros. It also brings in the possibility of representing numerical values that go beyond the precision capabilities of Number, or potentially even Decimal, and BigInt as well, because BigInt is restricted to integer values.

EAO: It is important for this to be defined within the spec in particular so that we can make sure it works really well with the existing `Intl.NumberFormat` and `Intl.PluralRules` capabilities that we have, because it becomes really useful and important to be able to pass an Amount as the thing we are formatting, or the thing that we are selecting the plural case of. And that requires the precision and the unit to be known.
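
A sketch of that intended Intl integration (hypothetical: it assumes the Amount constructor as presented, plus the proposed relaxation that lets a unit-style formatter take its unit from the value being formatted):

```js
// The Amount carries both the precision and the unit along with the value:
const weight = new Amount("1.20", { unit: "kilogram" });

new Intl.NumberFormat("en", { style: "unit" }).format(weight);
// "1.20 kg" (illustrative output: two fraction digits retained from the source)

new Intl.PluralRules("en").select(weight);
// "other" (the retained precision matters for plural selection)
```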

EAO: But then, the use cases here do extend beyond the Intl spec, in that we enable from this the representation of mathematical values, numerical values that come from integrations with other systems, and use within JavaScript libraries that are interested in precision beyond what Number, for instance, supports.

EAO: There’s a bunch of examples of this existing in other languages or systems or standard libraries of other languages, and there’s a whole bunch of libraries that end up bringing in support for amounts or quantities or units or measures and so on. There is also [a proposal](https://github.com/mozilla/explainers/blob/main/amount.md) for adding an `<amount>` element to HTML. And then, one of the most relevant pieces of prior art that is currently available in browsers is `CSSNumericValue`. You can even run all of this code that you see on the screen (this doesn’t work in Firefox, but it does work in Safari and Chrome), where `CSSNumericValue.parse()` will parse a string as if it were a CSS length or other unit indicator. There are a couple of ways of constructing these, including the CSS factory functions like `CSS.kHz()` for kilohertz, or `CSS.px()` for pixels, or `CSS.cm()` for centimeters. All of the CSS units are supported for constructing these CSS numeric values, and they also support multiplication, addition, subtraction, and division; also conversion, in fact, so on the last line you can go from pixels to inches and get the value out. This does differ in quite a few ways from what we are proposing for Amount. For instance, the value here is always a Number, and the units are the CSS-supported units; it’s an explicit set.

EAO: So what we are proposing here is an Amount, an instance of which can be constructed out of a Number or a BigInt or a string, possibly given a bag of options in the constructor. Then, because the Amount is immutable, to create a new value there is a utility method `.with()` that takes in the same options bag; with that you can create a new Amount. And it provides accessors for the unit identifier, if any was given.

EAO: Then, because the inner value in an Amount is a mathematical value that doesn’t exist as a JavaScript type, to get at that value we have `toBigInt()`, `toNumber()`, and `toString()` as the ways of getting it out. `.toLocaleString()` works the same way as `.toLocaleString()` on a JavaScript Number. And then there is `Symbol.toPrimitive`; I will show later some of the details of how we envision that working.

EAO: Going back to the constructor and its bag of options: it allows you to define the unit, and optionally one of fraction digits or significant digits; if you give both, it is an error. With one of those you can define not just what precision the value has as given, but effectively impose a precision externally. Because this can require rounding to happen, we also allow for a rounding mode option here.

EAO: The option values for rounding mode are taken from the ECMA-402 spec, which already supports rounding modes like this in `NumberFormat`. Also, notice that when stringifying, there’s a question of what you do: how do you stringify an Amount that does have a unit, when we are not using `toLocaleString` to produce something for human consumption, but potentially for machine consumption? Here the `displayUnit` option, which defaults to `'auto'`, controls that. I’ll show you effectively how that works.
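
A reconstruction of the kind of slide example walked through next (hypothetical: the API is at an early stage, and the serialization format and option names, such as `unitDisplay`, are still in flux):

```js
// Constructed from a Number; internally the value is exactly 123.456.
const a = new Amount(123.456, { unit: "kilogram", fractionDigits: 4 });

a.toString();                         // "123.4560 kilogram" (illustrative form)
a.toString({ unitDisplay: "never" }); // "123.4560" (just the numerical value)

// Amounts are immutable; .with() derives a new one.
const b = a.with({ fractionDigits: 2 });
b.toNumber();                         // 123.46 (rounded to the new precision)
```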

EAO: So when calling the code here, we have an example: `a` is an Amount constructed from a number value that internally becomes—we do the same thing with Numbers here as is done in `Intl.NumberFormat`, effectively. So for the value `123.456`, we first serialize it to a string, and then we parse the decimal numerical value out of that. So we end up with the value internally being 123.456 exactly, with four fraction digits, which is why it shows up that way on the line where we use `a.toString()`, and the kilogram unit is included. But if you want to get just the numerical value out of it, with `{unitDisplay: 'never'}` the unit is left out.

EAO: And with the `.with()` method, you can take an existing Amount, `a` in this case, and create a new Amount with an updated options bag, effectively. Here we are reducing the fraction digits count from four to two, and therefore its `toString` and `toNumber` values match that precision. For internationalization, these end up relying on the capabilities of `Intl.NumberFormat` and `Intl.PluralRules`, where `.toLocaleString()` ends up just kind of working. But we are also, in this proposal, looking to change how `Intl.NumberFormat` in particular behaves when it’s constructed with `{style: 'currency'}` or `{style: 'unit'}`.

EAO: Right now, that line constructing a new `NumberFormat` would throw, because with `{style: 'currency'}` we require a currency value to be set in the constructor options, and for `{style: 'unit'}` we of course require the unit option to be given. But with this proposal we would not throw at that point; instead we would throw when formatting, if the thing to be formatted with the currency-style formatter doesn’t provide a currency indicator, for instance, at some point.

EAO: And as mentioned, the limits of Amount are currently envisioned the same kind of way as BigInt being arbitrarily large: you could construct an `Amount` from, the example here is, `1.0E999`. We can represent the stringified form of this pretty much directly in its canonical form. But when converting it to a Number, we get infinity; and similarly, when we go to `toLocaleString`, we end up with the infinity symbol, because we go beyond `Number.MAX_VALUE`.

EAO: And yeah, this is about the hinting. Effectively, if you have an Amount constructed without a unit, then stringifying it and casting it to a number really just work as you would expect. This does have the effect that you can add two `Amount`s together, and these add up as Numbers and give you a Number result. There is a detail: I think we need to be careful with the `BigInt` constructor, because that one currently casts its input to a Number before it reads it as a BigInt, so we would need to do a little bit of work there.

EAO: But note that when the Amount does have a unit, then stringifying it works exactly as `.toString()` does, while calling `Number(d)` would end up with a `RangeError`, just to be sure that you don’t accidentally do things that are effectively mistakes, so you don’t add one meter to two kilograms, for instance. This does, yes, prevent the ability to add one meter to two meters, which might be nice; but because that requires operator overloading, it is not envisioned as being supported at all.

EAO: So effectively, an Amount represents finite mathematical values, and it provides no arithmetic.
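
A sketch of the coercion behavior just described (hypothetical; the `Symbol.toPrimitive` details are still being designed):

```js
const plain = new Amount("1.5");
plain + plain; // 3: unit-less Amounts coerce to Number, so this is Number
               // addition producing a plain Number, not an Amount

const m = new Amount("1.5", { unit: "meter" });
`${m}`;        // string coercion works, with the same output as m.toString()
Number(m);     // RangeError: an Amount with a unit refuses numeric coercion,
               // so a unit can never be silently dropped in arithmetic
```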
EAO: And we have a few questions that we are primarily here to pose to you, as part of asking whether you agree that this is ready for stage two advancement. We are, of course, open to other questions as well, particularly if they pertain to advancing this to stage two. I’ll go through the questions, the three of them that we’ve identified, and then I think we would be happy to go to the queue and see where the conversation takes us.

EAO: So the first question, with the code here, is about when we, for example, construct an Amount from a value that has a higher precision than what we say the Amount ought to have. Here we have six significant digits of precision in the value that we’re constructing the Amount from, but we’re saying that this Amount ought to have two significant digits. Then calling `toString` or `toNumber` on it would end up resulting in `'1.2'`. But if, with the `.with()` method, we then go and define a higher count of significant digits, the question is: do we build that out of the 1.2 value, or do we build it out of the 1.23456 value that we were originally given? Effectively the question is, on the last line here, do we end up with 1.200 or 1.235? Our preference would be to go with the former, but we’re very interested in discussing reasons to choose otherwise here.

EAO: The next question is that we want to check whether there would be any reason to consider currency to be special. Effectively, the unit identifiers that we use in 402 are all lowercase, whereas the currency identifiers we use are all uppercase and three letters. So it’s entirely plausible for us to differentiate, when formatting with `Intl.NumberFormat`, whether an amount with a unit is an amount with a currency that we support, or whether it is something else; and if it is something else, then we would be using the unit formatter and make that work. And also, there are the option names and the accessors for how to work with this. The spec text on this: JMN, did we merge that PR, or is it still open? I can’t remember.

JMN: I believe that’s still open. But I think the consensus among us is that it should be merged.

EAO: So there is an open PR applying this change. Previously we considered currency to be special, but in our later discussions we identified that currency ought to be counted as a unit like everything else. So we’re looking to drop that distinction. But if there is a particular need to choose differently, we would be interested to hear about it.

EAO: And then the third question that we’ve identified is that, as currently envisioned, an `Amount` would have arbitrary precision, the same way effectively a BigInt has arbitrary precision, but this requires that an implementation decides where the limit effectively is. We could also impose a limit explicitly in the spec for what the upper limit of the supported mathematical value is. If we do that, we’ll need to define how that works and go from there.

EAO: But yeah. That’s about it. It looks like there is quite a bit of a queue. Happy to start going through that.

JRL: Okay. We have quite a queue, and I’m not sure where the items fit in the proposal. First up is KG.

KG: Yeah. So this seems like a well-thought-through proposal. I appreciated the presentation. I’m still not totally clear on why it is a proposal for a language feature and not, like, a library on npm.
It seems like something that is useful for a small fraction of applications. But there are a lot of libraries that are useful for a small fraction of applications. You mentioned integration with `Intl`, like passing it as an argument to `NumberFormat` and so forth. But you could just pass a plain object to a `NumberFormat` that has an amount and precision and unit strings. So I don’t really see how that explains why something should be in the language. And I didn’t pick up much on what the other justification was. So yeah, I’d like to hear more about why you think it is important that this is specified and not, like, a userland library.

EAO: SFC, do you want to take this one?

SFC: Sure, I can take that one. So let’s first just look at the slide that EAO has here, the internationalization solution. There are a few different approaches we can take to solve this particular problem. One of them is to rely on strings, and the previous proposal we just accepted would give strings more power than they currently have. A second is to rely on what I call the protocol approach, I think the one that KG just described, with an object with getters that obeys a certain shape. And the third one is the Amount object. We can go through the different angles. If we look at the string approach: strings are obviously not type safe. It is not great to have strings as the intermediate form that you’re going to be formatting with. How do we add units to strings? Do we have to parse the units out of the strings? What does this look like and mean? I think strings are not a great solution for the amount problem. Then we can look at a protocol. A protocol does have some merits to it. I think one of the main issues of the protocol is that it doesn’t bring immutability. It is not a solution that gives authors the ability to have this immutable object that they are working with and passing around. Also, other times we have attempted this protocol approach in the JavaScript language, there have been security issues with protocols; you know, we’ve said how we want to avoid having getters that call code and things like this. So there are fewer opportunities to go wrong with an immutable object. A problem with both strings and protocols is that we don’t have methods for setting and querying things like the precision: for example, being able to interact with both significant and fraction digits. These are two very common ways that users have for interacting with the precision of a value, and having an Amount constructor allows us to interact with both of those values, being able to query both of them and convert one from the other. It also allows us to do similar types of operations with units, and it allows us to do serialization, serializing into a string and back, as you can see on EAO’s slide here. Yeah, so definitely the champion group feels that the immutable object approach is the best solution to the internationalization use case. Now, I think a good follow-up question there is: well, if it is just a 402 use case, why isn’t this specified in 402? Why isn’t it just a 402 feature? Intl is already a library; why can’t it be part of the 402 library? The biggest reason is library interop. I think EAO mentioned on an earlier slide about how, you don’t need to go there, but if you want you could.
About how, for example, we’re making this object to empower libraries: we’re not currently proposing a unit conversion library, we’re deferring that to a future proposal. But we can introduce an Amount object and have a JSX widget that uses that same Amount object to be able to, you know, format it on a screen, or to have a unit selector. It is valuable, and it is our job as language designers to introduce that interoperable layer. That’s one.

SFC: The second one is the proposed `<amount>` HTML element, which I think is a curious one. Similar to Temporal, where we’re working on having HTML elements that correspond to the Temporal types, we would like to have JavaScript objects that correspond to an HTML element. I think there is a lot of really exciting design space in interacting with the HTML element. And the third is the prior art, like EAO has here on the screen. In JavaScript, we’re not the first language to introduce this type of concept. My favorite here is F#, where there’s no such thing as just a number: a number always has an annotation on it, right? So basically we’re adding in annotated numbers. So, having an Intl-only Amount definitely begs the question: well, why don’t we make that a 262 Amount? Now, that said, I’m also happy to explore things like namespacing. If you think it is better for this to be in some namespace, like a `Numeric.Amount` or `Intl.Amount`, and that is the discussion we need to have, that is a discussion we can have in stage two. I don’t think the exact namespace for the object is a stage two blocker. I hope that somewhat answers your question.

KG: I appreciate the response. I have a response in turn. So you mentioned a couple of things. One of them, I guess, I just want to start by clarifying. You have the slide where you passed one of these objects into `Intl.NumberFormat`. My assumption is that the way that was going to work is that `Intl.NumberFormat` would read properties of the amount object, not that it would reach into internal slots. I was assuming it would work like basically everything else in the language, where if you pass an object, it will read properties from it. We normally reserve direct manipulation of internal slots for the receiver, not for arguments. So if it is just going to read properties from the argument, then that already works without having a built-in Amount in the language, because you can just pass an object with those properties. Before I go further, I want to clarify what the proposal is for how Amount values passed as arguments would be handled.

EAO: I don’t think we have an exact answer yet to whether the spec as envisioned here for Amount in `Intl.NumberFormat` would be looking specifically at whether the value is an Amount and then doing special things based on that.

KG: Okay. Well, if we do go forward, I will express my preference for that: I think you should just be reading properties, not internal slots. We had a long conversation about this, particularly but not exclusively focusing on membrane transparency, in the context of the set methods proposal. And the outcome of that discussion was basically that things that are operating on arguments should be using the public interface of the arguments, which is to say, reading properties. So even if this does go forward, my hope is that the way this will work is by reading properties. But if that is the case, then you can do this just as well without an Amount-like type existing in the language.
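
A sketch of the plain-object “protocol” KG is describing (the property names are purely illustrative; no such protocol is currently specified anywhere):

```js
// Any library could produce this shape by convention, with no built-in class:
const amount = { value: "1.20", unit: "kilogram", fractionDigits: 2 };

// A NumberFormat that read these properties could format it directly:
// numberFormat.format(amount); // hypothetical: nothing reads this shape today
```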
KG: You just pass what SFC was referring to as a protocol. So—that’s the first thing I wanted to say.

KG: SFC mentioned several other reasons to care about having this in the language, such as having immutable data. I didn’t understand that point, since user code can freeze objects just fine. If you want an immutable value, you can have an immutable value. And then there was the point about interoperability, which I think is the strongest justification. Certainly it is generally the case that interoperability is an important part of language design. But things can be interoperable by convention rather than by the existence of something being standardized in the language. In particular, I think if `Intl.NumberFormat` expects an object with a certain shape, and if the proposed amount picker or `<amount>` element in HTML provided an object of that certain shape, then that would be the de facto standard shape for amount objects, whether or not there is an Amount class in the language. And from my point of view, that would be a perfectly fine state of affairs. I think that would result in code that could interoperate with other code just fine and still get `NumberFormat` and all of that. Assuming we lived in that world—which I acknowledge we don’t currently, but it seems like a fairly straightforward world to reach—I don’t see that much additional value in putting the rest of this library in the language as opposed to in userland.

EAO: One big reason is discoverability. If it is just a protocol supported by `Intl.NumberFormat`, then the usage that it would get would be significantly lower than if an Amount actually existed as a thing that developers could find and use and benefit from.

SFC: Yeah. On the other question about whether we read slots or the public API: we should discuss this further, but I’m pretty sure what we landed on with Temporal is that we check the internal slots of the Temporal object, and the state of the Temporal object does determine what format we take. I guess I had in mind that we would do something similar here, but I think that is a discussion we can have.

EAO: Yes, definitely a discussion we ought to have no matter what.

KG: Yeah, and that doesn’t need to happen for stage two. Anyway, we have a very long queue, so I’ll stop.

EAO: MM.

MM: First of all, I want to agree with KG about reaching into an internal slot on a non-`this` argument; everyone here knows by now that that is a tripwire for me. We have made special exceptions, with detailed reasons why those exceptions are justified and not harmful. You were at TG3, and this issue did not come up. So if you’re stuck on an internal slot being examined in the position of a non-`this` argument, please bring that back to TG3. Okay. So I’ve got several things on the queue, and I’ll take them one by one, in order.

MM: So please return to the slide where you’re showing the full API. Yes. That one. So I think the answer is trivial and as expected, but to make sure we all have common knowledge: can you walk through how you would expect this API to be adjusted once Decimal is moving forward?

EAO: We would, in the constructor, accept a Decimal as a value, and we would add a `toDecimal` method on the instance.

MM: Okay. That was it for that question.

EAO: WH, I think.

WH: I have gone through the spec and done a review of it, and I have a number of observations. The main ones are that this is incompatible with the future addition of Decimal, and it is also incompatible with itself.
There are a number of bugs which can be readily fixed, but there are also places where it does things inconsistently and doesn’t adhere to some of the design principles that we have. One of those is that, if you’re working with arbitrary numbers, you should be able to print an arbitrary number. It’s not okay to throw when printing some numbers and produce a string when printing others. The handling of infinities and NaN is very inconsistent: depending on where the infinity arises, it might throw or it might print `"Infinity"`, which is really weird. Fraction digits and significant digits are used inconsistently. The use of exponential notation, or when it would switch over to exponential notation, if ever, is unclear in the spec right now. And the *ToIntlMathematicalValue* problem which I mentioned earlier has the consequence of limiting strings, BigInts, and Decimals to the range of IEEE doubles, and extending that range would be a breaking change in the future.

EAO: Just to clarify: we agree that the spec in its current state is absolutely not ready for, for example, stage 2.7. We’re asking whether the spec as we are presenting it is sufficiently indicative of the direction to advance to stage two, which doesn’t, I think, require fixing all of the things you’re asking about.

WH: Okay. Bugs I’m not concerned about. But it’s unclear what the direction or the intent is in a lot of the places. How are we treating infinities? Are we going to limit everything to the Number range? Those are key questions.

EAO: The intent is that if you were to try to construct an Amount from a nonfinite Number—a NaN, or a positive or a negative infinity—the constructor for the Amount will throw.

WH: I don’t think that’s the correct behavior there, because that violates the principle of how things that work with arbitrary mathematical values work. And it’s hard to prevent intermediate overflows. For example, if you give it the value 1.797E308 and ask it to display to three significant digits, you have given it a finite value, but *ToIntlMathematicalValue* will produce infinity from it.

EAO: Yes.

WH: So it is extremely hard for users to protect against that kind of intermediate overflow behavior. There are a lot of problems with having this throw on infinities—I would have very strong reservations about proceeding if that’s the behavior.

EAO: The thing we are trying to build as an Amount is explicitly, as I mentioned here, a thing that represents a finite mathematical value. And therefore, this leaves out infinities.

WH: I would not be comfortable with advancing if we violate the norms of how floating-point mathematical values work in the language.

EAO: But “mathematical value” as defined in the spec doesn’t support infinities.

WH: Sorry, I meant how Numbers work in the spec.

EAO: Numbers, absolutely. But mathematical values and Numbers are different in the spec.

WH: A user should be able to just take any existing Number and print it without worrying about that throwing. They should be able to take any existing BigInt and print it without worrying about it throwing. We’re violating that expectation right now.

EAO: A BigInt?

WH: A BigInt or a Number.

JHD: Sorry to interrupt, but if the BigInt is too long for the implementation, wouldn’t it throw anyway?

WH: I’m not worried about that.

JHD: Okay.

WH: Yeah, I’m not talking about memory limits and stuff like that.
EAO: So do I understand correctly that you're—that in this aspect, your concern is that you think if an Amount were to exist in the language, that Amount should be able to represent infinite values?

WH: Yes. You already have such cases because rounding like *ToIntlMathematicalValue* can turn finite values into infinite values—there is no easy way for a user to prevent that.

EAO: Okay. Do you also hold that an amount should be able to hold a NaN value?

WH: Yes.

EAO: We should have a discussion offline about what would be the user-beneficial ways of dealing with amounts that represent nonfinite mathematical values. Would you be alright for us to have that discussion and advance with the queue?

WH: Sure. I also want to emphasize, the proposal as it is right now is incompatible with the future addition of Decimal because of the *ToIntlMathematicalValue* range limit.

EAO: The current range limit value in *ToIntlMathematicalValue* is, I think, not changed by either the previous proposal or this one. But if Decimal were to be introduced, the expectation would be for the decimal proposal to increase the *ToIntlMathematicalValue* maximum range.

WH: We need to do that as soon as possible in that case, if that is in the spec already. Because that is a breaking change.

EAO: But it is not a breaking change within this proposal.

WH: It is a breaking change, because you can always feed strings or BigInts in there, which have the same issue.

EAO: So wait, wait. You're saying that the issue that in *ToIntlMathematicalValue*—

WH: That arises actually for strings, it arises for BigInts, it arises for Decimals. It even arises for finite numbers which are close to the maximum of the range.

EAO: But you're saying that this bug already exists now in the Intl spec, or it is something that is introduced by—

WH: It is something we need to fix as soon as possible.

JRL: So, I think he is asking whether this is in the current `Intl.NumberFormat` or in Amount. If we introduce Amount, does that create the bug, or is it in Intl right now?

EAO: That's a question to you, WH.

WH: I don't know where this came from. A question to you?

NRO: The answer to the question: it is in Intl. Today in Intl, you can pass a string that is within the Decimal limits but it is rounded to infinity. So the answer is, it is like this right now.

WH: That's a serious problem. We need to fix that.

JMN: Just to reiterate what EAO was saying, part of the decimal work will also be to adjust the limits. The intention is that everything we introduce should fit into the Intl space. It would be pretty bad if things start overflowing, or you get infinity from a finite value, or things like that.

WH: If you can pass a string with a finite value now which is within the Decimal range and have Intl silently turn it into an infinity, then we designed ourselves into a corner and we need to get out of that corner as quickly as possible.

JMN: I think we can get out of that corner. I think we can discuss this and find a way forward. I think everyone agrees we should find a way forward.

WH: I have not been paying as much attention to the Intl spec. We need to fix this.

SFC: Just very briefly. In 2022 we already extended the range of supported strings to be greater than it was before. And that's when we set this fairly arbitrary limit we currently have. And EAO's proposal that we approved earlier also extends the string capability of `Intl.NumberFormat.prototype.format`.
So I don't expect that increasing the range further would be a web-incompatible change, and browsers don't implement this consistently: Firefox implements the spec; Chrome doesn't implement the spec. I don't think there will be any problem here.

WH: I hope you're right. I just don't know.

EAO: WH, can you clarify if this is the issue that you also had with the previous proposal's advancement to 2.7?

WH: Yes.

EAO: We identified this is a preexisting problem we have in the spec. Do you still consider it a blocker for the 2.7 advancement of keeping trailing zeros?

WH: I hadn't noticed this before. I didn't know whether it was in the existing spec. I first saw it in the text of the proposal. And it is a problem which we need to fix.

EAO: Okay. But.

WH: I'm really not picky about, you know, the process here. But we need to fix this.

EAO: Okay, but can we consider that fix to be separate from the keeping trailing zeros proposal and from the amount proposal? Or do you think it needs to be interlinked with these proposals?

WH: I'm not picky about the process of how we do it, but we need to fix this.

SFC: I can come back with a normative pull request next meeting that fixes this, if that helps.

EAO: Cool.

JRL: Okay. Up next, a different topic from MM.

MM: Yeah, can you return to the slide with 1.235? Yes. Exactly this one. So this is a perfect example for asking the question that you're asking here. I chose the term "strongly prefer" in my topic. Strongly preferring 1.235, for reasons I'm about to explain. But first I want to clarify my language. "Strongly prefer" means I do care about it, but it is not a blocking concern. I would support this going forward to stage two even if we ended up with the other. Okay.

MM: That said, the way I think about, okay. Now, please go back to the API. Yes. Thank you. So, there's two ways to think about amounts. And there's one way that I find much, much more intuitive: that the Amount object represents, at its core, a mathematical value with units. And that the fraction or significant digits and rounding mode all have to do with the rendering of the mathematical value into a string. And therefore the values of fraction digits or significant digits and the value of rounding mode should not affect what mathematical value the amount holds, and should not affect any of the observable behavior of the API other than the APIs that traffic in string renderings. I can talk through how I would think about it if the decision went the other way. But I just find that less natural and, I expect, less useful.

EAO: So, MM, just to clarify, do you mean that in, for example, here on the last line, that `.with({ fractionDigits: 2 }).toNumber()`, that number value should still be 123.456?

MM: Yes. That the rounding is not rounding the mathematical value. It is only affecting how it gets rendered into a string. Exactly.

EAO: Okay.

MM: Once again. I'm not blocking if we decide on the other.

EAO: WH?

WH: I'm taking the opposite position quite strongly here. There's more in the issue I filed, but the accessors to read significant digits and such are not even in the spec. I view this as rounding on the way in. What you get is a mathematical value with some precision, and the methods you can call will affect how it is formatted. You can round it again if you want, but the original value is lost. And that's a cleaner design, because the original value can take a number of different forms. It can be a BigInt, a string, maybe a Number or Decimal, and you don't want to hang onto those.
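A hypothetical sketch of the two semantics being debated, using the API shape from the slides (the `Amount` constructor, its options, and `with` are proposal sketches, not implemented anywhere):

```js
const a = new Amount("123.456", { fractionDigits: 2 });

// Rounding on the way in (WH's position, and the current proposal text):
// the held mathematical value is already rounded, so every observable
// output reflects it, and the original 123.456 is lost.
a.toNumber(); // 123.46
a.toString(); // "123.46"

// Rendering-only rounding (MM's preference here; see the discussion that
// follows): the options would affect only string rendering, so under that
// reading a.toNumber() would still be 123.456 while a.toString() shows
// "123.46".
```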
MM: Okay. So, that's a good argument. I'm somewhat persuaded by the argument. Let me make a further argument on the other side—go back to the API, please. So until you bring in precision and rounding and rendering to string, the rest of this, and the notion of mathematical value certainly, is not in any way biased towards base 10. You know, math doesn't recognize a base as fundamental to a mathematical value. The idea is that, certainly once you bring in rounding and precision, the precision is explicitly in terms of the base 10 rendering, and it seems a shame to make base 10 privileged at a deeper level.

JHD: So I think I'm next on the queue as a reply. There is a point of order that 10 minutes are left, so I'll try to be brief. So I think that with the current design it must round on the way in, specifically because it is holding state—in other words, the number of significant digits, fraction digits, etc. If it is meant to represent a precisionless mathematical value then it wouldn't be called an amount, or it would be called a mathematical value or something; it would only take the value, it would not take any options, and you would provide all of the options you needed at rendering time, each time. It really doesn't make sense, to me, to look at these as statefully held rendering defaults. Yet, in other words, imagine you're starting with a BigInt. And you pass it into an amount. You already have a BigInt. Like, okay. Maybe BigInt is a bad example. Imagine you have a Decimal and pass it into the constructor; the Decimal is the closest we have to a mathematical value in numerical form. If you're trying to hold onto that conceptual value and then only truncate it on rendering, you would have Decimal prototype methods in this hypothetical to do that. The point of passing it into the Amount is to do the truncation and have a thing to pass around that is the already-truncated thing, and the method options do further truncation on render. So conceptually, if we are looking for an untruncated mathematical value, we need a different thing like Decimal or BigInt or Number—we need a fourth thing to represent that.

MM: Okay. I think between all of these arguments, I think I'm persuaded. I retract my strong preference otherwise.

EAO: Cool.

JRL: We do have a few more replies that I think are going to argue against your old position, MM. NRO.

NRO: I'm going to skip discussion in the interest of time.

JRL: Okay. Sorry, I'm going to take my spot, though, it seems. Can you go to the question two slide?

EAO: Say again?

JRL: Can you go to the question two slide? Yes. This is it exactly. Not currency, the one before this—how should we round. I think this is question two in your list of open questions. So we have a value here that is conceptually six digits going in; we're saying significant digits are actually two, so I can only keep the 1.2 value. If I later say that the significant digits that I want to print with are four, that seems like an error case. That value is not computable to four significant digits, because the value that was represented there was not calculated with that in mind. Whatever—like, if I'm doing division and that division only had two significant digits and that rounded my sig figs down, trying to later print that with more sig figs implies considerably more precision than what the value was calculated with.
This seems like an error case that should throw instead of printing normally. Like, you can reduce your sig figs, but you can't increase them.

EAO: That is a good point that I at least have not considered. Would you be willing to file an issue on the proposal repo so we can continue discussion on that one?

JRL: Certainly.

EAO: I don't think I have a decent response on what is the right thing to do there.

JRL: Okay. Shane.

SFC: On JRL's suggestion—I did suggest that, and I will consider that more. Mine is about 1.235, but I'm going to skip that for the sake of time.

JRL: Sorry, WH, you are up next.

WH: It is very common to display things with more significant digits than you're given. If you give it the value 1 and ask it to print it with two decimal places it should print "1.00"—it should not throw.

EAO: I agree with that when talking about presenting—about formatting a number. But as envisioned, an amount represents a value that is then being formatted. And here we have two kind of different meanings of significant digits: the significant digits actually inherently describing the value that an amount holds, and then, separately, the significant digits that are used for presentation. But I think, because this is a novel idea for me—this is why I was asking—let's take this discussion offline to an issue in the proposal repo. Because we need to sort that one out: whether we actually maintain some semblance of a difference, or whether the "amount" significant digits and "formatting" significant digits are treated the same.

WH: Okay. You will run into a lot of landmines if—

EAO: If—if we can show that that would cause us landmines, then it becomes relatively easy to have that discussion to support one way or another. But it is more important to have the explicit discussion than not.

WH: Yeah, I prefer to keep it simple.

JRL: So we have four minutes left.

MM: One remaining item, which is currency. We might make this quick. If nobody feels strongly that currency should be included—we already had a discussion in TG3 where I thought we all agreed to omit currency and just use units to cover the currency case. So if currency is dead, then I don't need to ask my question.

EAO: That is effectively the current state of affairs.

MM: Okay. I'm done.

EAO: Cool. Jesse's reply is about this as well. So, one thing that I would like to note that I don't think has really been raised directly is our third question here, about whether we ought to impose limits in the spec on the mathematical value in an Amount, or whether we should leave it out of the spec and effectively follow the example of BigInt, where it is implementation defined—yeah. But as NRO points out on the queue, that is something to discuss at stage two. But we're at time. Regarding stage 2, WH, do you have blocking concerns? Or can those concerns that you have be discussed during stage two?

WH: It depends on what the position is on dealing with infinities. If we're willing to have this pass infinities and NaN through without throwing, then I'm fine with it being in stage 2. If it seems unlikely we will allow that—if we're throwing on infinities—then I don't think we should do this.

EAO: Then I think we should have an explicit discussion, separately from this, about infinities in Amount.

WH: Okay.
EAO: Because this is literally one of the cornerstones of what we have been building, and if we break that, we need to make sure that this makes sense and we're not causing other issues elsewhere.

WH: Okay.

JRL: Sorry, we have one minute left. I'm going to control just a little bit here. It looks like we have two replies saying that rounding to infinity isn't intentional and they are going to discuss this. Let's move this to the issue tracker as well.

EAO: Yes, WH, please file an issue about adding support for infinite values.

WH: Already did.

JRL: Okay. We have one last reply—I'm sorry, one last topic—from MM, saying "+1 if no implied access to internal slots of non-this arg EOM". There is a little bit of discussion in the matrix channel. MM, if you can also add an issue on the tracker, that way we can wrap up for today.

MM: Okay. Great.

JRL: Okay. Perfect. EAO, you had already promised to write a summary for the trailing zeros proposal. I can also get you to write a summary for the amount proposal.

EAO: You mean this one?

JRL: Perfect. That will wrap us up for today. Let me—

KG: I'm not clear on whether this got stage two.

EAO: This did not get stage two.

KG: Okay.

SFC: It didn't get stage 2—why? Because of WH's concern with infinities?

EAO: Yes.

SFC: That seems like a problem we can answer after we get to stage two, though. Like—

WH: Okay. Well, the criterion for stage 2 is whether we think we're going to include something in the language. Now, if this thing mishandles infinities, I don't think we should include it in the language.

SFC: Can we—can we have stage two on the condition that it handles infinities? The same way that Intl mathematical values handle infinities—so, Intl mathematical values, and in addition infinity and NaN, which is what Intl mathematical value includes.

WH: As long as it doesn't throw, I'm cool.

EAO: I'm not necessarily okay with that. I want to consider this question with time and not have it be rushed. Let's discuss it offline and come back at the next meeting.

JRL: Okay. So we're not going to stage two today. There is an objection to stage two at the moment, but the champion is not asking for stage two regardless. Let's take this into the issue tracker and wrap up for today. And I don't believe—

MM: Issue tracker—you're talking about issues on the proposal repo?

JRL: Sorry, yeah, on GitHub.

CDA: On the Measure proposal repo, are we using issue number 48, WH's July review issue?

EAO: No, please file individual issues for the topics raised. Despite not reaching stage two, I think we do have implicit approval for renaming this as the Amount proposal. But that can be sorted out later.

JRL: Okay, that wraps us up for today. Unless someone else from the chair group has something to discuss, I think we are done. See you tomorrow.

### Speaker's Summary of Key Points

The scope of the Measure proposal is reduced to leave out compound units and unit conversion. The proposal is renamed as the Amount proposal, to match the name of the new class it introduces. It holds a mathematical value, an indicator of its precision, and optionally a unit identifier. The value is made available via toBigInt(), toNumber(), and toString() methods, and is formattable with its toLocaleString() method.
The initial spec text is ready, and the committee was consulted on their views on some open questions:

* Rounding should happen during construction, not allowing the original value's precision beyond what it's limited to by the constructor options to be accessed later.
* Currency should not be treated as special, and currency identifiers should be considered valid units.
* The min/max limits on Amount values continue to be a discussion topic. The currently proposed spec text does not impose any limits.

Beyond the above, WH raised a blocking concern regarding the support of non-finite values in Amount, which currently cause the constructor to throw. KG expressed some hesitation about the proposal being sufficiently motivated for inclusion in the spec. KG and MM expressed a strong preference for avoiding any Amount internal slot access from `Intl.NumberFormat` during formatting.

### Conclusion

The proposal was renamed, but did not advance to Stage 2. Discussions continue on the open topics.

diff --git a/meetings/2025-07/july-30.md b/meetings/2025-07/july-30.md
new file mode 100644
index 0000000..c334ed4
--- /dev/null
+++ b/meetings/2025-07/july-30.md
@@ -0,0 +1,1008 @@

# 109th TC39 Meeting

Day Three—30 July 2025

**Attendees:**

| Name                   | Abbreviation | Organization       |
|------------------------|--------------|--------------------|
| Dmitry Makhnev         | DJM          | JetBrains          |
| Waldemar Horwat        | WH           | Invited Expert     |
| Chris de Almeida       | CDA          | IBM                |
| Jesse Alama            | JMN          | Igalia             |
| Daniel Minor           | DLM          | Mozilla            |
| Samina Husain          | SHN          | Ecma International |
| Aki Rose Braun         | AKI          | Ecma International |
| Shane F Carr           | SFC          | Google             |
| Olivier Flückiger      | OFR          | Google             |
| Jordan Harband         | JHD          | HeroDevs           |
| Zbyszek Tenerowicz     | ZTZ          | Consensys          |
| Eemeli Aro             | EAO          | Mozilla            |
| Tab Atkins-Bittner     | TAB          | Google             |
| J. S. Choi             | JSC          | Invited Expert     |
| Istvan Sebestyen       | IS           | Ecma International |
| Daniel Rosenwasser     | DRR          | Microsoft          |
| Rezvan Mahdavi Hezaveh | RMH          | Google             |
| Kris Kowal             | KKL          | Agoric             |

## Opening & Welcome

Presenter: Ujjwal Sharma (USA)

USA: Hello. And welcome. While we wait for our facilitator for today to start with the session, I think it is time we kick things off with asking for notetakers for today. I did confirm that Carrie is here—hi, Carrie. So I'm assuming that we have the notes. Who will help me out with fixing them? We need two volunteers for this session. Is there someone volunteering to help out?

DLM: I'm sorry, I just joined; I'm not sure if we've called for notetakers yet.

USA: I just did, Dan, but no luck yet finding one.

DLM: I guess I can then ask for voluntary notetakers. Is anyone interested in helping us out today?

NRO: I can take notes for the first topic, but I need someone else to do it for the next one.

DLM: Okay. Thank you, NRO. We would ideally get—oh, definitely one more person. And ideally a third person to take over when NRO has to stop taking notes.

RGN: I can take over for NRO in the next session, but not this one.

DLM: Okay. Thank you. I think I have to reload my TCQ as well. Call for notetakers. Okay. Perfect. So one more notetaker would be very much appreciated to make sure that we have enough coverage.

NRO: Maybe we can start and just interrupt me.

DLM: Sure. Thank you, NRO. So straight over to you.
## Intl Era and month code

Presenter: Ujjwal Sharma (USA)

* [proposal](https://github.com/tc39/proposal-intl-era-monthcode)
* [slides](https://docs.google.com/presentation/d/1dAbacNvhPL_iUJKZNPDbQVyfg8a6-OHUuh2OiftMu1A/edit?slide=id.p#slide=id.p)

USA: Thank you, NRO and RGN. All right. Let's start with Intl Era and Month Code for stage 2.7. This is by me and Philip—but Philip cannot be here today—both from Igalia. And thanks to Google. So first of all, let's talk a little bit about the proposal. Where is it? Why is it there in the first place? In case you forgot. So, there are a number of non-ISO 8601 CLDR calendars that are used in JavaScript. These primarily started being used in Intl date formatting, where you are allowed to format any given Date object—and now, any Temporal object as well—into a non-ISO 8601 calendar. ISO 8601 means the Gregorian calendar—the calendar that we're all familiar with—with a few tweaks that make it suitable for computational use cases. Every other calendar is a real human calendar that is used not by computers, but by people in their day-to-day lives, sometimes for traditional or religious purposes. So, the behavior of these calendars has always been defined outside of JavaScript; it is beyond the scope of the ECMAScript programming language itself. In practice, it means the details are defined by CLDR, by Unicode, and by one of the implementations of CLDR that is used by ECMAScript implementations: ICU4C or ICU4X. And ECMA-402, which does the internationalization APIs, should not be in the business of specifying the arithmetic of every calendar. It is not what it does, and the details have always been deferred to places like ISO or CLDR. However, now that Temporal has added the capability of doing arithmetic on dates in any of these calendars, it does—like—give us some responsibility in terms of providing some guardrails, to make sure that developers can utilize the APIs without risks of breakage, without implementation divergence. Basically, do whatever we can to sort of limit the behavior, or to define what the correct behavior looks like, without overspecifying. So that's the premise for this proposal.

USA: Talking about the scope. ECMA-262 only requires the ISO 8601 calendar, which is completely specified in the Temporal spec (to be merged into the 262 spec); that is the only required calendar, but implementations may support other calendars if they would like to. This doesn't change. This has been the understanding from the Temporal proposal, and this is sort of something that we carried over to this one. ECMA-402 conformant implementations, however, are required to support the set of calendars that are specified—that are defined—by CLDR. This was the common understanding. This is more or less the entire motivation behind the proposal: these calendars needed to be implemented by implementations, but didn't actually have sufficient precision, and this common understanding is now formalized. So this is, sort of, the core reason for the existence of this proposal.

USA: What precisely does this proposal add to ECMA-402? To go over a non-exhaustive list: the description of each supported calendar—and remember, these are the international calendars, barring the existing ISO 8601 calendar—so that it is clear what the calendar in question is when you're implementing something.
To clarify: there is some ambiguity sometimes when it pertains to certain calendars, and with the way that we specify and describe calendars—while we haven't defined exactly every single tiny detail—it does help you to tell which version of the calendar you're supposed to implement. It also lists the valid era codes. These are era codes that are going to come from the users, and therefore, you know, more strict behavior is required to sort of work around these. And not only did we define these era codes, we actually went back to CLDR and upstreamed sort of the results of our discussions over there—thanks, SFC, for that—and referred to CLDR, where they are actually standardized, instead of being the source of truth ourselves. We also include valid ranges for era years per calendar. Like in the Japanese calendar, the Shōwa era, for instance, might have an upper cap as well as a lower cap. So all of these are included. A list of epoch years per calendar, for the Temporal.PlainDate `year` getter. So if you take a Temporal.PlainDate instance in a different calendar and use the `year` getter, you might need the epoch year, because it is always relative to that sort of epoch year instead of, you know, being relative to year zero in ISO; every calendar has its own starting point. We list the starting points so it is easy to sort of compute, and agreed upon between implementations. Specifics about which calendars support eras and week numbers. Specifics on how the methods work for eras, which can start mid-month or even mid-year. Constraints for adding years. So, for example, we have an open issue that you may see in the repo, where it is unclear even within the academic circles which sort of define this kind of behavior: what is the result of adding one year to a date in month 05L in the year 5784? Because the following year is not a leap year, and therefore it won't have this month. So for these kinds of difficult problems, we are trying to basically figure out what's the best way to approach this and to provide a consistent behavior to the users. And all of this is currently written down in prose. So while it is understandable for any implementer what the general direction is supposed to be, PR 69 is going to change it to spec steps, so that it's probably easier to annotate in the implementation code or, you know, to follow.

USA: The algorithm for taking the difference between two dates regardless of the calendar, as well as the handling of Hijri calendars, some of which can be more reliable than others. So, all of these different things.

USA: Then, basically—I was going to summarize—this has been sort of the process of taking the different difficult questions that we have been discussing in TG2, and sometimes beyond just TG2, in Unicode and elsewhere, and getting the answers. So Temporal has been sort of the forcing function to help us get consensus on a lot of these details, and to translate that into either spec text or upstream sources that we can point to in the spec. So yeah, this proposal's basic goal of defining a lot of the behavior of non-ISO 8601 calendars for implementation in JavaScript has been successful. We think that this is a good balance. If you have gone through the spec, you can see that it might feel like we're threading a line here: we're not overspecifying stuff where ECMAScript is clearly not the authority and it is unclear who the authority is; we're trying to give guidance as to what the correct behavior is and still minimize the opportunities for implementation divergence, while keeping the door open for future improvements—as in, more correct implementations or improvements in the calendars themselves—as well as implementations with reasonably divergent approaches.
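As a concrete illustration of the era codes and epoch-relative years being pinned down here, a sketch using the Temporal API (exact outputs depend on the implementation's calendar data; the values shown assume the CLDR "japanese" calendar):

```js
// Era code "heisei" in the "japanese" calendar: Heisei 1 began 1989-01-08.
const d = Temporal.PlainDate.from({
  calendar: "japanese",
  era: "heisei",
  eraYear: 1,
  month: 1,
  day: 8,
});
d.era;     // "heisei"
d.eraYear; // 1
d.year;    // 1989 — counted from the calendar's listed epoch year,
           // not from the start of the era
```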
USA: First of all, I would like to open the floor for questions. Any discussion regarding this proposal or anything that has been presented so far?

DLM: There is nobody on the queue right now.

USA: Fine. Let's maybe give it a minute. Let's just see if somebody has something. Perhaps not. Then I can move on to the next slide, and that could be what attracts people's attention. Going through the stage 2.7 requirements: we have the syntax and APIs completely described. That has been the case. As I mentioned, PR 69 does convert some of the spec text from prose—which explains what the direction should be—into actual algorithm steps. But the intent and the semantics are there, and, well, there is no new syntax or API, for the most part, that will be introduced per se with this proposal. So yeah, that is something that we have agreement on. The current reviewers have signed off on the current spec—so, EAO and SFC. Thank you for your review. They have indeed submitted issues and given us a lot of great suggestions for improving the proposal. I left their status as pending; I will let them state what their final verdict is. I also added Monish—I'm unsure if he is a delegate at this point, but to give credit where it is due. He also went through the whole spec, and him being an expert in these areas as well gives me confidence. Then finally, we have the relevant editors group. I'm one of them, and BAN is on leave. So I think RGN is our sort of best shot at taking a look at the spec and letting us know what he thinks. And yeah. These are sort of the requirements. We can go through—well, let's see if we have—okay. EAO?

EAO: Yeah, just figured I would voice my agreement that I agree with this spec. And I think this should advance to stage 2.7. One small callout that was added as a result of my review, and as a follow-up from the discussions in TG2, was adding a statement that an implementation should emit a warning, if it's capable of doing so, if the user uses the "islamic" or "islamic-rgsa" calendar identifiers, for which we do fallback. I believe this is the first time we're adding a recommendation to emit a warning to 402. A couple of these already exist in 262.

USA: Thank you, EAO. Looking at the queue, there is nothing else. Perhaps I can also then call out the other reviewer—SFC, would you like to voice any thoughts on this proposal?

SFC: Yeah. I think that the work that PFC has put into getting the spec into shape is very good. I gave the last update in plenary for this proposal, I think, maybe two cycles ago. And basically, for most of the changes from then to now: now we have spec text to codify the things that I had presented on when I gave the update two cycles ago. I did my stage 2.7 review. I'm quite happy with the spec.
I do have an open pull request with some suggestions from my review; basically, what it does is convert some of the prose that PFC had written into actual lines of spec text, and then it does some refactoring in order to make that work. So I approve 2.7, I guess, conditional on the edits that I made in my pull request, if that's a thing that is possible within the process. And thank you RGN and USA for doing it as well.

USA: Great. Thanks. RGN, if you are around by any chance, I would finally love to hear any thoughts that you have.

RGN: Overall, mostly positive. But as noted, my review is still pending. So I don't have much to contribute right now.

USA: Right. With that, I believe, we're still missing an explicit editor sign-off. So I think we should sort of keep that in mind. So I'll change my request to conditional stage advancement, if that sounds okay. I would like to do a call for consensus for stage 2.7, conditional on approval from one of the editors of ECMA-402.

DLM: Are there any explicit voices of support? With my SpiderMonkey hat on, I can say we would support this for conditional advancement to 2.7.

USA: Thank you, DLM.

SFC: Is the conditional advancement conditional on PR 69 being merged?

USA: I—didn't explicitly ask for that. Do you feel strongly that the advancement should be conditional on that PR being merged, or—

SFC: Well, if the advancement is not conditional on the PR being merged, the PR would need to come back. I mean, it's—it's—like—I'm not sure if it is editorial or not, because it is taking, like, prose and converting it to spec steps. Like, technically, if it is 2.7 without this PR, then the PR probably wants to come back next meeting as, like, a stage 2.7 normative PR or something, just to be safe. It would make me more confident to make the 2.7 advancement conditional on PR 69 as well as RGN's final sign-off.

USA: I—to sort of answer one of the points that you made: I do feel like the PR, or at least the intent of most of the PR, is editorial. So I feel like not only is it perfectly fine to do it after stage 2.7—because it is basically taking what is already described in a different format and describing it in the algorithmic format—but also, I don't understand what would be the implication of that if we, for instance, had some changes to the PR; those would still also be editorial, right? So can we have, like, conditional stage advancement on something that might further change editorially? I'm not sure. But I'm also not opposed to that. So.

SFC: All right. I'm happy with 2.7 conditional on editor review, with the understanding that PR 69 is editorial in spirit.

USA: Yeah, I do mention it in the next presentation where—yeah. The next steps for us would be to take the semantics defined in this and write up test262 tests for it, as well as do the editorial polishing and sort of alignment with Temporal, which would imply PR 69.

DLM: Okay. Let's hear voices of support. Is anyone opposed to this? Okay. Congratulations. Conditional 2.7.

USA: Good. Thank you so much, DLM, and others for contributing. I'll see you later.

### Speaker's Summary of Key Points

* The Intl Era and Month Code proposal was presented to the committee for advancement to Stage 2.7.
* A brief description of the proposal was presented, including the scope and rationale.
* There was discussion regarding PR 69, which changes the spec text to transform some parts of the spec from prose to algorithmic spec steps.
* Both of the assigned stage 2.7 reviewers, SFC and EAO, have signed off on the spec.

### Conclusion

The proposal conditionally reached Stage 2.7, dependent upon final editorial approval by the ECMA-402 editors.

## Module Import Hook and new Global for Stage 1

Presenter: Zbyszek Tenerowicz (ZTZ), Kris Kowal (KKL)

* [Module Import Hook proposal](https://github.com/endojs/proposal-import-hook)
* [new Global proposal](https://github.com/endojs/proposal-new-global)
* [slides](https://github.com/endojs/proposal-new-global/blob/main/slides/slides.pdf)

DLM: Okay. Perfect. So we do have another notetaker. If you're comfortable working alone, we can continue. Okay. I will take that as we can continue.

KKL: All right, I'm KKL. And the presenter for this presentation is ZTZ. I think we all know that some categories of proposal are harder to pitch here: the easiest is to add an API, and slightly harder than that, the evidence suggests, is to add a little bit of syntax to the language. I like to think the hardest proposal to pitch to TC39 is this specific proposal. What we wish to convey here is that we're very sensitive to the history of proposals that have necessitated new categories of global object and the introduction of hooks for host behaviors—and this proposal does both. We strived, though, to find ways to both make the risks less and the motivation greater. And this is the resulting proposal.

KKL: Our central motivating use case, we are profoundly aware, is not germane to the bulk of the modern web, but it is increasingly important, as it gives users and platforms a way to stand and defend against supply-chain attacks, and a sandbox for the growing industry of AI. To that end, we isolated what we believe is the most broadly useful core that enables us to implement what remains of the Compartment proposal, from which we carved off Evaluators, and now just this import hook and a new Global proposal. There is a case to be made for bringing that proposal forth later, in terms of providing a higher-level API that is more straightforward and easy to use. We don't want to get bogged down in the details of that, given it is not germane to the broadest category of web applications. Nevertheless, our core motivating use case, New Global, is part of a complete sandboxes-within-sandboxes breakfast, which is to say that the confinement properties of that proposal are contingent on doing additional work in user space that we know is tricky.

KKL: With that, I would like to introduce you to ZTZ, also known as Naugtur, to show you the new Global proposal. Lest we be accused of bringing a freshman delegate in hopes you go easy on him—please, by all means do your worst. He can definitely take it. And a word on pronunciation. I have made many mistakes in the past; ZTZ is not particular about his pronunciation, but grace is indistinguishable from pride. So, ZTZ's true name can only be uttered by a 56-K baud modem. He is known by Zbigniew (mispronounced) to his mother and country. He is called Zbyszek (mispronounced) by his neighbors. His pseudonym is pronounced according to the Elvish Quenya language—which is to say that only our Finnish delegates are likely to pronounce it correctly. So ZB (pronounced "zee-bee") will suffice. So ZTZ, please.

ZTZ: Thank you. Yes. Let's try throwing beginner's luck at the problem. I'm going to introduce you to the New Global proposal. Let's start with the problem statement. I hear that's something the committee likes.
We want to offer a minimal addition to the spec that we believe is sufficient for implementing various ideas about isolation that are more lightweight in comparison to the other proposals we want to compare this to—and also sufficient for implementing Compartments in user code.

ZTZ: I'm making more pauses, by the way, which I know might seem unusual, for the sake of notetaking. I'm trying to make this a practice. All right. Motivation. We have a bunch of motivating use cases. They are not ordered in any particular way. The top two are domain-specific languages and test runners, which go together for the sake of having the ability to create the `describe`, `before` and `after` and so on—specific functions, or any shape of a domain-specific language for testing. And test runners themselves are not only using that, but are using isolation for different other reasons, which I will discuss further in one of the slides coming up. We also consider being able to shim built-in modules a motivating use case: running in an environment that's not Node.js while letting the code that runs there use built-in modules—Node.js built-in modules—is becoming increasingly common as a use case. Our oldest use case is the overall principle of least authority, which is the Compartment implementation and adjacent solutions, including LavaMoat, which is the project I'm coming from. We also consider emulating another host an interesting motivation. There has been prior art for doing that, although it always relies on building new realms, and sometimes more than that. And last, but definitely not least, on this list is isolating unreliable code, which nowadays means letting people generate source code and push it to production without reading it. Which is becoming increasingly common, no matter what we think about it.

ZTZ: Now a quick look at the design of the proposal. What we want to do, in most broad terms, is to introduce a level of indirection between execution context and realm. We call it global. It intercepts, or relieves the realm of, its properties that are `[[GlobalObject]]`, `[[TemplateMap]]`, and `[[LoadedModules]]`. And I'm realizing I said properties, which is incorrect wording. Apologies. The idea is there could be multiple globals within one realm containing those three. And most of the intrinsics remain the same. I need to say most of, because there's a very small, distinct set of intrinsics that will need to change between globals. We will get to that. This is the only remaining language mechanism that we need to introduce to be able to isolate scripts and ECMAScript modules equally, with the assistance of user code being enough for creating complete within-realm sandboxes.

ZTZ: Okay. Now, this slide—I wish to start by thanking MAG, who started asking questions around this topic of how it compares to other proposals, mainly ShadowRealm. I'm also aware that some of the hosts are now exploring making isolate factories available, which is kind of adjacent and worth mentioning for this comparison. This proposal, compared to ShadowRealm or isolate factories, is intended to be smaller. We want to minimize intersection with the web standards—and, spoiler alert, I will ask multiple times for feedback and help on driving this in the direction of still minimizing this intersection, because we are aware that there might still be some. This also enables isolation that doesn't undermine synchronous communication and shared prototypes.
And at this point, I need to mention identity discontinuity as something we want to avoid. It also avoids introducing new kinds of global objects; instead, it enables creation of shallow copies into a new global object within the same realm. It aims at not introducing new concepts into the global scope for representing the primitives for isolation. And while the use cases may be similar, this proposal being same-realm and avoiding duplicating large objects should allow finer-grained isolation at the same or lower cost, specifically in terms of memory usage. Eventually more.

ZTZ: Now, back to the examples. There's the first example of test runners and their DSL. This example is way too short; a complete example of how a global would be useful for this use case would not fit on the slide. This is just to surface that defining specific functionality for the DSL within one global, and orchestrating the setup of the test runner so that the test code runs under that global and the application code being tested runs under a different global—which would require some fiddling with the import hooks, which we will demonstrate later, hopefully—would solve a lot of issues that test runners are facing. Some of the test runners are using Node's `vm` module for partially working through this use case. Some do not. And as a result, a dependency of the application from node_modules could call the `describe` and `before` and `after` of the test runner to add tests to your test suite, which is not something we generally think about on a daily basis.

ZTZ: Now, let's look at isolation and AI. And here is where I want to specifically call out KG—we did get feedback from KG early about this, that various levels of sandboxing might not be necessary. Oh, I can see that this feedback is also on the queue right now. I want to still go through why I believe it would be useful. So AI agents generating code are a thing that only increases in popularity. And actors producing code within one code base are supposed to align on intentions, which, in my case, is always a myth. Even people in various teams in the same organization were hard to get fully aligned on the intentions of what the application is going to do. Now, with AI coming into the picture, code generated by AI is immune to many of the ways in which humans align themselves on common intentions. Which is why introducing the New Global and freezing some of the intrinsics would create the most basic level of isolation that would allow us to encapsulate AI code, to avoid some very simple and, I'm afraid, common issues where it could come up with names for global variables that happen to collide with code generated elsewhere by the same AI. And some of the freezing of intrinsics, or other intrinsics treatment that new Global would allow, would also help prevent misguided attempts at polyfilling JavaScript inline and affecting the rest of the application. I believe it doesn't have to be isolation on the level of providing full security, like a compartment would, to contain the impact of faulty generated code. The AI does not need to be aware of the context in which it is running. And the intention with new Global—and the possibility with new Global—is to create an environment in which generated code would be allowed to work and behave just as it would without any isolation, because we don't have identity discontinuity, because everything can communicate with the outside world within one realm as if the code was running normally.
So that's the intention of partial isolation, which I deliberately did not call sandboxing.

ZTZ: A visual overview of what this would mean. The application sources are in one bubble. We have to be intentional about polyfills that we desire to include, and do it before we freeze everything, for example. And then every other bit of code that comes into the application can be isolated on various levels. So multiple autonomous AIs building pieces of the application could work at various levels of isolation. We can isolate npm dependencies, and even fulfill the use case of LavaMoat to isolate every npm dependency and orchestrate how they import each other. And so on.

ZTZ: To another use case: emulating another host. I'm aware this use case was brought up unsuccessfully before. But since then we have had a bunch of progress in that area. We now have webcontainers.io, which I think is maybe two years old now, which is a place where you can run Node.js in the browser. That's much further than a new Global could go. But we can offer a much more lightweight and sufficient emulation of other hosts within the browser, and vice-versa. Which is useful for web IDEs. It is also useful for using Node.js builds and seamlessly running code people can pull from npm. We can also have it work the other way around, for DOM emulation in test runners.

ZTZ: Now to the more detailed design. The interface of the Global constructor accepts three optional configuration properties on an options bag. The first one is `keys`, which is an optional list of the properties that will be available on the new Global created by this constructor. By default, we intend to shallow-copy everything, including items that are keyed with symbols, etc. But to avoid—and I will elaborate on that later—inserting opinions on minimal sets of globals into this proposal, we want the opt-in to limit which globals are going to be available to be all-or-nothing. So we can get everything, or we can get nothing plus only the things that are listed as keys. And there's `importHook` and `importMetaHook`, which we will discuss later in this presentation. The things that are different are the unique properties of the resulting global: another Global constructor, which lets you construct new Globals copying from the global that was created before; `eval`, obviously; the `Function` constructor, which does share `Function.prototype`, but the constructor itself needs to be separate for the sake of having the internals point to the new Global; and the same for other types of function constructors. Yeah. And the way you pick which global this inherits from is by choosing which Global constructor to take from which existing globalThis.

ZTZ: Now, basic properties of the design. It is a minimal change for implementing user code. We looked at the old Evaluators proposal and had a realization that an object that conveniently contains all of the evaluators already exists: it is the `globalThis`. So creating a constructor for that would simplify the change. We are avoiding adding new concepts, and we are reusing the existing concept of a global. There are also no new categories of global objects. They don't vary in any way. They are just replicas of the original global. The host is still the one creating the global objects, which I know, at least in some implementations, is critical. It would be very difficult to implement taking a specific object reference and turning that into the global, because of, for example, how Node.js interacts with V8. I have seen that part of the code, and I'm very aware that whatever is passed as a new Global to the `vm` module will always get wrapped in a blessed object of the host itself. And, as I hinted at before, we are not opinionated on the minimal set of properties that need to show up on a new Global.
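A hypothetical usage sketch, following the constructor shape just described (nothing here is implemented anywhere; the property names and behavior are as presented on the slides):

```js
// Default: shallow-copy every property of the originating globalThis.
const g = new Global();
g.Object === Object; // true — intrinsics are shared; same realm
g.eval === eval;     // false — each global gets its own evaluators
g.Global === Global; // false — and its own Global constructor

// Opt-in allow-list: nothing is copied except what `keys` names
// (plus the per-global evaluators described above).
const bare = new Global({ keys: ["fetch"] });
"fetch" in bare;     // true
"Object" in bare;    // false
```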
ZTZ: Yes. We believe that everything after this slide is post-stage-one concerns. We want to share more details to get better feedback, and to socialize the detailed ideas further, to maybe explain away some of the issues people might be rightfully pointing out.

ZTZ: So allow me to continue into details of the proposal. First off, when we create a new Global, nothing gets automatically evaluated in that global. So it gives the user code a chance to modify it according to its needs, which serves a bunch of our desired use cases. By default, we copy all properties from the `globalThis` whose `Global` constructor was used. And some new properties, like `Global` and the function constructors and `eval`, are going to be included there. And as you can see, the `AsyncFunction`—which here we demonstrate via `getIntrinsic`, from JHD's proposal—would necessarily be a different reference, because it has to internally point to a different global. But the prototype behind the `AsyncFunction`, or any kind of function, would be the same reference on a clean new Global.

ZTZ: More details. The prototype itself. By default, it would necessarily be the same prototype as the "parent's". The prototype must immediately be settable on a new Global. This is crucial for a bunch of use cases—mostly for the reason of wanting to have sandboxing where, especially in the browser, the prototype of a global could be removed so that we can disconnect it from the window events if we so desire. For example, for emulation of other hosts.

ZTZ: Yeah. So all properties get grafted by default. So if we define a property on a `globalThis` and create a new Global, the property is going to be there, and everything else will be there, and so on. If we use `keys` and specify a single key, this means nothing is copied unless it is on the list. So `y` is going to exist on a new Global, but capital-O `Object` is not. We don't want to go into figuring out what undeniables exist, and how undeniable they actually are, for the sake of a minimal set. If they are undeniable, someone's going to get them. If they are only somewhat undeniable, and modifying prototypes and constructors can block their availability, we can replace them with something else. This avoids opinions on any globals being treated differently, except for the evaluators. So a global gets nothing unique other than the evaluators—although the functions share a global prototype; I feel like I might have said this for the fourth time now. And the other unique intrinsic evaluators share the same rule: as far as we can tell, nobody is using these unnamed constructors to construct functions, but they are going to have to be there and be consistent.

ZTZ: Yeah. Now, we're going towards importing and the import hook. So by default a new Global will inherit the import hook and module map, which means importing something in the original top-level global and in a new Global that was not configured to behave otherwise is going to share instances. Which brings me to an intersection with Content Security Policy that I do care about. The good news is that this proposal does not have a necessary intersection with Content Security Policy, because the intersection was already established by ESM source phase imports, which is currently stage 2.7, and which controls how the host can deny evaluation of module sources. So a distinction between evaluating and importing already exists, and we don't need to introduce it. What would be helpful to introduce is exposing a first-class `import` on the global object itself. So if we create a new Global, we do not need to use `eval` to evaluate an import statement to bring in a new module source; instead we can—without breaking the rules of a Content Security Policy that forbids eval—still run module sources that are not forbidden by that policy.

ZTZ: More on intersections between New Global and module harmony. They go together very well, and the new Global use cases would not be fully realized without the import hook. So new Global has an `importHook`, and the import hook can decide whether it wants to reveal the same instance of a module based on the specifier that is being used in the parent global, or it can create a different one, even for the same sources. Yep. And then `importHook` also needs to show up on the `ModuleSource` constructor for the behavior to be complete. This code demonstrates the three options that we have for returning from `importHook`, thanks to the integration with `ModuleSource`.
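A hedged sketch of what such an import hook might look like—showing two of the behaviors described, the hook revealing a shared instance versus creating a fresh one (the hook signature and return conventions are illustrative assumptions, not settled API; `loadSource` is a hypothetical loader):

```js
// Illustrative only: an importHook choosing, per specifier, between
// revealing the instance the parent global already has and instantiating
// a fresh one—even for the same sources.
const g = new Global({
  async importHook(specifier) {
    if (specifier === "shared-dep") {
      // Reveal the same instance as the parent global.
      return import(specifier);
    }
    // Returning a new ModuleSource yields a distinct instance.
    return new ModuleSource(await loadSource(specifier));
  },
});
```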
And this even longer bit is the final explanation, which I believe might be going too far for stage one, really. But this is the final interaction between ModuleSource and a new Global. We can have a ModuleSource that uses local static imports and direct eval to import things, and in that case the import hook of the module source is going to catch these. And then using evaluators within that module source will reach for evaluators from the global that we created with the import hook, and so it will reach the import hook of the new Global.

ZTZ: And with that, I'm reaching the summary slide, finally. You can see that the queue is very long. I will now dictate the summary.

* We're proposing to introduce a `Global` constructor with the minimal functionality necessary to decouple the concept of global from realm. The proposal is the minimum change sufficient to implement isolation, including Compartment, in user code. It replaces the Evaluators (Stage 1) proposal, which was earlier extracted from the umbrella Compartment proposal.
* We are requesting feedback, especially on minimizing intersections with web standards.
* We are requesting stage 1 for `new Global`.

ZTZ: Thank you, over to KKL.

KKL: And over to the queue.

CDA: Okay. First up we have KG.

KG: Okay. I have a bunch of things to say. First off, thank you for the presentation. I think I understand a lot better what the proposal is trying to be than I did just looking at the repository before. And also a small bit of meta feedback: this is a very detailed proposal. I would caution against doing this level of detail at this stage, because we haven't even, as a committee, agreed this is a problem worth solving. And you've spent a lot of effort going into details on a particular solution to the problem. Which, like, you are welcome to spend your time on. But I think that this is probably an excessive level of detail for a proposal at this stage. That said, I have a bunch more concrete stuff to talk about. Can you go back to the motivations slide? Because I think that is the core of anything going to stage one.

KG: Okay.
So I think that the broad idea of having a new way of doing evaluation that isn't in the current global scope, but is otherwise, like, the same realm as the evaluating code—so it is not in a separate realm—is an interesting idea. I feel positively about that idea, if it is feasible to implement in engines. Although, I guess we have something on the queue later that suggests that it might not even be feasible to implement. But I am very skeptical of basing that on the assumption that people are going to be using `eval`. I do not think anyone should be using `eval`, or anything like `eval`, like the `Function` constructor, so I do not like this particular design for solving that problem insofar as this design assumes that you're going to be using `eval` or something like that. I would be interested in exploring ideas around, for example, evaluating a module source in a different global, but I don't want the basis of this to be the use of `eval`. Because I don't like `eval`. And I don't think anyone else should be using it. With regards to the point here about isolation—

ZTZ: Could I respond to the first part before I forget? Thank you. So yes, the most desired way to use a new Global would be through `ModuleSource`. And further, we are in the process—I don't remember what the stage is—of standardizing inline module definitions, which is very exciting to me, because it would allow building a bundler that doesn't use any quirks or tricks or heavy processing of the sources, but only wraps them and puts them in one file, and still allows isolating by module source however we want, with any number of globals we desire. And that would be the main way of using this—but `eval` is necessary to exist in there. It is also a good way to give short examples to this committee, and it has some minor use cases. But I—

KG: Sorry, why is it necessary? Why does eval have to exist in there at all? I don't understand.

ZTZ: It is somewhat undeniable—but yeah. I'll take that as feedback. And we will think about making eval optional if possible. That's an interesting point. Let's move to the next thing you wanted to bring up.

MM: Sorry. I want to reply more on the eval point. ZTZ, I think you already showed the crucial issue, which is on your slide about CSP: the proposed exposure of a first-class `import` on the new Global. We're not saying that you have to use import in order to get the benefits of the new Global. We're saying that the new Global is compatible even with CSP `no-unsafe-eval` environments, where you're only going through module importing. However, the reason why I still believe we need to include the new evaluators per global is that a tremendous amount of existing code in the world uses evaluators—mostly `eval`, some `Function`. And for that code itself to execute within the scope of a new Global, the evaluators it has access to must themselves evaluate the code they are evaluating in the scope of that same global.

KG: I think I'm more optimistic than you are about the possibility of successfully running most of the code in the world without providing use of `eval`. Like, a lot of the code in the world runs under a `no-unsafe-eval` context.
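A hedged sketch of the CSP-friendly path MM describes: a first-class `import` exposed on the new global, so module sources can be evaluated without any path through `eval` (the exact API shape here is an assumption of this sketch):

```js
// Under a `no-unsafe-eval` Content Security Policy, eval and Function
// are unavailable, but module importing is not.
const g = new Global({ keys: ["fetch", "console"] });

// A first-class import on the new global (hypothetical method) would
// kick off evaluation of non-forbidden module sources in that global's
// scope, with no use of eval anywhere.
const ns = await g.import("./widget.js");
```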
KKL: I will say, if we can find a way that new Globals would be unable to reach eval at all, and instead added to the design, at a later stage, an `import` on global objects to kick off the evaluation of trusted sources, essentially, that I believe would satisfy the bulk of our requirements.

KG: Yeah, I guess the point that I want to make here is less that it is important to me that `eval` not exist, and more that I had understood that the design of the proposal was such that it is only useful if you are doing `eval`. And that’s true unless you’re doing these other things. So I don’t want the proposal to go forward if it is only useful if you’re doing eval. If there are other things that are happening that make the proposal useful without doing eval, then I’m much happier about it. I don’t want the design to be based on the assumption of the use of eval.

MM: I think that we satisfy that criterion already.

KKL: Mechanically, taking your feedback into account, this proposal must entrain the addition of a first-class `import` method on global objects so that it could have the possibility of omitting eval on new Globals and thereby not have more paths to eval. I actually really like that this gives us a place in user code to deny client programs access to eval. And we’ll integrate that feedback.

KG: Yeah. To be clear, I don’t want to say that it must include this specific feature, that global import method. I’m not sure that’s the design that I would choose for having a way of evaluation. For example, you might have a `.execute` method on `ModuleSource`s. There are a bunch of different ways you might go about doing this.

KKL: Right. Our understanding is that many of the questions are settled by previous proposals. ESM source phase imports uses dynamic import to kick off execution of module sources and doesn’t need another mechanism. The reason why this proposal, in this example, uses eval to reach dynamic import is that the associated eval is at the moment the only way to reach dynamic import for the new Global. And thereby, yes, introducing some other mechanism to kick off import would suffice. I agree it is too early to say that is the one and only, but there is already one.

KG: Yeah.

ZTZ: I’m always happy to talk about other options. If you have any suggestions, reach out to me. I’ll try to work them into this proposal, and compare and contrast.

KG: Great. All right. That was my first point. I have several other things to say. I apologize to everyone on the queue. Can we go back to the motivation slide?

ZTZ: All right. I should have learned which one it was. All right. This one.

KG: So the isolation of unreliable code, I would like to explicitly remove as a motivation. This doesn’t do that. We shouldn’t be pretending it can do that. AI can and will reach for sandbox escapes even if you don’t put it in a sandbox. Like, if we are trying to design something that is actually useful for running AI-generated code, it can’t look like this. AI cannot be trusted to not try to escape a sandbox. So either we need to actually make it safe for running code written by, essentially, the adversary, or that needs to not be a motivation.

ZTZ: Okay. I have two responses to that. One of them very clearly addresses this. So I’ll start with that.
New Global with the import hook is sufficient to implement Compartment, which would provide a complete sandbox in which AI code would not be able to escape, unless, of course, it can come up with a V8 zero-day, for example. But other than that, it won’t be able to escape a Compartment. So that’s point number one. So if that helps, let’s consider isolation of unreliable code a subset of the principle-of-least-authority point in the motivations. Although, and this is coming from me personally, I think some people in our wider group might disagree, I believe giving people a tool that provides partial isolation is also going to be useful. Similarly to how tools that do AST syntax analysis, like linters, are useful for detecting misbehavior in AI-generated or even untrusted human-generated code. And these are helpful. So there are two different motivations. One of them is to eliminate risks with unreliable code coming from AI. The other one is to minimize the impact of unreliable code coming from AI. We can serve both separately. I’m also happy to skip mentioning the other use case in the future if that would be necessary. But the main use case of fully isolating through implementing a Compartment on top of new Global is still on the table.

KG: I think that giving people something that looks like a sandbox but isn’t one is worse than not giving them one, because they will trust it when they shouldn’t. Yeah. I’m also a little bit skeptical that this is sufficient to build compartments. But you’ve certainly thought about that more than I have. We can talk about that offline.

KG: Anyway, I do want to move on to the last thing I wanted to talk about, which is that you have two points here about shimming built-in modules and emulating other hosts. There is already a mechanism for that in the platform, and soon in Node and Deno and Bun, or some subset of those, which is import maps. And I think that generally we should try to avoid redoing things that already exist. So I would like to see this sort of take more from the existing solution, or worry more about integration with the existing solution. For example, you know, maybe have some mechanism for instantiating a `ModuleSource` with an import map or something like that, rather than designing a separate feature with the same motivation.

ZTZ: Okay. I believe it is a different way of controlling the situation of wanting to emulate a host or shim built-ins. It lets you do it on a different scale and from a different point. So, this motivation is for doing these things at runtime, as opposed to doing them effectively at build time. Because you do need to generate the import map before anything starts; you need to put things in place. There’s no good way to do it dynamically as the program is progressing, or to do it dynamically in multiple instances within the same realm. I could elaborate, but I can see on the queue we have a point from GCL that this would be useful for Deno’s implementation of Node globals. So I would defer to that part of the conversation as soon as we get to it.

DLM: We have 17 minutes left in the time box. So I guess there are two replies I can just read into the record. From JSL: plus one on not giving something that looks like a sandbox but isn’t; we have had this problem for years with Node.js modules. And from KM: plus one, also agrees that we should not make things that look like sandboxes but aren’t really. And NRO is next.
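As a hedged illustration of the runtime-shimming motivation being discussed, reusing the hypothetical `Global`/`importHook` shape from above: an import map fixes its mapping before the program starts, whereas a hook can decide per global and per specifier while the program is running:

```js
const fsShim = new ModuleSource(`
  export function readFile() { throw new Error("fs is unavailable in this global"); }
`);

// Hypothetical: a global in which "node:fs" resolves to the shim, while a
// sibling global created elsewhere in the same realm could resolve the same
// specifier to the real host module.
const restricted = new Global({
  importHook(specifier) {
    if (specifier === "node:fs") return fsShim;
    return import.source(specifier); // fall back; the exact fallback API is unsettled
  },
});

// await restricted.import("node:fs") would then yield the shim.
```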
NRO: Since a couple of months ago, import maps are dynamic: you can say, when resolving this specifier from this file, go there. And you can inject new import map rules even after code is already executing. So it’s not trivial, but it might be possible to do import maps at runtime.

RGN: More generally, my point here: import maps are the existing mechanism for shimming built-in modules and emulating other hosts. If they are not serving your needs because they are quite limited, I would rather see further development of import maps, or making import maps work with module source objects, rather than designing a completely separate mechanism.

KKL: I think even pursuing that will ultimately bottom out in something more complicated than this rather than something simpler than this. Import maps entrain I/O. And they entrain the assumption of a URL-based backend which is fetchable over HTTPS. These are all assumptions that go beyond what we would like this to be expressive of: for example, being able to package an application as a zip file; if that zip file is an asset of the application, it doesn’t need to fall through to I/O, and can thereby be more portable between environments with different types of I/O backing their mapping systems. So wrestling with those concerns will end on a hook that is I/O-agnostic, which is what we propose.

KG: I don’t think import maps necessarily imply the use of I/O. They do right now, but there are a few pretty straightforward ways they can be refined so they don’t assume the existence of I/O, especially if we get things like module expressions. But I will cede this point; we have other things to talk about.

DLM: A reply from MAH.

MAH: Yeah, we should not design something that looks like a sandbox but isn’t a sandbox. I’m curious to understand better why people think this actually could be interpreted by users as being a sandbox.

KG: Points on this slide say sandbox to me.

MAH: I think the motivation is to build sandboxes; I don’t see this providing a sandbox on its own.

KG: I mean, the reason that I look at this proposal and think this is something that looks like a sandbox is, it explicitly says this is something that’s used for sandboxing. This doesn’t claim to be a complete sandbox on its own, but when that’s the motivation, that’s the assumption.

MAH: The `with` keyword: do you consider that as being a sandbox?

KG: No one has ever told me the point of the `with` keyword was to isolate unreliable code.

MAH: Yet you can build sandboxes out of it. That’s what I’m trying to say here. This is a mechanism to introduce a new evaluation context, really, a top-level evaluation context. And you can build sandboxes out of it. It is definitely a goal to be able to more cleanly build sandboxes out of it. I don’t think this proposal tries to pretend it is a sandbox on its own. That’s all I’m saying. I sure would want this to be made clear, so that users do not come away thinking they can use this on its own as a sandbox.

KKL: I would like to frame this as a module system feature primarily.

ZTZ: It only looks like a sandbox because we are talking about sandboxing built on top of it. But I believe outside of this meeting, it would not appear as a sandbox to others. Happy to iterate on that. Although, there’s a bunch of distinct topics that we could get to. So if we could do that in a different meeting, maybe within module harmony, maybe an ad hoc one, that would be preferable.

DLM: Yeah. I think that is fair enough; we only have 12 minutes left.
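For context on the `with` comparison above: a new top-level evaluation context for scripts can already be approximated today with `with` over a proxy, which is roughly the technique this proposal would subsume for modules. A minimal sketch, and, in line with the discussion above, notably not a sandbox on its own:

```js
function makeEvaluator(endowments) {
  const scope = new Proxy(endowments, {
    has() { return true; }, // claim every binding so lookups stop at the proxy
    get(target, key) {
      if (key === Symbol.unscopables) return undefined;
      return Reflect.get(target, key);
    },
  });
  // Free variables in `code` resolve against the endowments rather than the
  // real global scope. This reroutes lookups; it does not isolate intrinsics
  // or prevent escapes.
  return (code) => Function("scope", `with (scope) { return (${code}); }`)(scope);
}

const evaluate = makeEvaluator({ answer: 42 });
evaluate("answer + 1"); // 43
evaluate("typeof document"); // "undefined", even in a browser
```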
JSL (on queue): we have been telling people for years that the `vm` module is not a sandbox, and people try to use it as such.

DLM: KG, do you want to speak briefly on your topic as well, or move on?

KM: Yeah, if I saw this thing and it created a global object isolated from the other thing, I would assume it is mostly designed to be a sandbox, or at least that it would work as one relatively well. 90% of the time it does, and it is the edge cases that bite you. That’s what I would expect.

DLM: Okay. I think we should move on to GCL.

GCL: Hello. I’m also skeptical of this as an isolation thing, but it piques my interest for functionality in Deno, where we currently run isolated modules in a “separate” global to make it appear they are running with the Node variants of certain globals, such as `setTimeout()` being different in Node versus browsers. And so, being able to introduce new Globals as a language feature would make that functionality far more efficient to implement, I would imagine. So it, you know, seems interesting from that perspective at least. Just wanted to possibly throw out some other use cases.

ZTZ: Thank you.

DLM: Okay. NRO.

NRO: Yeah, can you go back to the slide where you were showing `Reflect`? Because I was very confused by that. I.

ZTZ: Almost there. Wait. No, not this.

NRO: Yes. Yes. No. 11.

ZTZ: 11.

NRO: No, not 11. It was maybe 13. Yes, this one. When you say properties are copied, does it mean that the original global `Reflect` is the same as this `Reflect` on a new Global, like the same object identity? Or is it copied in the sense that there was a `Reflect` in the original global and it recreates it?

ZTZ: Yeah. We had more context on how this potentially interacts with `getIntrinsic`, but this is the only one that’s left in the presentation. If both new Global and `getIntrinsic` end up in the spec, we would likely need to figure out a way to have `getIntrinsic` be related to the global in which it is being called, so that it gets the right values. This is something to explore beyond stage one. It is on our radar. But this is the only item on the slides left mentioning it. I think we have a small section in the readme of the proposal repository on this. But this is something to explore. Yes. The assumption was `getIntrinsic` would get the right one when you ask for evaluators. I don’t have the details.

MM: Yeah, I would like to just put in a bit of history here. When `getIntrinsic` was proposed, it was originally proposed as a global, in which case in this proposal it would be a per-`Global` global in the same way that `eval` is a per-`Global` global. Then when JHD wanted to move it to `Reflect`, we explicitly discussed in committee several times that the implication of putting `getIntrinsic` on `Reflect` is that `Reflect` itself would have to be per-global, so that the `getIntrinsic` of that `Reflect` object could be about the global it was on. The other possibility, which I would find more pleasant, is to remove `getIntrinsic` from `Reflect`. But I did agree to allow `getIntrinsic` on `Reflect` under the assumption that we’re using here: that there’s a `Reflect` per global.

NRO: Okay. Thank you.

DLM: I think we should move on to MAG.

MAG: Hello. Basically my comment is, well, so first of all, the HTML spec is actually written assuming that realm and global are one-to-one. So if you go through and you’re like, what is the entry settings object? It says, ah, check the realm, and from the realm get to the global.
What’s the incumbent global? Check the realm and get the global. They are assumed to be one-to-one. And I do worry quite a bit that there are a lot of unintended consequences if you suddenly now introduce the ability to have multiple globals. And then there’s the implementation challenge, which is that we flat out, inside of SpiderMonkey, assume that a global and a realm are the same, like, have a one-to-one relationship. And there are lots of places where we use shorthand and we say, for example, “enter the realm of this global object”. It goes, “(sings a jingle), I will check the global”. Or, give me the global of this realm. And it says, I know what that is, I have a one-to-one relationship. Anywhere we have to ask a realm for the global object, that kind of gets wonky now. So I do worry; even aside from all of the complaints, I just want to give you forewarning, I think this is actually like a really big lift to actually implement. And that concerns me. So that’s pretty much my comment.

ZTZ: Thank you for the feedback. Before I yield to MM for more details, I wanted to say that I’m anticipating that the intersection with the HTML spec would be: for anything that HTML is doing, it refers through realm to the top-level `globalThis`, and that’s the correct behavior. And whichever other global has been created along the way, if it has visibility into document and other globals that existed at the time of copying, it will work seamlessly. If we’re talking about a new item showing up in the DOM with an ID that shows up as a global variable, that would not be reflected in a nested global, if I may call it that. And I believe that to be correct behavior, and I would expect that behavior. I’m open to other opinions on that topic. But I’m more on the hopeful side that, yes, some untangling of the relationship between realm and global is going to be necessary for nested globals to behave correctly and their evaluators to behave correctly. But I believe we can reach a specification for the new Global that would disconnect it from all of the web HTML spec concerns. And I’m really grateful for your feedback to date. I’m hoping you will be available to explore further. Now, onto—

DLM: Just a second, I would like to interject. We have three minutes left in the time box. I want you to be aware of that, if you would like to continue going through the queue, or ask for stage one, or ask for a continuation.

ZTZ: If we could get an extra 10 minutes I think it would suffice.

DLM: It would not be today; we have 30 minutes before lunch. And the rest of the topics this afternoon are firsts. So I believe—

MM: If no one objects, I think we should ask for stage one. I mean, it is only stage one criteria; they are asking for stage one. I propose we do that.

KG: Can we be clear what we’re asking stage one for?

ZTZ: New Global.

KG: No.

KKL: The problem statement is what we’re asking stage one for.

KG: Okay. I don’t love this problem statement either. There are like various ideas; it’s really not very specific. I guess I’m okay with going to stage one with this problem statement, on the assumption that it means something a little bit more concrete, specifically, the ability to evaluate code in a way that doesn’t share the current view of the global.

ZTZ: Great.

KG: And then you also have the module hooks, the import hook thing, which I didn’t consider to be the same proposal, and I don’t know when they were combined.

ZTZ: The import hook remains a separate proposal.
Although the details of this proposal necessarily overlap with the import hook. So we might consider merging them in the future. We went with splitting into smaller proposals, which is what’s been going on for a pretty long time in the compartment world. So we are asking for this specifically.

KKL: I think that this problem statement closes over both of the designs, and that compels us to merge the proposals.

DLM: So we’re at time right now. There is a point of order. I suggest tabling this for now, and hopefully we will find time later in this plenary to come back to it.

MM: Can we just ask if there are any objections to stage one?

DLM: Fair enough. Any objections to stage one?

JSL: Not a strong objection, but I would like to see some iteration on the wording of the problem a bit more. It is just troubling me that it is kind of too open-ended.

KKL: I agree. Let’s refine that out of band, to specifically mean the module map, the specific things we wish to isolate that are not intrinsics.

JSL: I just want to be clear, I’m not wording it as an objection to stage one, but I would like to get that cleared up during the plenary.

DLM: Yeah, why don’t we come back with a problem statement? We need to move on with the next topic. We will bring the problem statement back and have a discussion later on. I will capture the queue and we can move on to the next topic.

### Speaker's Summary of Key Points

* We're proposing to introduce a `Global` constructor with minimal functionality necessary to decouple the concept of global from realm. The proposal is the minimum change sufficient to implement isolation, including Compartment, in user code. It replaces the Evaluators (Stage 1) proposal, which was earlier extracted from the umbrella Compartment proposal.
* We are requesting feedback, especially on minimizing intersections with web standards.

### Conclusion

* After debating the proposal at various levels of detail, we reached our time limit, along with a conclusion that we need to refine our problem statement to better represent what we are trying to achieve and omit prescribing specific solutions too early.

(conversation continues)

## Import Buffer for Stage 1

Presenter: Steven Salat (STY)

* [proposal](https://github.com/styfle/proposal-import-buffer)
* [slides](https://proposal-import-buffer.vercel.app/)

STY: Okay. Hello world. My name is Steven. I work at Vercel. And on the internet I go by styfle. Today I’m going to be proposing import buffer. I should also mention that GB is one of the authors as of yesterday, so the slides are outdated. Sorry about that. Also, I should mention that this was originally for stage one, but I was told that we might actually be able to go for stage two. I guess we can talk about that at the end. I do want to call that out.

STY: So this proposal is built on top of import attributes. We have import `type: “json”`. This is adding a way to import arbitrary bytes; in this case, an immutable `ArrayBuffer` with `type: “buffer”`. You can see there is static usage or dynamic usage here. And this provides us a common way to do it.

STY: So, the motivation: similar to, you know, being able to import JSON, importing raw bytes lets you extend the same behavior to all files and have an isomorphic way to read a file regardless of the environment.
One particular case we run into a lot is isomorphic tools like Satori. This is a JavaScript library that lets you render HTML to an image, and it works in Node.js and other JavaScript environments, but it also works in the browser. You need to pass it, you know, PNG files and WOFF font files and things like that. So the problem, if you want to use a tool like this, is you have to write code that is specific to different JavaScript environments. Right?

STY: So you might write something like this. You might check to see, you know, are we in Deno, and read a file and go from there; but in Bun we might use the Bun global to get that file. Similarly for Node, we might use `fs/promises` to do a `readFile`. And of course, from the browser, we’re going to do something like a fetch and convert that to an `ArrayBuffer`. And I should mention that a lot of these runtimes are supporting Node’s API because there isn’t a JavaScript standard to do this. So they use an `fs` `readFile`. But I do think this is still an important JavaScript language feature, because we don’t really have a way to do it that works isomorphically with browser and backend or even embedded environments.

STY: Okay. So, the solution, pretty simple. We can say `type: “buffer”` and with our import we get the buffer back. Now, one of the cool things is once we have this in the language, we can have bundlers optimize this behavior. So you know, bundlers will take a lot of JavaScript input files and convert them into a single file. It might be something like this, where it comes across an import for a PNG and converts it into base64 and inlines that. And now, you can distribute a single file.

STY: So yeah. I mean, similar to JSON imports we have an attribute; sorry, the key is `type` and the value is `”buffer”` instead of `”json”`. The host must either fail the import or treat it as an immutable `ArrayBuffer`. For browser environments this would be similar to a fetch: the fetch headers would be empty, and regardless of the content type that the server might return, you can ignore that. We’re just getting the response body with those bytes. And for local environments like Node.js, etc., this would be similar to `readFile`. And the file extension would not impact it. It would be ignored. Again, we just care about those bytes.

STY: Prior art: how are people doing this today? Right? So, Deno just shipped `type: “bytes”`, and it looks pretty similar; it gives a `Uint8Array`. I believe they expressed interest in changing to `ArrayBuffer`. Webpack has an asset loader, and this is commonly used for importing a PNG and inlining it. The Moddable SDK, which I guess is an embedded JavaScript runtime, uses this resource class to grab an image and have that embedded into ROM. And then, you know, Parcel also has a similar feature using data URLs. Bun, I believe, uses `type: “text”`; I don’t know if they have a general bytes import yet, but they might be interested as well.

STY: Okay. So, why are we suggesting immutable instead of mutable? Or rather, why not mutable? You may need multiple copies of the buffer in memory to avoid conflicts between different imports. If you have multiple modules importing the same buffer, this can cause detachment issues. So, for example, `postMessage`: there are detachment issues if it is mutable and you want to do a `postMessage` or `transferToImmutable`, and you might have two different modules that are importing the same module specifier. So immutable solves that.
And then similarly, I mentioned the Moddable SDK and embedded systems: if we use immutable, they can rely on ROM instead of RAM, so it is beneficial for those environments as well.

STY: This proposal started out with `Uint8Array`. We got feedback, and as has kind of been mentioned, we may want to go back to this. But the reason why it is `ArrayBuffer` for now is that `Uint8Array` is just one type of view over the underlying `ArrayBuffer`; there are many types of arrays. So `ArrayBuffer` seems like a primitive that you can, you know, add whatever view you want on top of. And then, why not Blob? Well, Blob is part of W3C; it is not part of JavaScript. It doesn’t seem appropriate for a TC39 proposal. Similarly, something like `ReadableStream` wouldn’t seem appropriate either, and it also just adds more complexity if you wanted to read a stream rather than just having the buffer already in memory.

STY: So in summary, we get isomorphic file reads across all JavaScript environments. It’s going to make this a language feature. Reduce our boilerplate code. We get bundler optimization opportunities, so bundlers can inline it. And we get to take advantage of memory-constrained environments and put it in ROM. Yep. That’s it for me. Thank you so much.

DLM: Okay. Let’s go to the queue. First off we have, it looks like, a clarifying statement.

GCL: Yeah. Hello. I just wanted to clarify that Deno did not ship that. It is still an unstable feature, because it is not specified anywhere. So.

STY: Yep. It makes sense.

DLM: Next, NRO.

NRO: Yes. First, thanks for this proposal. I like it. Yeah, thank you for it. About `Uint8Array` versus `ArrayBuffer`: I would prefer if we used `Uint8Array` here. The original proposal had a `Uint8Array`; it was determined that this should be immutable. I think it is perfectly fine to have a `Uint8Array` backed by an immutable buffer. But there are a few reasons for me to prefer a `Uint8Array`. One is that when working with binary data, in practice the most common case is working with bytes, and `Uint8Array` is the representation for bytes. If you need any other type of TypedArray, you will need to basically do a conversion step from `Uint8Array` to the other type of array. But you need that conversion step anyway if you start with an `ArrayBuffer`. And also, at least on the web, the web has W3C guidelines for new APIs, and the recommendation is that every API that exposes generic binary data should do it as `Uint8Array`, so for the common case there is one less conversion needed. And exposing a raw `ArrayBuffer` is not considered the common case.

STY: That makes sense. I would be curious to hear other thoughts as well. I did have this as a `Uint8Array`, and there did seem to be pushback. So I’m curious if anyone else has thoughts on `Uint8Array` versus `ArrayBuffer`.

JSL: I’m in the queue, if you don’t mind me jumping in; it relates to this. In Cloudflare Workers we have had this ability to import a data module for a while now. It does import it as an `ArrayBuffer`, and it is mutable. Ideally, if I could go back in time to do that from the start, it would have been immutable if that were available, and we would have done it as a `Uint8Array`. So I definitely think that leaning towards that is the better option.

DLM: Okay. Anyone else? KG with support for `Uint8Array` in a message. KM, do you want to speak?

KM: Sorry, no, I prefer the `Uint8Array`.

DLM: Sure. Also from KM, then. And next up is MM.

MM: So I think I was one of the people that objected to `Uint8Array`. I retract my objection. I’m happy with that.

DLM: Yeah. MF.
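For reference while the queue continues, the shape under discussion, adjusted for the `Uint8Array` preference just voiced (the attribute value is still subject to the bikeshedding noted below):

```js
// Static form: resolves to a Uint8Array backed by an immutable ArrayBuffer,
// regardless of file extension or Content-Type.
import font from "./font.woff2" with { type: "bytes" };

// Dynamic form:
const { default: icon } = await import("./icon.png", {
  with: { type: "bytes" },
});
```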
MF: Thank you for this proposal. I really like the motivation here. I think this is good. And I also really appreciate that we’ve seen that there’s precedent for this already in experimental implementations, which is very compelling. I do want to question, though, the conceptualization here, which includes, like, the placement and how it is being done. I know that, you know, there’s already this precedent that is probably a good idea to follow. But you know, I feel like this isn’t really a module system feature. You know, we are not importing a thing that would be eventually used as a module and loaded as a module. And the example you had is a resource loaded as a bitmap, and other things that are not JavaScript. And you know, the feature is like using module specifiers, assuming they are, you know, allowed to be any resource identifier, which is commonly the case in many implementations. But it just seems a little bit shoehorned in. Is the thing we’re looking for more generic I/O capabilities? And it seems like it is. It seems like that’s what we want. Does that make the module system the appropriate place to put it? Probably this is not going to change anything, but I did want to bring that up and see what people thought about it.

DLM: NRO?

NRO: Hi. Yeah, I mean, we already have JSON modules that are the same thing. There is a lot of use in having data dependencies be defined through the module system: it makes them easier to statically analyze, and it makes it easier to fetch things earlier; you don’t have to wait for the module graph before starting to load the resource. You can just load it at the same time. But yeah, this is exactly the same model, it is just modules. And JSON modules on the web are not part of the module graph; they are basically just loaded, and then you can inject them in the page.

KKL: I think where this differs from a generic I/O capability is that it is a module system feature, and it allows us to respect package boundaries in a way a generic I/O feature would not be able to. In particular, having a generic I/O capability would allow us to solve the simplest case of an application package being able to refer to its own assets, or a library package being able to refer to its own assets, but we would lack an ability to specify assets that transit package boundaries, or scope boundaries in the parlance of import maps. And this being a module system feature allows us to express that in a way that is both I/O agnostic and also packaging agnostic. It makes it possible to express an application that depends on assets that cross packaging boundaries in a single way that is portable across a bundling step without modification of original sources. This is obviously already useful for JSON; it will be useful for other kinds. It is explicitly my hope that the prohibition of using import directives within imported CSS modules gives us an opening so that in the future CSS could participate in module resolution as well.

DLM: Okay. GCL?

GCL: Yes, I also feel this sort of instinctual hesitation towards things that sort of look like I/O capabilities in the module system. But I sort of approach it from the perspective that we don’t really have, you know, the alternative methods that other languages, especially compiled languages, have; you know, `include_bytes!` in Rust. So having a similar sort of analyzable way to do things sort of makes sense to me. Even if it is kind of like—yeah, not the most aesthetically pleasing at first glance.
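Returning to the bundler optimization STY described: a sketch of how a bundler might inline such an import, decoding an embedded base64 payload with `Uint8Array.fromBase64`; the payload and module shape are illustrative only:

```js
// Before bundling:
//   import icon from "./icon.png" with { type: "bytes" };

// After bundling, a tool could emit a synthetic module in its place:
const icon = Uint8Array.fromBase64("iVBORw0KGgo="); // PNG signature bytes, truncated for illustration
export default icon;
```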
DLM: ABO?

ABO: Yeah. So as has been mentioned previously, the HTML spec adds CSS modules, and although JSON modules are defined halfway in, like, across both HTML and TC39, it is possible, if we do not want to define this here, that we could define this in HTML and then say that WinterTC-compatible runtimes should also support it. So it would still hopefully achieve the goal of being available in most places that matter; like, the goal for WinterTC is that all server-side runtimes should implement the common API. And although we explicitly do not cover things like embedded runtimes, there is a conversation to be had there that could be useful as well.

DLM: Okay. KG?

KG: I think a way this is importantly different from a general I/O capability is that if two different modules which don’t know anything about each other import the same resource, there is only one fetch. That is a property that is true of modules and not generally true of I/O, so it does make sense to consider this part of the module system, just because it has that property, and the property is important for people to be able to rely on. It is a bit of a kludge, but eh, it seems good.

DLM: Okay. KKL.

KKL: Briefly, I forgot to mention: I am untroubled by the fact that this would allow you to use the host as an I/O mechanism, specifically because we address that with the Global and import separation.

DLM: Okay. JSL? Oh, a message. Okay: plus one, I don’t think it has to be done here; WinterTC is a good option for standardizing buffer. JSL, did you want to speak more on Workers, or was that covered by your earlier comment?

JSL: I think it is covered. But a little bit of background on the I/O point. The reason we added it: this predates having any I/O mechanism at all in Workers; we added it because we didn’t have that option. I’m actually implementing a virtual file system in Workers now, so we might have been able to do it that way. But at this point, given the two options, I actually like this better for the use cases that we have. So, yes. I get that it does kind of blur that line a little bit. But I think the use cases, and the simplicity of this, kind of outweigh it. That’s it.

DLM: Thank you. EAO.

EAO: I like this, with the observation that, I think, so far in my entire history of using JavaScript, whenever I needed to do something like this from code, I have been importing text rather than bytes. So if or as this kind of makes sense, I think we should also be adding `type: “text”` or something similar to import text assets into JavaScript, with the same sort of idea as this proposal. The obvious type for that would be a string, and I strongly feel we really ought to support only UTF-8-encoded text with that. With `type: “buffer”` you would, of course, be able to decode the string value out of that. But we should make the UTF-8 text import less clunky.

DLM: Reply from KG.

KG: I believe that wanting text is very common, but I think this proposal is well-motivated on its own; there are a lot of good examples, and a lot of examples I have from my own experience. And I think text would be a different proposal. I would be supportive of such a proposal. But I think it is a separate thing.

DLM: We have a plus one from KKL and the same from MF. And then, next up is KKL.

KKL: Yeah, I have a mild preference, and am gratefully hopeful we can settle on `type: "bytes"` with a `Uint8Array` backed by an immutable `ArrayBuffer`. I think that is the least controversial selling point.

DLM: Thanks, KKL. JHD.
JHD: Yeah. I think the use case is great. I like it being a `Uint8Array` backed by an immutable `ArrayBuffer`. I think we will be spending a lot of fun time bikeshedding the value of the `type`, “buffer” or however that is spelled. But I think that is something that can solidly be done during stage two.

DLM: Okay. Plus one, "I love it", from MM. And next we have KM.

KM: `type: "bytes"` makes more sense. But plus one for stage one on it.

DLM: Great. So we already heard from a number of people that specifically supported stage one. I think it makes sense to call for consensus on stage one. Is anyone opposed? Okay. Great. Congratulations, Steven, you have stage one. And then we have NRO with a topic after the consensus call.

NRO: Yeah, maybe first STY wanted to see if he can get stage two, I think.

STY: Yeah. Admittedly I didn’t even realize it was a possibility until a couple of days ago. So I didn’t look up the prerequisites for stage two. But yeah, I think it is worth bringing up.

DLM: Cool. Okay. In this case, what do people feel about stage two? Plus one to stage two, but not strongly, from James. It would be nice to hear at least one other person that explicitly supports stage two. Okay. JHD is also behind stage two. NRO is also good with stage two. Is anyone opposed to stage two? MM also strongly supports stage two. And DJM.

CDA: I did not check: do we have spec text with all of the major design fleshed out, and syntax?

NRO: There is a bug in the spec text. It is almost an editorial bug. Everything is good there.

CDA: Everything major is there in the spec?

NRO: Yep.

DLM: MM, did you want to speak? No. Okay. New topic from EAO.

EAO: I figured I’d try my luck. So can we have stage one for this proposal but replacing `”buffer”` with `”text”` and returning a string, presuming that is UTF-8?

DLM: Okay. MF.

MF: No, I’m not comfortable with going to stage one on a proposal I have not seen. I’m sorry; in my head I have an idea of what we want for `type: “text”`, and I’m supportive of that generally, but I would like to see that written down.

EAO: But if you are supportive of that generally, doesn't that kind of mean stage one?

MF: The word “that” right there is doing a lot of work.

EAO: Got it. I will bring this back later.

KG: I think you can reasonably try to go to stage 2 or 2.7 if you have it spec'd. We don’t need to go incrementally.

EAO: Especially if we can copy/paste this one. Or as much as we can. Yeah.

DLM: I think I mentioned the plus one from DJM for stage two. NRO, did you want to talk?

NRO: I have a quick request. The spec text for this proposal is very minimal, and most of the semantics would be in HTML: how the fetching of the bytes actually works, how it interacts with HTTP headers. So before going to stage 2.7, I think we need to have a complete pull request for HTML that the browsers sign off on. Like, basically, we could ship the HTML part while getting to 2.7 and then finish it on our side.

DLM: Okay. I guess with that, on my end, we can break for exactly one hour for lunch. Thank you, everyone.

### Speaker's Summary of Key Points

* Change imported type from ArrayBuffer to Uint8Array (still backed by an immutable ArrayBuffer)
* Change { type: “buffer” } to { type: “bytes” }

### Conclusion

* Import Bytes has Stage 2

## Continuation: How to make thenables safer?
Presenter: Matthew Gaudet (MAG)

* [proposal](https://github.com/tc39/proposal-thenable-curtailment)
* [slides](https://docs.google.com/presentation/d/1_RCnI7dzyA1COgi_Ib7GkmHioV1nVSEMmZvy0bvJ8Kk/edit?slide=id.p#slide=id.p)

MAG: So the first one was a reply from KM about “wouldn’t it be cached in the shape of an object?”

KM: Sorry. I think I have forgotten the context.

MAG: Basically the question is whether or not we would actually be okay with storing the internal slot, you know, “internal proto”, and whether or not that would be okay? And you were like, yeah, it is probably just cached in the shape of the object, so I think it is fine; I suspect that is what you were going to say.

KM: Yeah, maybe, I have kind of forgotten the context a little bit. Sorry. This was about—oh, oh. Sorry. Yeah, that slot. Yeah. Like whether it was a built-in or what was the—I forgot what the reply was to.

JRL: So, how do engines feel about the extra bit on every object?

KM: Yeah. It seems like that would sit inside of your internal VM shape, and then that bit wouldn’t be, like, actually on the object itself. But, on the other hand, these objects are all basically singletons already; I’m not sure the shape buys you that much with singleton objects.

KG: I guess I was worried about whether this is something that would increase the memory for unrelated objects?

KM: I don’t think so. I think you would probably put it in some static data that lives—I mean, for us, it would probably be in what we call the class info, which lives in, like, text in your binary. So it wouldn’t actually make the object bigger; there would maybe be perf issues because of that, but I doubt it.

KG: People were proposing that users would be able to create more of these objects. So it couldn’t be completely static.

KM: Then it would have to live in the shape, probably.

CDA: I’m not seeing it on the queue, which is fine. There is a reply from MAH: `Symbol.unthenable`.

JRL: No. It was deleted yesterday. I had it removed.

CDA: Oh, I’m sorry.

NRO: Yeah. Maybe KG also just mentioned it, but it is important that userland objects are able to opt into this. The reason is, yes, there are built-in objects, like objects defined by a WHATWG spec, that are implemented in JavaScript. It would be great to make it possible for those objects—which are, yes, not actually user objects, but are still user objects from the point of view of the engine—to opt in properly. Personally I think it would be nice if it was possible to opt in objects even if they were not defined with class syntax. But maybe it is fine to say, well, no, you must do it in class syntax, so you can restrict it to newly created objects.

CDA: All right. There was just a note from MM asking to bring this to TG3 for further discussion, presumably before the next plenary. It says between plenaries. Ah—go ahead, I’m sorry.

MAG: I’m sorry, I’m fine with trying to make a TG3 meeting and see what we can hammer out there. I guess my big question here is from yesterday’s discussion—or, goodness, was it yesterday? Yes, yesterday.
From yesterday’s discussion, it kind of sounds like, while people appreciate the idea of the extra ticks for certain, you know, nice properties, basically the performance impact rules it out, and those who prefer the extra ticks would rather have something that works; so probably I should continue pushing on the internal slot idea. And I just wanted to make sure that the internal slot actually has a hope of making forward progress before I go off and start actually pushing on it.

MAH: Yeah, sorry, I’m not on the queue, but I’m also wondering if it is worth keeping exploring your second option, which is a mechanism for hosts to explicitly be more robust against this class of issue.

MAG: I’ll review my own slides.

MAH: So, again: do we need operations like a safe-promise-resolve or a get-safe-promise-capability?

MAG: Right. And then the idea would be that embedders who care about this, or people who care about this, would deploy that instead. Yeah. I think that is worth having a longer discussion about at TG3. And I’m willing to spend a little bit more time on that. But my gut still says that the `[[InternalProto]]` flag will be the most successful in this direction. But—I like solving problems with more generality if possible. And then my one concern about the flag approach is that we have to name it. And—yeah. I hate the idea of trying to name this.

OFR: Yeah, unfortunately, I don’t have the presentation really paged in. But you were discussing this needing more memory; I would be concerned about it making lookups slower, because we would have to check in multiple places if a property is defined. Would that be the case, or am I remembering this wrongly?

MAG: So the hypothesis here is basically that it would be a new abstract operation, used only during promise resolution, where it requires doing lookup up to a point. So you will walk the prototype chain, but if a prototype has a certain property, aka has an `[[InternalProto]]` slot or flag or however it is represented in the engine, then you stop. And you say, nope, no property. The rest of the promise algorithms continue. So I actually don’t think it should be any slower, and if it is slower, it is slower by a very small amount and only for the promise property lookup. So it is a very targeted fix.

OFR: Right. It would only affect the property with the name `then`, or in general any kind of lookup?

MAG: We would write the abstract operation so it could use any property. But in practice the only deployed version of it would be for `then`. Yes.

OFR: Okay.

MAG: Potentially `constructor`. There is a different conversation thread I opened for an issue that is slightly germane, but slightly off topic for this conversation here.

OFR: Okay.

CDA: JRL?

JRL: So from the discussion yesterday, it seems there’s the core CVE issue, which we should be actually trying to fix. The core CVE is because we’re taking spec-created structures and trying to resolve the promise capability with those structures, and sometimes they have a `then`, and we actually don’t want to respect those `then`s. I don’t think that requires a slot in order to fix. It just requires us to rewrite the way that we define these algorithms. There is also the discussion of making thenables safer for userland objects, which also does not require an internal slot to define. It just requires us to define the way we access the `.then` property in a way that doesn’t make every object slower. That was my point yesterday.
So I think—I don’t think doing a flag is the right approach for this. There are two things that we need here. There is making the specific sites that require this in the spec safer. And then there’s the separate argument for making thenables safer from userland.

KG: How do you do this without the flag?

JRL: It just requires a different wrapper. For the promise capability, the case I remember off the top of my head is doing module resolution, where `Object.prototype.then` is defined because of an HTML tag. We don’t call any `then`; it resolves to the thing, which we know exactly what it is. It doesn’t try to adopt the state of the thenable.

MAH: That’s exactly what I said: safe operations, a safe promise capability and a safe promise resolve, so the specs can use those where the values are not supposed to be thenable objects.

KG: Yeah. Those are kind of different suggestions. The safe promise resolve we were talking about would still adopt the state of the thing, but in a later tick. And what JRL is suggesting is to make certain algorithms not respect thenables. Is that right?

JRL: Yes.

KG: Okay. I don’t think there was much appetite for making certain things not respect thenables the last time we talked about this. But maybe there is.

MAG: My one concern with that: I’m particularly interested in taking this and then extending it onto, you know, WebIDL as a spec. Because in a number of the cases I’m thinking of, the actual problem is not an EcmaScript-defined spec object that gets resolved, it is a WebIDL object. So WebIDL takes a value and converts it to a JavaScript value when you resolve it. The definition has `Object.prototype` as the prototype, or on its proto chain, and it is exposed and can actually be a thenable. That is why I’m slightly more interested in the flag direction: I think it layers better into WebIDL. I haven’t thought that deeply about the actual mechanism of doing an algorithmic change, though, so I’m open to being convinced here that I’m wrong.

CDA: MAH.

MAH: I just spoke. Basically I want to reiterate: I think JRL’s and my suggestions are very similar; it is just how we handle the values that is slightly different. But in general, they are both based on fixing the specs and how the specs handle these resolutions, instead of changing the base objects against which these resolutions may work.

CDA: JRL?

JRL: The WebIDL example here: WebIDL, I think, either calls the promise capability directly or it uses promise resolve, which means essentially we just need to change those two definitions that WebIDL is calling to use a safe variant instead. Like, we define the abstract op that they call, and we can just define a safe abstract op for them to call; instead of using the old thing that is unsafe in this example, they call the new thing that is safe. But it still doesn’t require us to add a hidden slot to every single object that exists in JavaScript.

MAG: So right now it basically is just `NewPromiseCapability` and then calls the capability's resolve; you’re saying it would be like `NewSafePromiseCapability`, and then call `safePromiseCapability.[[Resolve]]`.

JRL: Yeah.

MAG: Okay. That sounds plausible. This is not a stage two or anything proposal. Like, we should just talk more about it, and if you want to open an issue so that we can concretize it a little bit, I’m totally fine with that.

KG: JRL, would we expose this capability to userland so it could be polyfilled?
JRL: With userland here is you would…

KG: No—right now you cannot have a promise that holds an object that has a `then` property. You cannot do that. (edit: turns out to be false, see [tc39/proposal-thenable-curtailment#5](https://github.com/tc39/proposal-thenable-curtailment/issues/5#issuecomment-3145520373)) This would make it possible to create such promises, but only from certain spec items.

JRL: I need to think about this more.

KG: I’m not opposed to having this capability exist in userland. I’m just wondering.

MAH: For userland, I think the minimum scope there—but I think maybe we should have it as a proposal—the minimum scope might be a simple is-promise check that doesn’t do anything else, so userland can do whatever it wants in that case.

KG: That doesn’t let you do the thing that this proposal lets you do. Like, you could not implement—just having `Promise.isPromise` does not let you create a promise that holds a value with a `then` property, and this thing that we’re proposing does that.

MAH: To be clear, I would be opposed to a promise that resolves into a thenable.

KG: Well, that’s what JRL was suggesting.

MAH: Internally maybe, but I would not want that for userland. I would never want a native promise that resolves to a thenable in userland.

KG: Okay. Well, the thing that JRL is suggesting would make the spec impossible to polyfill.

JRL: With the other approach here, the internal slot that says don’t respect this object's `.then`: if I define `Object.prototype.then`, and whatever is calling the API resolves with the object, I have still done the same thing—I hold a promise that holds a thenable. I don’t think there is any solution here that avoids the problem of a promise holding a thenable.

KG: Yeah, that sounds right to me.

CDA: MAH.

MAH: Yeah, no, I’m thinking through this. No, I spoke already to my point, but I don’t know how to answer. JRL, you’re right. That seems to point more towards MAG’s first proposed solution, to make the prototype exotic and prevent it from ever becoming a thenable in the first place.

CDA: NRO.

NRO: Yeah. I like, in principle, JRL’s approach of having a different AO that we are calling. But instead of the AO being done in a way that just returns to the user the promise that contains the thenable, could it be, like, a spec-internal promise, and then, when we actually expose the promise to the user at the end of the spec algorithm, add one tick to actually resolve the thenable? So that there is one extra microtick in those cases for web APIs—not in general, only for web APIs that use this special safe call—and only one extra microtick when the algorithm resolves.

CDA: KG?

KG: Yeah. Just with regards to making `Object.prototype` exotic: I do kind of like that idea, but I want to note that not all of the CVEs would be fixed by it. Sometimes you are resolving with a spec-created object that doesn’t inherit directly from `Object.prototype`; it inherits from an Animation or something. Which is why it is nice to be able to do this with more than one thing, other than just `Object.prototype`.

CDA: The queue has been drained.

MAG: All right. The conclusion that I’m going to write down is basically: design continues apace. People should bring some issues to the repo. I would like to have some longer-form conversations about these, particularly written, because I’m bad at thinking on my feet like this.
And then I will try to bring this back at a later date. And hopefully come back with something maybe a little bit firmer. Maybe something with spec text and some suggestions, and, depending how much time I have, maybe even a prototype. We’ll see. Sound good?

CDA: Great. Thank you, MAG.

### Speaker's Summary of Key Points

* Resolved a conversation about whether or not engines would support the `[[InternalProto]]` design; it seems unlikely to be too challenging for engines to implement. Discussed a bit of performance impact; suspect it to be low.
* Reiterated the desire, if we were to follow `[[InternalProto]]`, that there be a userland mechanism for this—Deno and other hosts implement Web specifications in JS in a way that would likely desire the internal slot to be set.
* It does seem worth considering other resolve algorithms to be deployed at various points.
* Some discussion of the definition of thenable and whether we’re willing to change it; some post-discussion discovery that you can have a promise hold a value with a ‘then’ property with some finagling right now.

### Conclusion

* I think we’ve eliminated the most general form of the extra-ticks approach from the running. Still some possible designs to explore: Justin proposes that we could explore something akin to `NewSafePromiseCapability` and `safePromiseCapability.[[Resolve]]` which handles this. There is some concern about making sure we don’t leave behind promises with a `then` as the contents of a promise, which is a danger.
* I would encourage issues to make sure we can follow up on this, and I’ll see if I can’t attend a TG3 meeting to keep pushing this forward.

## Continuation: [Keep trailing zeros in `Intl.NumberFormat` and `Intl.PluralRules`](https://github.com/tc39/proposal-intl-keep-trailing-zeros) for Stage 2 or 2.7

Presenter: Eemeli Aro (EAO)

* [proposal](https://github.com/tc39/proposal-intl-keep-trailing-zeros)
* [slides](https://docs.google.com/presentation/d/1hKJFrDfiGeqPWm51fQFQb4M4CeYm3ultB7Opef1BVuE/edit?usp=sharing)

EAO: Here. So, this is where I left off yesterday at nearly reaching 2.7. WH mentioned a blocking concern that he wanted to discuss during the Amount presentation. And then after we assigned the stage 2.7 reviewers to be RGN and SFC, Shane completed his review of this. So I think—

JRL: Did you freeze?

CDA: I can’t tell if EAO froze. He looks frozen. You are either frozen or you are very still and very concerned. And we lost him. All right.

JRL: It sounded like he was saying SFC has approved. Oh, here he is back.

EAO: Sorry. I lost my network, I had to switch to my hotspot. Sorry about that. Yes. I was saying SFC has completed his review of the spec. So I think we might be ready to ask for 2.7. Before I do that, I think the right thing to do is check with WH, who I hope is still on this call, whether I’m correct in asserting that we can address the issue within *ToIntlMathematicalValue* that you identified separately. That is specifically about the behavior around step 12, where we use the *RoundMVResult* to then assign values that are too big or too small to positive or negative infinity or to zero. WH, is this the case, or is there still something that we ought to address within this keep-trailing-zeros proposal?

WH: You can do this as a separate change, I’m fine with that. My main concern is that we fix this bug quickly.

EAO: Yes. SFC has opened an issue on ECMA-402 for us to track revisiting the limits in *ToIntlMathematicalValue*.
We aim to do so separately, and in parallel with any of the other work that happens to touch *ToIntlMathematicalValue*. But as you might note here, the line that is causing the issue is not being touched by this proposal at all. The behavior on that path is unchanged here.

EAO: There is a new topic, SFC?

SFC: Well, you already presented the issues. So that is all I needed to say.

EAO: Excellent. If there’s nothing else on the queue, I would like to ask for stage 2.7.

WH: Sounds good to me.

CDA: Okay. WH supports stage 2.7. And NRO on the queue.

SFC: I support.

CDA: SFC as well. Are there any objections to advancing to 2.7? All right. You have stage 2.7.

EAO: Excellent! I think that is it for me, unless there was something else procedural I should be asking for next. In case there isn’t, thank you and good-bye.

CDA: All of the reviews are complete. Right?

EAO: Yep.

SFC: Yep.

CDA: Good. Nothing else.

### Speaker's Summary of Key Points

The reviewers (RGN and SFC) have completed their reviews, and WH’s previously blocking concern regarding *ToIntlMathematicalValue* was moved to a separate discussion.

### Conclusion

The proposal reached Stage 2.7.

## Continuation: Module Import Hook and new Global for Stage 1

Presenter: Zbyszek Tenerowicz (ZTZ), Kris Kowal (KKL)

* [Module Import Hook proposal](https://github.com/endojs/proposal-import-hook)
* [new Global proposal](https://github.com/endojs/proposal-new-global)
* [slides](https://github.com/endojs/proposal-new-global/blob/main/slides/stage1.pdf)

ZTZ: All right. So, yeah, we’re going back to the continuation of what I discussed around globals, and the revised version of the problem statement is:

* A way to evaluate a module and its dependencies in the context of a new global scope within the same realm.

And we’re also declaring for stage one that we’re tentatively updating the proposal name to proposal-module-global. With that, I would like to ask for stage one.

CDA: OFR?

OFR: Yeah, I was just wondering. There was a discussion before also about considering whether the stated goal is achievable with existing mechanisms in the standard. And so, I was wondering if that maybe could also be reflected somehow in the problem statement?

ZTZ: I believe the problem statement reflects the proposal being different from ShadowRealm. If you were referring to import maps, they do not solve the entirety of the problem statement, because it says “context of a new global scope”. They could be interpreted as an optional component in achieving the “its dependencies” part, although this is a post-stage-1 concern.

CDA: JHD?

JHD: Yeah. So stage one sounds great. But the implication of this problem statement is that there’s more than one global scope in the same realm. There was actually discussion on different topics to suggest it may not be at all viable to have a realm and a global not be one-to-one. So it might be helpful to tweak the problem statement, and say stage one, but with the condition that the problem statement is tweaked so as to not imply that. In other words, if that’s a viable path forward, great. But if that is not a viable path forward, then it is not clear this has anywhere to go.

KG: I mean, I think the proposal just dies in that case.

KKL: I agree. If this problem statement cannot be achieved on the underlying reality of the web platform in particular, then the proposal would die. We are optimistic that that can be addressed, though.
And we look forward to speaking with implementers, and to investigating ourselves. + +JHD: That’s fine, I just want to make sure that eventuality was called out. + +CDA: NRO? + +NRO: Yeah, I’m happy with the updated problem statement. Just a question: is it okay for you to focus on modules, because with an eval it also covers scripts, right? + +ZTZ: Yes. + +KKL: Yes, that is expressly why we’re bringing the proposal. We already do this for scripts, because we have a mechanism built on `with`, direct `eval`, and proxies, which is able to do most of what we would be able to achieve with this; except it lacks the fidelity of a native module system, and we have to entrain Babel to link, in order to support ESM. Yes. + +MM: Yeah, just to further clarify in response to NRO’s question. There are various infidelities, various ways in which we cannot do it for scripts with full fidelity to the normal language. Such as import expressions; such as, if there is a function defined at the top level and invoked as a function, what does it see for its `this` value. There are weird things where we just can’t get it right by building our script evaluations using proxies and all of the rest of the things we’re doing. + +CDA: Sorry, MM, you were kind of trailing off at the end there. + +MM: Having said all that, I think it doesn’t bear on what we’re asking the committee for, I just wanted to clarify that we can’t quite do it for scripts with existing mechanisms. + +MAH: But it is also something we don’t need to explore in this proposal. It would be interesting to do it in this proposal, but it is not a requirement. + +MM: Agreed. + +ZTZ: Okay. + +CDA: All right. We have a few replies in the queue I will read. First one, from JSL: "I have strong doubts on the proposal as is. Stage one is fine. Expect a higher bar for stage two." MAG is also echoing JSL: while there are concerns, I think we can explore them during stage one. KM is plus one on echoing the stage two concerns. Okay. All that being said, I’m kind of hesitant to do this, but I apologize, I didn’t know we had a queue saved from earlier. I’ll just briefly read out the comments, and if folks want to hop on the queue to talk about these more, then you can. MM had a comment. + +MM: I’m sorry, before we do that, do we have stage one? + +CDA: Let’s, if we want to formally do that before we go back to the previous queue, that’s fine. Do we have support for stage one? I believe so. + +MM: Many people expressed support on TCQ. + +CDA: That’s right. Sorry, I was just looking for the notes to confirm. Yeah, there were definitely three or four people. I don’t believe that we have any objections to stage one. I’ll wait a moment for anyone to chime in. Lots of concerns expressed about stage two. But I believe you have stage one. + +MM: Great. + +ZTZ: Thank you. + +CDA: I just wanted to quickly scroll up to the screen capture. There’s some replies from MM and KM on something, MM’s comment is about—policy. And— + +ZTZ: I remember: this is in the context of overlap with globals in the browser, and the HTML spec defining certain globals and their behavior in a context where the global is a single thing in the realm. And— + +CDA: Yep. + +ZTZ: This was definitely a stage two concern, and what we concluded during the lunch break is that we don’t really want any global other than the one top realm global to be concerned with any of those overlaps. And we will proceed with looking for a solution to that. + +CDA: That sounds good. Yeah, a couple of the other comments I think were already covered. About stage two concerns.
NRO has a comment about the different problem statement; I assume this satisfies those concerns from NRO. + +NRO: Yes. + +CDA: And the last one, I don’t know if we covered this one: KG had "importHook cannot be on ModuleSource if it is to do its job". + +KG: Yeah. I have an open issue about this, but ModuleSources need to be able to be instantiated in workers, that is, a different thread; you can’t have them carry things from the main realm into a different thread, that just can’t work. But this is a design question for some later time. + +KKL: And we anticipate that; our expectation is that the hook would be left behind when transferred. + +KG: Okay. My preference would be to do something other than that, like to have a new type or something, I don’t know, but we can worry about it later. + +CDA: All right. Just so I’m clear, "potentially updating the proposal name", this is renaming what is currently new Global to module global. + +ZTZ: Yes. That’s the intention. + +CDA: Yep. All right. I believe we have stage one. + +ZTZ: Thank you. + +CDA: No other comments in the queue. Next up, we have, oh! All right. So, I got a couple of requests here that I haven’t had the opportunity to get into the schedule or into TCQ, which I will do in a second. One is a continuation on non-extensible applies to private; why don’t we start there. + +### Speaker's Summary of Key Points + +* Presented an updated problem statement: “A way to evaluate a module and its dependencies in the context of a new global scope within the same Realm” + +### Conclusion + +* With the new problem statement and clarification to the proposal scope, the proposal was approved for Stage 1 +* Tentatively updating the proposal name: proposal-module-global + +## Continuation: Non-extensible applies to private + +Presenter: Olivier Flückiger (OFR) + +* [proposal](https://github.com/tc39/proposal-nonextensible-applies-to-private) +* [slides](https://github.com/tc39/proposal-nonextensible-applies-to-private/blob/main/no-stamping-talks/non-extensible-applies-to-private-update.pdf) + +OFR: Yes, this will be a quick thing. Let me share this page (https://chromestatus.com/metrics/feature/timeline/popularity/5209). There was some worry about this uptick here [in June]. I looked into how this graph is generated, and I think this is more of a presentation issue of the data here. So basically what is going on here is that this is counting instances where the property occurred in a dump over at httparchive.org. So this shows relative occurrence per month. This uptick has more to do with how many sites are actually indexed in that particular month. So it’s not really comparable month to month, especially with these low frequencies. So I looked at the absolute numbers, and there are somewhere between six and ten URLs in the whole of httparchive where this counter actually triggers. So that is just the update: I think this uptick here is really to be ignored, because it is essentially noise. + +MAH: Thanks so much for clarifying that. + +OFR: Yeah, now, I’m not exactly sure about the process. Can we still try to advance the proposal? Or where are we at exactly? + +MM: So we can’t advance the proposal all of the way to stage three because of my not having put together tests in time. However, if there were anything to be gained by doing so, we could ask for conditional advancement; but I don’t see what would be gained by doing so.
I will just wait until the next meeting with tests in hand. OFR, thank you very, very much; between OFR’s contribution on this and NRO’s contribution on the Babel side, the only remaining issue I’m aware of is tests. So if there are any other concerns that people have here before we ask for advancement to stage three, I would very much appreciate hearing about them. + +CDA: JRL? + +JRL: In the slides yesterday you were presenting various transforms, but I don’t think the code that we identified in the original issue was actually produced by Babel; it was subtly different. I’m curious if we went through the work to find out what bundler did it, so we can update that generator and make sure it is not going to fail once we change this. + +NRO: I’m not 100% sure now what we tried, but we had something that looked like that; it could have been Babel, it could have been something else. There was something weird there with the private field, like Babel was in a configuration like that. It is also possible to produce that output by chaining other tools. + +JRL: Yeah. My point is maybe this was produced by a different bundler; it is entirely possible it is Babel and fed through another tool. But if there is another bundler producing class static instantiation the same way, we should update that bundler as well. + +NRO: There’s swc, but we copied the issue one-to-one in the repo. I checked all of the other main tools and none of them does the same. Most of the tools compile static blocks and static fields at the same time. + +JRL: Okay. + +CDA: I’m sorry. I’m inputting stuff in TCQ and can’t see the queue right now. Forgot who was next. + +NRO: It is empty. + +CDA: It is empty. Okay. + +MM: All right. I think this topic is done. Thank you, OFR. + +CDA: Yes. Thank you. + +### Speaker's Summary of Key Points + +The June uptick at [V8ExtendingNonExtensibleWithPrivate](https://chromestatus.com/metrics/feature/timeline/popularity/5209) is noise, since the graph shows month-by-month relative occurrence of a low-count event (6-10 in absolute numbers). + +### Conclusion + +No more blockers. Modulo test262 integration, the proposal should be able to advance at the next opportunity. + +## Continuation: [~~Measure~~Amount](https://github.com/tc39/proposal-measure) for Stage 2 + +Presenter: Jesse Alama (JMN) + +* [proposal](https://github.com/tc39/proposal-measure) +* [slides](https://docs.google.com/presentation/d/1my6X1ODDckzJmtcWcFI9hRF_I06Z4RQwrq81lbo8wPM/edit?usp=sharing) + +JMN: Yeah, I think there were a few issues that came up in our lively discussion yesterday. I didn’t want to ask for a big continuation, but there was something we wanted to focus on here in plenary where I think we could get some valuable feedback. That is this discussion of Intl and the limits that it imposes on mathematical values, and how we should think about that with Amount or Decimal, or maybe this is an Intl issue. I think there is a lot of stuff we could discuss there. I think there is an issue that SFC wanted to discuss; would you like to pull that up? + +SFC: Sure. I can share my screen. Basically what I wanted to do here is gather some input on the direction to take with regards to the `IntlMathematicalValue` feedback that we got yesterday from WH. I think we had a discussion about how this is technically a separate issue, but it is also somewhat related, because, you know, one of the other open issues on the Amount proposal is this idea of limits.
And I think that, you know, thinking about how we can get a good limit for `IntlMathematicalValue` could feed back into what we choose to do for Amount. It would be very helpful to get a little bit more concrete next steps regarding the `IntlMathematicalValue` limits, which is the thing that we discussed yesterday with WH. So I guess, one thing is, if I go in here to *ToIntlMathematicalValue*, this is the current spec text in Ecma-402, and I believe this section is the problematic section. Basically what it is currently doing is it takes the mathematical value that is parsed out of the string and calls `RoundMVResult`, which basically casts it to a Number. And then, if that Number is either zero or infinity, instead of propagating the IntlMathematicalValue out, it basically clamps it, if you will, to either zero or infinity. And this means that, for example, a string mathematical value that is in excess of 10 to the 308th power is not representable in an IntlMathematicalValue. I believe that Decimal goes up to about 6,000 as its maximum exponent. + +SFC: And EAO also asked here what the underlying limits of the fraction implementation are. I can say there that the limit is on the order of about two to the 15 as the maximum exponent, which is still greater than the Decimal limit. But I wanted to clarify with WH, as well as anyone else who has feedback here, what we think a reasonable limit should be and how that limit should be enforced. + +CDA: WH. + +WH: This is a method which produces a mathematical value. It is not its job to enforce implementation limits. I would delete that step altogether and just return the mathematical value. Depending on what you do with it later, you can impose implementation limits then. I filed issues 52 and 54, which are related to this. Issue 52 also discusses what to do with large or small exponents. So— + +SFC: That’s this one here, issue 52? + +WH: Yes, that’s the one. So what do we do if we get some ridiculously large values? Like 1.234e1446 is well within the range of Decimal128. Converting it to infinity is not okay, but what should we do with such things if we’re using them for amounts? Choices are throwing, outputting very long strings, or outputting in exponential notation. I think throwing is a very bad choice, because it creates a land mine—these things are usually controlled by user input and you don’t want users inputting valid values that crash your site. What about outputting very long strings? Decimal128 alone can generate strings up to 6-7 thousand characters. It won’t crash your implementation, but it is still not very nice. Or we can output them in exponential notation. If we output them in exponential notation, there is no reason to enforce a mathematical limit there at all. Then we should have the discussion of: if we do exponential notation, and people might want exponential notation, at which point do you switch from normal to exponential notation? + +CDA: SFC? + +SFC: Yeah. In terms of what we do for these large numbers, I think that exponential notation is probably the right approach, just because we don’t want to get into a situation where it’s very easy, as it currently is, to have one line of code that causes the browser to allocate a megabyte or more of string. This is an issue that has been raised before, and exponential notation limits the length of the output to the length of the input, which I think is a beneficial property to have.
+ +WH: I agree. + +SFC: Do you want me to also pull up issue 54? I also have another topic on the queue. + +WH: Your choice. + +SFC: Let me go to my next topic on the queue. Which is actually, no, it is not the next topic. The second topic. + +CDA: I moved OFR’s up because it looked to be a reply to that topic. + +SFC: That is fine. + +OFR: It is sort of related. If I put my implementer’s hat on, I’m trying to wrap my head around what the type of this value actually is. It also seems like the idea is that you can do operations on the value, like rounding, or also addition in certain cases, I’m not entirely sure. And so, in my mind it would make sense if the value is some sort of numerical object that we already have in the system, and not something completely new that has a new, different set of semantics. So the question is basically: isn’t this value basically Decimal? + +CDA: JMN. + +JMN: It is perfectly reasonable to put the implementer hat on. We need that kind of feedback here. To emphasize one thing that Amount doesn’t do, which is arithmetic: at the moment, Amount is just a kind of data holder. If you want to think about it like a string, that’s fine from an implementation point of view. Or maybe it could be a BigInt coefficient and some kind of integer exponent; that would be another reasonable approach. But again, the thinking is that there’s no arithmetic here. At most, what you would be doing is some rounding. So I mean, if that counts as arithmetic, you can say there is some arithmetic there, but there is a very narrow range of operations involved. + +CDA: EAO? + +EAO: For the general case, the values represented within an Amount can be greater than the range of values that can be accommodated by a Decimal. The expectation is that the vast majority of uses of Amount are going to have numerical values that can be represented even by a Number. But in the general case, the specific issue under consideration here is what the maximum and minimum precision limits are. And at the moment, those are arbitrary, which means it is the same as with BigInt, where the limits of precision are defined by the implementation. So it could be that an implementation would end up choosing to limit to the Decimal128 range, but honestly that seems unlikely, because that is not able to represent all BigInts, for instance. So likely, it would be close to whatever you happen to be using currently in your implementation to hold an `IntlMathematicalValue`. + +SFC: I think I’m next on the queue. + +CDA: Yep, go ahead. + +SFC: Okay. I wanted to circle back to something that WH had suggested, which is an implementation-defined limit. If I go into, for example, the Decimal128 discussion in issue 98 (I’ll let you look at this; you can click the links to get there), the topic of whether it is better to have an implementation-defined limit versus a spec-defined limit definitely came up there. I think this is a question that is good for plenary, because, you know, having well-defined behavior seems to be beneficial here. It’s kind of odd if one engine decides to have one limit and another engine decides to have another limit, especially if the limit is fairly small. EAO, for example, says an implementation could have Decimal as the limit for an Amount; if another one chooses a much different limit, this could cause code to be incompatible across the two engines, and that seems to be a not-good outcome.
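A sketch of the interoperability hazard being described; the `Amount` constructor call and the limits in the comments are hypothetical, not part of the proposal text:

```js
// With purely implementation-defined limits, the same program could
// behave differently across engines (illustrative limits shown):
const a = new Amount("1e7000"); // hypothetical constructor call
// engine A (limit like Decimal128, max exponent 6144): throws RangeError
// engine B (limit like 2**15 max exponent): constructs the Amount
```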
So I was hoping to get some feedback there on whether it should be defined or not. Definitely when we discussed this initially in the context of IntlMathematicalValue, this issue 98, we discussed this in 2022, there seemed to be agreement that there would be a spec limit. And—yeah. This is, for example, me discussing in a TG2 meeting, and we also discussed in a TG1 meeting, that a spec-defined limit is superior. This is different from BigInt: I also found the BigInt discussion, in which implementation-defined was the outcome. That was five years before we added `IntlMathematicalValue`. Is there a queue to discuss this? Yes, there is; let’s go ahead and go through the queue. + +CDA: WH. + +WH: This has come up periodically. More than 20 years ago we had the same discussion about numerical literals: if you type a numerical literal into EcmaScript and enter a ridiculous number of digits, is an implementation required to consider all of them when converting that thing to a Number, or can it give up at some point and consider only a limited set of leading digits? And we decided to not nail it down completely. So—there is some latitude for implementations as to how many digits of precision they want to consider. Now, for this specific case of Amount, I would be very sad if any implementation chose a limit that’s less than the 34 digits that Decimal128 gives you. I think something like 100 significant digits is reasonable. This comes up for both the number of digits you might provide in your string as well as the maximum exponent. + +CDA: EAO? + +EAO: So, as I understand it, by far the closest recent historical analogy to one way of answering this question is the way we ended up with a BigInt that does not have a spec-mandated maximum limit, but where implementations have upper limits. So I don’t know if anyone can comment offhand here, but I’d like feedback in this context, for Amount in particular but also to some extent the `Intl.NumberFormat` limits, on whether there are any concerns, not necessarily obvious from an implementer's point of view, that ought to be taken into account with, for example, removing the limits in `Intl.NumberFormat` and not adding limits to Amount in the spec. + +CDA: NRO? + +NRO: Yeah. Should we talk about limits on the magnitude or on the number of significant digits? Because if I store things as, let’s say, a BigInt and an exponent, and I have something like 1e(one trillion), that represents a huge number, but it doesn’t require that much memory to represent it. Whereas for someone to write down a number with hundreds of significant digits, you need to have the string with hundreds of digits there. WH? + +WH: The answer to your question is both. We need to consider both. The reason I want to have exponential notation is that we don’t want amplification. If someone types in “1E1000000000000”, we don’t want to generate a string with a trillion digits, it is better to output that in exponential notation. So, as long as the number of characters of the output is not significantly bigger than the number of characters in the input, I’m happy. But I am also okay with implementations defining some limits on both the exponent range and on the number of digits in the mantissa, as long as those limits are at least as high as the range provided by Decimal128 for both of them. + +CDA: SFC. + +SFC: Yeah, to respond to NRO’s question.
One difference here is that in every implementation I’m aware of that stores these large numbers, the exponent is typically stored as an integer, like a small integer, and then the mantissa is stored on the heap somewhere. Right? Which means that the limit for the exponent is necessarily much smaller, because it doesn’t have the ability to overflow to the heap. And the limit on the mantissa is more a question of how much memory you want to allocate to store all of those digits. + +CDA: All right. + +SFC: Is there anyone else in the queue? + +CDA: Nope. + +SFC: So, I guess my conclusion here, or the next steps rather, not a conclusion, but more next steps: I think we should go back and look again at what more reasonable limits would be. I think WH gave us good goalposts there, which is useful. I think this discussion about implementation-defined versus spec-defined is not a question that has been completely resolved. But I think in terms of giving at least a reasonable minimal supported range, that is something that I think we should all hopefully be able to agree on. I plan to follow up with maybe a pull request in the next meeting or in a future meeting; I’ll try to target a meeting later this year to address this question so that we can continue iterating. We have a number of issues now, the one that I filed and the ones that WH created, to continue this discussion on GitHub. + +CDA: Great. + +WH: Sounds good. + +CDA: So SFC, am I correct in assuming that we bundled your request to discuss the revisiting-limits issue into this one? + +SFC: Yes. That’s my understanding. Unless JMN has anything else to discuss in the decimal continuation. + +JMN: Yeah. That’s right. I was stepping on SFC’s toes with the continuation. There are a number of other issues, but as SFC said, we can iterate on these in the champion group. We don’t need to hash those out in plenary right now. + +### Speaker's Summary of Key Points + +* Discussed the limits that *ToIntlMathematicalValue* imposes on mathematical values, and how they relate to the open question of limits for Amount. +* WH recommended dropping the clamping step from *ToIntlMathematicalValue* and, for very large or small values, formatting in exponential notation rather than throwing or emitting very long strings. +* Discussed spec-defined versus implementation-defined limits; WH would want any implementation limits to be at least as generous as Decimal128 (34 significant digits, and its exponent range). + +### Conclusion + +* SFC will follow up with a pull request at a future meeting, targeting later this year; discussion continues on the GitHub issues filed by SFC and WH. + +## Import Buffer—reviewers + +* [proposal](https://github.com/styfle/proposal-import-buffer) + +CDA: All right. Last order of business for the day, stage two reviewers for import buffer. I think that STY, the champion, is no longer here. But we need to do this anyway, and we can do it without him. NRO has volunteered to review. We need at least two. + +NRO: We need a reviewer that is not active in the— + +CDA: I don’t think that I caught that. + +NRO: It would be great to have a reviewer that was not active in the module harmony group. + +JSL: I can as well. + +CDA: JSL—EAO? + +EAO: I opened my mouth about the `type: text` thing, so maybe it’s fair that I review this for 2.7. + +CDA: Okay. Designated reviewers: NRO, EAO, and JSL. Perfect. + +### Conclusion + +Designated reviewers: NRO, EAO, and JSL + +## "write your own comments" continuation + +Presenter: Kevin Gibbons (KG) + +KG: Yes, I just wanted to say that I opened a draft PR to the how-we-work repository adding my proposed policy for the use of LLMs in authoring your comments. I tried to incorporate the feedback about explicitly allowing the use of LLMs for proofreading, but I’m open to other thoughts there; anyone who is interested in that topic, please take a look at the how-we-work repository. Thanks. + +CDA: I just have a comment on that. I think it is fine to work on it there; there is a separate question of whether people want it in the code of conduct or not.
Assuming it is not going in the code of conduct, is how-we-work where it would go otherwise? My understanding of the how-we-work repo contents was that the committee explicitly decided we didn’t need plenary consensus for what is in that repository. And if this is something that’s going to be enforceable via the code of conduct, how-we-work may not be the best place for it. It is probably fine if we all agreed to it, but if it didn’t go there, where else might it go? Like, could it go in the contributing guide for 262? + +KG: I have definitely put normative conventions in there, and those are things that we do have consensus for. So I don’t think it is true that the repository holds only things that don’t require consensus. Some of the things, certainly, but not all of them. + +JHD: Additionally, once the document exists, wherever it lives, we should be linking to it from multiple places, including the 262 contribution guide and so on. + +CDA: Okay. Great, thanks for doing that, KG. MF? + +MF: Yeah, this is about posting content: would we also want the issue template and pull request template to include links to it? And does it have to be anywhere other than that place? I don’t actually know where those live. They’re in some shared repo. Right? + +CDA: PR templates. + +MF: Yeah, issue templates and PR templates, they are in the shared repo. Right? + +CDA: No, I don’t believe they are, I think they are specific to the repositories they live in. + +MF: Okay. Is it not handled like security? Security is in the shared repo. All of the repos get the same security policy. I’ll get off the queue. + +AKI: It is the .github repo at the org level. Yeah. Yeah. + +MF: Thank you. + +CDA: Yes, the CONTRIBUTING.md, SECURITY.md, CODE_OF_CONDUCT.md. They live in the .github repo as what GitHub calls community health files; they will propagate automatically into all of the repos of the org if there are no files of the same name. But there are no issue templates in the .github repository. So any issue templates are in their respective repositories. + +MF: Okay. They should be in this repository; we’re mostly seeing this abuse in GitHub issues, pull requests, and ES Discourse posts. + +CDA: Okay. Sounds good to me. Any other parting thoughts before we adjourn for the day? + +AKI: Yes. + +CDA: Okay. Yes, go ahead. + +AKI: Just a reminder, everyone—sorry, my washing machine is going. A reminder, everyone, to please double-check your summaries and conclusions. Make sure that they make sense. Make sure that everything you said today and yesterday made sense. If you gave a verbal summary and conclusion, go and make sure it is readable, ideally in bullet points; I don’t think that is mandatory, as long as it is short enough to be readable. Just that. Thank you. + +CDA: Thank you, AKI. All right. With that, we will give everybody some time back and see everyone tomorrow. Thanks, everyone.
diff --git a/meetings/2025-07/july-31.md b/meetings/2025-07/july-31.md new file mode 100644 index 0000000..e0f7700 --- /dev/null +++ b/meetings/2025-07/july-31.md @@ -0,0 +1,711 @@ +# 109th TC39 Meeting + +Day Four—31 July 2025 + +**Attendees:** + +| Name | Abbreviation | Organization | +|--------------------|--------------|--------------------| +| Waldemar Horwat | WH | Invited Expert | +| Sergey Rubanov | SRV | Invited Expert | +| Daniel Minor | DLM | Mozilla | +| Dmitry Makhnev | DJM | JetBrains | +| Istvan Sebestyen | IS | Ecma | +| Jordan Harband | JHD | HeroDevs | +| Zbyszek Tenerowicz | ZTZ | Consensys | +| Chris de Almeida | CDA | IBM | +| Daniel Rosenwasser | DRR | Microsoft | +| Eemeli Aro | EAO | Mozilla | +| Samina Husain | SHN | Ecma International | +| Aki Rose Braun | AKI | Ecma International | +| Olivier Flückiger | OFR | Google | + +## Opening & Welcome + +Presenter: Ujjwal Sharma (USA) + +USA: Good morning. Hello. To start, can we ask for two volunteers for notetaking before we begin with topics? This should be the final session of topics for this meeting. I see that people are filing in. 28 of us are here already. We need two. So that’s great odds, I guess. If any of you would be willing to do the first half or the first presentation, or just these next 30 minutes, that would be fine as well. And we could rotate as we go for the next four— + +JRL: I can help with notes. + +USA: Thank you, JRL. Can we have one more person who would be ready to go and help JRL out as well? I see that people are still coming in. Perhaps some of our newly joined delegates would like to help out with notes. Also, looking at RBR: in the meantime, would you like to try out screen share and see that everything is in order? + +USA: First off we have `Object.propertyCount` for stage two. So, in the meantime, would somebody be so nice as to volunteer for taking notes for either this slot or any fraction of it? There are four equally long sessions. + +MF: I can do the first hour, this is MF. + +## `Object.propertyCount` for Stage 2 + +Presenter: Ruben Bridgewater (RBR) + +* [proposal](https://github.com/tc39/proposal-object-property-count) +* [slides](TODO) + +USA: Okay. You can start. + +RBR: Thank you very much. All right. So last time we spoke about the proposal for `Object.propertyCount`, and we decided on stage one in this case. There were some questions around details, but I believe the overall idea for the most common use cases was agreed to be good. Now, after giving that some time, I saw it is probably good to try to limit it a little bit more, to fewer use cases, and to have separate proposals for some of the other aspects, because that is more explicit. The problem statement that we talked about last time is still the same: it is mainly to overcome performance and correctness issues, and in this case for a variety of different use cases and algorithms. So input validation, for example, guarding against too-big input objects; or you want to compare objects; or you have telemetry data and want to be able to compare that easily. + +RBR: Many algorithms need some kind of fast path in this case. Generally, I am looking at, for example, `Object.keys().length`; that just pops up everywhere. The original options also made it possible to detect string properties on live objects, and it would have been possible to detect dense or sparse arrays as well. So, these were originally some of the aspects that were possible with it.
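For reference, here is the core pattern under discussion, and the rough shape the proposal sketches for it (the exact API is still in flux):

```js
const LIMIT = 1000;
const payload = { a: 1, b: 2 };

// Today: counting own enumerable string keys allocates a throwaway array.
if (Object.keys(payload).length > LIMIT) {
  throw new Error("payload too large");
}

// With the proposal (sketched default behavior): the same answer
// without the intermediate array allocation.
if (Object.propertyCount(payload) > LIMIT) {
  throw new Error("payload too large");
}
```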
Because, like I showcased in the proposal itself, there is a big list of usages for different things, in very popular modules or frameworks like Angular, React, etc., and Node, and Lodash. There is really a lot of usage of a couple of these patterns. + +RBR: We did discuss whether something like, for example, the non-index properties is common or not, and whether to have extra properties to check for. I wanted to highlight in this case that with regular expressions, with `match` or `matchAll` on something, we actually receive an array with non-index string properties on it, like `index`, `groups`, etc. So it is actually something that is not too uncommon; even the spec itself produces such arrays. + +RBR: I just want to briefly go through the examples again. We have only-enumerable symbols, for example: these cases do care about receiving the enumerable symbols, so it is not only about getting the count of them, but they actually want to get the symbol properties as well. We do have a couple of checks where they only care about the length for some comparison; that is in React, for example, and Next.js. For example, for non-symbol properties only the length is checked, for example in React; and checking lengths just generally, for "is there a property", is done in VS Code. I did not put any examples in the slides at the moment for anything that would reflect `Object.keys`, because the list would be too big; there are more cases and more examples than for the others. I just wanted to highlight these again. We do have array index checks in Lodash, and Node.js is definitely something I have concrete examples for: anything in Node.js that is doing `console.log`, which allows you to inspect values, would use that to differentiate enumerable versus non-enumerable symbols, for example, and the `deepEqual` and deep-equivalence parts. So, these all use that. + +RBR: As a conclusion from that, we can say: okay, last time we already agreed that the most common use case is `Object.keys().length`, because there is probably some usage of the pattern in any code base. I believe that’s why I don’t have to highlight it anymore, because of the very, very high frequency of it. What we also have, however, are symbol length checks (enumerable symbols, non-enumerable symbols, and both), and index properties are checked for in multiple cases. It is definitely something we can see in the wild. + +RBR: Nevertheless, I wanted to simplify the current proposal. So I’m trying to push out the non-indexed properties and also the sparse array detection as such, because in this case it is implicit. That way we can have separate proposals for these aspects and just simplify the overall options of the proposal. + +RBR: I believe each of the proposals provides benefit on its own, as does the current, now slimmed-down `Object.propertyCount` proposal; while altogether they can provide an even bigger benefit if used alongside each other. + +RBR: Then—yeah. One of the benefits, one of the discussions, at least in a GitHub issue, was the difficulty of defining what a non-index string property looks like, because we have different handling on typed arrays versus arrays. That is something that would not fall in here anymore, and we can just concentrate on the most common use cases. We also have explicit checks for "is an array dense or sparse", which we could then handle instead. + +RBR: And otherwise, everything is pretty much as before, right?
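An example of the spec-produced arrays with non-index string properties mentioned above:

```js
// RegExp match results are arrays that also carry non-index own
// enumerable string properties:
const m = /(?<year>\d{4})/.exec("since 2025");
Object.keys(m);  // ["0", "1", "index", "input", "groups"]
m.index;         // 6
m.groups.year;   // "2025"
```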
It should only handle own properties, and enumerability is defined by the `enumerable` parameter in the options. And it should avoid intermediate array allocation and be optimizable, even if, as literally written in the spec, that’s not the case. The high-level spec is pretty much still the same, because there was no big change besides the key types: instead of having index and non-index string ones, we now have just string as a key type, as a combination of them. That is all that changed. I can also show the concrete spec if wished; I believe that’s something we can read individually, though, so I would only open it if that is explicitly requested. + +RBR: Just to highlight the use cases again. We have improved readability and explicit intent when we want to receive the number of properties on any object type. It should be significantly faster in many, many cases, and as such also reduces memory overhead and simplifies the code as well. That is pretty much it as background. + +RBR: Like I said, I—wanted to really just focus on slimming down the original proposal. And that’s why I didn’t go into as much detail about the complete spec as last time. If that’s requested, we can go into that. Otherwise, is that addressing the originally mentioned issues for everyone? Like I said, I’m going to showcase solutions for the removed parts in the following proposals afterwards. + +USA: On the queue we have a few topics. First up we have NRO. + +NRO: Yeah. Hi. This is not a new topic, but many companies in this committee have meetings the week before plenary to go through the agenda and look at proposals and think of a shared position. And for this proposal there are no slides in the agenda, and the proposed semantics changed five minutes before this meeting; I just saw there was a PR like 30 minutes ago. So it was very difficult for me to coordinate a company position, because it was just not clear what was going to be proposed. + +RBR: Yeah. The PR was up longer, but I didn’t merge it. That’s true. I’m sorry, and I have to generally apologize in this case: I am currently on vacation and I had very little Internet access during that. + +NRO: Yeah. I also have concrete topics, but there is MM on the queue before me. + +MM: So my concern is that the next three proposals, also by RBR, or by RBR and JHD, seem to be very related to this proposal. In fact, they even say they are related. + +RBR: Yes. + +MM: So I would like to at least understand the problem statements of the next three proposals before being asked to make a decision on advancement of this proposal. I’d like to be able to consider them altogether as a set of closely related goals that perhaps could be more pleasantly addressed with an API that covers all of them, or not. I mean, I just don’t understand yet. + +RBR: Uh-huh. So I would be fine with that, if everyone else is fine with just continuing with the others, and we can discuss them at the end. + +MM: Good. + +USA: Would anybody be opposed to that, asking for consensus altogether at the end of the four presentations? + +KG: I think it is fine to ask for consensus for the various pieces individually at the end. I think it is pretty likely that we’re not going to be interested in advancing all-or-nothing though. There might be some subset. + +MM: I’m not suggesting that we advance all-or-nothing, I’m just suggesting that I would like to understand the other three before being asked to advance this one.
+ +RBR: Uh-huh. + +USA: Also I see support from MF on the queue. So let’s proceed that way then. However, should we go through the queue of this item before moving on to the next one? + +MM: I believe so. + +USA: All right. RBR, if that is fine by you, then next on the queue we have NRO. + +NRO: Yeah. So can you say more about what the use case is for counting only non-enumerable properties? It is clear to me why there is a use case for counting only enumerable ones and counting all of them. But looking into the examples linked, I could not find a case of non-enumerable only. There are cases of getting the enumerable properties, and you need the non-enumerable ones as well to have all of them; but nothing that wanted just the non-enumerable ones. + +RBR: I believe we do have cases where we want all of them; I found those. And I did find that it is a lot more common to have a filter for only enumerable ones. What you’re saying is that I am missing examples for only non-enumerable ones. I agree, and I believe that is a very rare use case. It feels more like a question of what an API of this fashion would look like if we just don’t do the non-enumerable-only case. Because I believe it is simpler to just have the option one way or the other; even from an implementation standpoint, it should not really be much different. + +NRO: Okay. The reason I’m asking is because I find it weird to have this `enumerable` option that is either a Boolean or the string "all". If we want to keep the three states, it probably should be an enum with three strings; but if there isn’t a use case for counting non-enumerable properties, this could be just an "include non-enumerable" Boolean, without having the three states, if we only need two of them. + +RBR: That is actually a very good point. JHD and I discussed the "all"-versus-Boolean aspect; it is going to come up in the other proposals as well. I believe that is a good point, and I’m also totally open to having a string instead. I don’t believe there is any case like this anywhere in the spec before, so it would definitely be new if we introduced a Boolean-or-string overload, and I’m totally open to alternatives as such. I believe as long as the overall functionality is still provided, it wouldn’t matter. We could subsequently remove the non-enumerable case, even though, as I said, when I see an API like that, I would normally expect to have one side and the other side as well. + +NRO: Yeah. I mean, if the option had a different name, like the one suggested, then true or false I understand. I agree that true-or-"all" is weird. But there are multiple topics, so let’s go through the queue. + +RBR: Uh-huh. + +JHD: Yeah. So I just wanted to add, I agree with what RBR said: the use case for getting the number of only non-enumerables is rare. I don’t think it never happens. I think sometimes you do want to know if there are non-enumerable properties on an object, but it is more like we definitely need enumerables and we definitely need everything. So we have those two things, and it would be a really weird API in my opinion to have one subset and the entirety, versus either just the two subsets, or the three states of either subset or both. So it doesn’t make any sense to me why we would include just enumerables and all.
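The option shapes under debate, side by side; all names and values here are illustrative, not settled:

```js
const obj = {};

// Boolean-or-"all" overload (the current sketch):
Object.propertyCount(obj, { enumerable: true });   // enumerable only
Object.propertyCount(obj, { enumerable: "all" });  // enumerable + non-enumerable

// Three-string enum (the suggested alternative):
Object.propertyCount(obj, { enumerable: "enumerable" });
Object.propertyCount(obj, { enumerable: "non-enumerable" });
Object.propertyCount(obj, { enumerable: "all" });
```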
If we only have two options, it would be enumerables and non-enumerables, and to get all you add them together. + +USA: We have a reply by MF on the queue. + +MF: Yeah. So I think everything we add to the language should be able to be justified. We had this discussion before with that TypedArray getter, whatever it was. I think we made a mistake there as well, just by trying to fill in a matrix when there is no reason to have that thing. It was very clear the last time this occurred. This time I’m not, you know, 100% sure that there are absolutely no use cases for the option we would omit, but I don’t want to add things to the language that don’t have justification and don’t have existing people using them. We wouldn’t do that if this feature was standalone. It shouldn’t change things just because it’s part of some option matrix in the current proposal. + +JHD: Yeah. I completely understand that philosophy, and I think that with the TypedArray getter one, the entirety of the purpose was to fill in the matrix. So your philosophy is completely appropriate to apply there. In this case, it’s only partially appropriate because, as I was describing, it is a weird API design to have only enumerable and all. And if we have only enumerable and non-enumerable, then we made the all case less ergonomic and also perhaps less performant. And so having all three, you know, is not weird to me, whereas just having enumerable and all is. Does that make sense? I don’t know, it feels like a very inelegant API if it just has enumerable and "all". + +MF: I do understand how it can change the ergonomics of the design. But everything I said still stands. Whether this is introduced as a standalone proposal or introduced as part of some matrix in a proposal with many other things, I don’t think that would change our criteria for whether we include it. I understand that might lead to awkward APIs, and that is a trade-off we have to consider for those use cases. + +RBR: So, I believe if I look for it hard enough, I will probably find some; that is my expectation, but I do expect it to be a rare case. It is not a case that would never come up; that’s not the situation. And as JHD said, if I would choose, let’s say it that way, I would probably just use three states, which is definitely my favorite because it has the least performance overhead. Or it would only be possible to get either enumerable or non-enumerable ones, in which case to get all you do two calls in the end, which is also a bit weird. + +USA: We have SHS on the queue. + +SHS: I’m going to plus one what ?? is saying. + +USA: And KG. + +KG: I’m going to plus one what MF is saying. We have `Object.keys` and `Object.getOwnPropertyNames`; these give you the only-enumerable case and the all-string-keys case, and there is no `Object.getNonEnumerablePropertyNames`. So two of the three things in the matrix is the current state of affairs, and that is a perfectly good state of affairs. + +JHD: I don’t think that is a good thing, it is gross, and they are badly named. + +KG: I agree the names are bad. I don’t think there is anything wrong with only having these APIs. These are the APIs people use; we don’t need to provide additional APIs just to fill out the matrix. There are probably two people in the world that would use them. That’s not a good enough reason. + +USA: Okay.
Speaking of which, there are less than 10 minutes left. So, you need to be faster. Next on the queue is JRL. + +JRL: Responding to JHD’s point here: he is arguing to have an API that returns enumerable and non-enumerable, and only those two subsets. That is going to break the fast path where you can do an O(1) lookup to see how many properties you have; that will only work for "all". I don’t think we should design an API that can never hit the fast path. + +USA: Next we have NRO. + +NRO: I was suggesting the two states be not enumerable and non-enumerable, but enumerable and all. The option could then be a Boolean, with true meaning everything instead of just one subset. And for the other case, the non-enumerable ones, you can take the difference between two counts; that’s just the rare case. The weirdness of an overload between a Boolean and a string constant is higher than the weirdness of that workaround. + +USA: Next we have KG. + +KG: Yeah, I also want to ask about motivation for the string versus symbol versus all. I couldn’t find any examples in my code bases where this was a thing that mattered. In fact, literally the only examples I was able to find were `Object.keys().length`. I’m happy with a proposal that addresses `Object.keys().length`. I have not found much justification for the rest of it. Separately, as a question of the API design: I don’t really like having this options bag. I would prefer to have separate methods, at least for string versus symbol, as opposed to a string-versus-symbol-versus-all option, because it is more discoverable and matches the existing methods. And in fact, a sort of obvious way of doing both of these things is that we can have `Object.keysLength`, and `Object.getOwnPropertyNamesLength`, and `Object.getOwnPropertySymbolsLength`, and just be done with it at that point, and not have options bags and not have renames and not have to worry about filling in the matrix. We would just be taking the existing things and saying: when you are using the existing things, but you want the length of the thing and not the whole array, we have a function that gives you that. + +RBR: So about the options. That is something JHD and I discussed: for example, having individual Boolean options for, say, symbols and strings. The difficult part was that the defaults would be opposing Booleans in this case, because the string properties option would default to true while symbols would default to false, to represent `Object.keys` by default. And it felt a bit off to have multiple Boolean options whose defaults are not all identical. + +JHD: Yeah, in general, having two Booleans, which cover four states, to describe three states is not a good option design. But to your point, Kevin, you’re saying no options bag at all; that sidesteps the question of how the options bag should look. If options bags are seen as a bad thing, and you see separate methods as more discoverable, both of which I disagree with, but assuming those are the case, then the path you’re suggesting certainly is the most straightforward. But given— + +KG: It is definitely more discoverable. As to whether they are bad in general, I don’t want to offer an opinion on that. But I want to point out, these are very, very close cousins of existing APIs. + +JHD: That’s true. But on discoverability: I don’t think people would realize that a separate method like that exists. But if there is an options bag on `Object.keys`, everyone would know about it.
And IDEs, autocomplete, and pop-ups would fill in the options bag information, and that’s usually much more discoverable than having to switch to a completely separate method to even know it exists. So I think actually the opposite of what you’re saying is true about discoverability. And then separately, the existing names are inconsistent and gross and weird. + +KG: They are the existing names. + +JHD: Yes. Yes, yes. We have done that; for example, we keep the receiver argument on the array methods that take callbacks for consistency, even though it is gross and no one wants to use it. That’s a trade-off we do have to make sometimes. But it is not something we are shackled to. We can also decide that the existing name, the existing pattern, is gross, come up with a new better pattern, and just decide to be okay with the inconsistency because the improvement is so much better. Now, that may not be worth it in this case for you; that is a position to have. But like—yeah. I think that continuing—looking at four or five new methods with gross names that are inconsistent with each other just because those mistakes were made a decade-plus in the past seems unfortunate to me. + +KG: So I have two specific things I would like to hear responses to. The first was, I do want to hear more about the justification for providing this functionality at all beyond just `Object.keys().length`; like I said, the only cases I have been able to find in my own code bases were `Object.keys().length`. If you want to do more than that, it needs to be separately justified. I’m completely fine with the justification for `Object.keys().length`, but not beyond that. The second thing, a concrete suggestion for a way to go about creating this functionality: to have, for example, `Object.keysLength`, which would be literally `Object.keys(…).length`, which I think is very clear. It matches the methods that people are already familiar with. It answers the question of which functionality to provide: given there is a handful of methods that provide you an array of property names, we would tack on a new version of each of those and be done with it. I would like to hear responses to those two specific suggestions. + +RBR: If I may, let’s start with the latter one first. I actually would like to add an option (that’s one of the later proposals) for `Object.getOwnPropertySymbols` to only return enumerable or non-enumerable ones, and that hopefully implicitly addresses your questions around that. Because when I look for it, it’s not as common; `Object.keys().length` is something you almost don’t find any code base without. But if you look for the symbols one, you will find it in almost all popular modules as well, at least once. And as such— + +KG: That was not my experience. I looked at several places and I think I found one place that needed the number of symbols. + +RBR: Like if you want, we can have a look at it. + +KG: Yeah. I don’t think that has to be done now. Just, having a bunch of examples in the README would go a long way. + +RBR: I did; there is an issue open about it where I already mentioned some, and I did have some in the slides now. And I can just add a commit to add a couple of the examples to the README in this case. + +USA: All right. We’re at time for this particular session. We have two responses to KG on the queue. And a new topic by MM.
How do we want to proceed? Do you want— + +MM: I can postpone my topic until the end of all of the presentations. + +USA: Thank you, MM. And what about the responses to this? Can we come back to them later? + +CDA: These are just end-of-message, if you just want to read them. + +USA: Okay. "Plus one to JHD’s argument on discoverability. That said, I don’t see a need for optimized counting of anything other than enumerable own keys"; that was mentioned by ZTZ. And JHD said `Reflect.ownKeys(…).length` appears a bit too, although nowhere near as often as `Object.keys`. So those were the two responses, and we can move on with the next item. + +### Speaker's Summary of Key Points + +* RBR presented a slimmed-down `Object.propertyCount`, with non-index string property counting and sparse-array detection moved out into separate follow-up proposals. +* Discussion focused on the shape of the options bag (a Boolean-or-"all" overload versus a three-string enum) and on whether counting only non-enumerable properties is justified by real-world usage. +* KG questioned the motivation for anything beyond `Object.keys().length` and suggested length-returning counterparts of the existing methods (e.g. `Object.keysLength`) instead of an options bag. + +### Conclusion + +(see end of day) + +## `Array.isSparse` for Stage 1 + +Presenter: Ruben Bridgewater (RBR) + +* [proposal](https://github.com/BridgeAR/isSparse) +* [slides](TODO) + +Reply: See comments below, it can _sometimes_ be O(1) but is sometimes O(N) EOM + +RBR: Yep. Yeah. Okay. So, `Array.isSparse`. And, to say it as the very first thing of this proposal: I’m not certain if `Array.isSparse` is the greatest name, because maybe it should be called `Array.isDense` instead. But let’s have a look at it. + +RBR: So, I do want to detect if an array is dense or sparse, because when you want to write very correct code, you have a high performance cliff to address, and detecting whether an array is sparse is quite difficult in that case. Very briefly, I believe everyone in this group knows what a sparse array is, but nevertheless: we are talking about an array that has an index between zero and the length of the array where it doesn’t have a property at that index. So there is a hole, which is not even represented with an undefined as such. + +RBR: And the motivation: well, you have a correctness pitfall and potentially slow code when you’re using sparse arrays, because they need more handling to be dealt with properly. Some code reinvents the wheel to check for that. Some just accept it, while it can actually have weird outcomes, and some methods, for example `sort`, can have a different outcome. And adding `Array.isSparse` would handle one of the cases we removed from the `Object.propertyCount` proposal, because with that it was implicitly possible to know if an array would be sparse or not. Now it is just explicitly handled by an explicit proposal, very clear: a static method. You know if it is sparse or not, and you can go on. Sparseness is defined by having a hole or not. There’s one special case: when you just create an array, for example with `new Array(100)`, and then fill it up afterwards, that would still be non-sparse in the end, even though it started out sparse. It should be very trivial to implement and get optimized code in userland. + +RBR: The algorithm is straightforward: we check if it is an array, otherwise return false. We could theoretically throw in case a non-array is passed in; I wouldn’t mind that, that is up for debate, I would say. And then we just go over the length and check for each of the indices being present: if one is missing, the result is true; otherwise, it is false. + +RBR: We already have differentiation of internal element types in V8 and SpiderMonkey, and also JavaScriptCore as far as I know, to handle sparse arrays, so returning that information should be straightforward.
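A minimal userland equivalent of the check being described, O(n) without engine support, and treating non-arrays as `false` rather than throwing (which is still up for debate):

```js
function isSparse(arr) {
  if (!Array.isArray(arr)) return false; // or throw, per the open question
  for (let i = 0; i < arr.length; i++) {
    if (!Object.hasOwn(arr, i)) return true; // hole: index is not an own property
  }
  return false;
}

isSparse([1, , 3]);      // true: hole at index 1
isSparse(new Array(3));  // true: no indices defined yet
isSparse([1, 2, 3]);     // false
```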
Only proxies would be slow in this case. + +RBR: And as for examples of what it would look like: that would be something to debate. For example, if we want to handle array-like objects, that is something we can have a look at; it would definitely help for correctness in a couple of cases. Because when we have a hole, in this case, for example, we would just get a NaN, while a sparse array would keep the hole as such. And what is the hole? NaN is very clear: not a number. The hole is a bit difficult to even understand what it stands for. And there are a couple of edge cases that are difficult to handle even spec-wise, I believe, in `sort` and such. + +RBR: In V8, for example, there is a very nice blog post about the issue: whenever you access an entry outside of the regular indexes, any code that handles that part will actually run slower. So it’s definitely a performance cliff that we want to prevent, and checking for that is positive overall. + +RBR: It can be used in data validation, and when we are serializing, we want correctness; we want it to be watertight, otherwise something would leak through the hole. Okay. Where do we use it at the moment, or check against situations like that? For example in Node, all the logging, all the assertions, and Lodash; they all check for holes. Lodash definitely does that. Jest does it for snapshots and diff logic. fast-deep-equal is a library to compare objects, and the same for deep-equal. object-inspect is the logging and serialization part, and it handles it. And there is a lot of that actually in polyfill code. Now, for example, polyfill code wouldn’t become better by adding a new API, but in theory, new future polyfills could actually be more performant once that API has existed for a while. That is definitely a likely use case, I would say. + +RBR: Like I said initially, we could name it differently, for example `Array.isDense`. There is prior art in other languages as well; that’s why I believe `isSparse` is good. For example, in scientific Python they have sparse matrices, and also when you work with Android, there is a SparseArray class. So you can definitely have a similar thing as in JavaScript in this case. And they have it for performance reasons, and it always depends on trade-offs in the algorithm or the data structure used whether one or the other would be faster. But this dedicated class exists in case you want to have something like that. And I also already mentioned the internal types: for example, in V8 or SpiderMonkey you have something like fast or packed elements kinds versus holey—element types, etc. + +RBR: And we would just expose already-existing information as such for users, which they can also determine on their own. So we just want to have faster code, which we can use to actually get correct code to run faster, or to have very secure code also run faster. And we don’t care about sparse arrays normally; we want to prevent sparse array usage. That is actually the major goal. So that’s the simple proposal here. It’s—I don’t know what questions exist, because next steps would be getting input, addressing comments, and deciding whether we want to go this way or not; the alternative would definitely be the differentiation of index versus non-index string properties on `Object.propertyCount` that I mentioned. + +CDA: WH.
WH: Is the intent that this method do the work in constant time, or would you want to provide this method even if it requires a linear scan through the entire array?

RBR: So right now we always have to do the linear scan. There is no way around it. As such, I want to have it: it is an explicit API, and it makes it very simple to read and understand. And performance would not be slower than with the canonical way of writing it. Ideally, and I believe that would be the case in pretty much any engine, they can optimize it to an O(1) access.

WH: I’m not sure I agree with this. If this were slow then people might not do an extra sparse array test and instead just run their algorithm directly, avoiding an extra scan.

RBR: That’s the major problem. So, for example, I’m the maintainer of assert in Node.js, which is also used for deep equals; it's the same algorithm, just, instead of throwing, it returns true or false. And there, we have had that check for, I believe, forever pretty much. And we actually have a very high overhead explicitly for detecting if an array is sparse. In this case I optimized that check away by doing a very crude check: first I check if a value is undefined while I iterate over the array anyway, and only if it is undefined do I check the enumerability, because the enumerability check is quite expensive. So I try to really push this case out with very crude code, because it is so slow otherwise. And mostly no one does that. No one would even check for something like that. They would just check for existence with enumerability, and that is very, very expensive.

WH: Okay. My concerns are about the usability of this. There are a couple aspects. One is the polarity of the question this asks. If this thing returns true, then you’re guaranteed to have holes in the array. If this thing returns false, then you may or may not have holes in the array. So you haven’t really learned much.

RBR: Why?

WH: Because of array-likes, such as the examples you have on the previous slide.

RBR: So, actually in this case, because we—because I suggested not to handle the array-likes, you mean?

WH: Yes. There are three possible results. One is you know it is dense. One is you know it is sparse. And the third case is, it’s a maybe. And this conflates that maybe with dense instead of sparse.

JHD: But if you use—

WH: I think it is a problem.

JHD: If you use `Array.isArray` that distinguishes between the two cases.

WH: Yes.

JHD: Given this is a predicate, it can only return two states; it is very bad for a predicate to throw. I’m not sure how to support that.

WH: The fix for that is that most of the time you’re interested in knowing for sure there are no holes in it. If you called this isDense, you could return false on things which are not arrays, and it would be fine.

JHD: Like isDenseArray or something, yeah. I mean, I think that would solve the same use case, sure.

WH: Well, that would avoid the pitfall of it returning false and you forgetting—

JHD: And also.

WH: —and forgetting to check it being an Array and then operating on a sparse array thinking it is dense.

JHD: That is a fair point, yep.

WH: Going back to the other usability point, by having this as a built-in method, you’re encouraging folks to call this a lot more than if they have to do quite a bit of other work to implement it.
You can imagine libraries calling multiple functions, each calling `Array.isSparse` internally. What they should be doing is calling it only once and caching the result.

RBR: Well, but I mean, programming mistakes are there all along. And I would imagine that an API like this especially would be used where people also understand what it stands for. And otherwise, they probably don’t use it.

WH: I’m not sure how that relates to the point I just made. My concern is that, if this thing is O(n), you want to encourage people to call it once on an array and cache the result, rather than calling it inside each method you call on the array.

CDA: There are a lot of items on the queue.

WH: Okay. Let’s continue.

CDA: Can we go to MAH?

MAH: Yeah, `Array.isSparse` is insufficient for proxies, for example. Even then, it is a point that MF was making a little bit earlier: I’m not convinced an array that was sparse and is made whole again, if you want, can actually be considered dense, or will actually be implemented as dense, the way the programmer using this API would want to know. I don’t think there are any ways of differentiating between the two sparse responses.

RBR: I’m not sure I totally got that. So, right now it would be made whole again. If you start with a sparse array and you add the entries later on, it would be considered fine. That is correct, with this API.

MAH: With this API, but I don’t think—a programmer using this API would expect, if it is not sparse—WH’s point—if it is not sparse, they would expect some kind of faster behavior, probably. Possibly. Which cannot really be guaranteed with a sparse array being made whole again.

RBR: Correct.

CDA: All right. There’s a comment from SHS: sometimes O(1) and sometimes O(N). KM?

KM: Yeah, I think there might be a misunderstanding of how JSC works. There is a sparse mode: if you have indices that are very far apart, say an index at 10 million, at 5, and at 3 billion, with nothing in between, it turns into a hash table. So this would always be O(n) in JSC, and it would probably be a pretty non-trivial amount of work to make it do the same thing as V8, and adding that simply for this check would probably not be something we would want to do. And I think V8’s implementation, if I understand correctly—actually, I would like OFR to state it, he is next on the queue—still has this problem. You can go to holey, but that will not convert back once you become dense again; you always have to check if there is a hole in it. I don’t think developers will be aware of this; they think this is a simple check that the engine can fold away and do in constant time. It seems like a lot of the motivation of this is trying to avoid a lot of the performance issues around holes and the prototype chain walk. If I were to see any proposal, I guess I would rather see something that converts arrays into TypedArray-like things where index accesses just stop forwarding to the prototype, and it is an opt-in thing, so you don’t have to worry about holes anymore. If you have a hole, it is filled in with your value, and you don’t have to worry about someone intercepting your access. That would be a lot of work for engines, but it feels like what a lot of people would expect out of arrays, and arguably could have been what arrays could have been from the beginning.
That is my opinion; I’m sure there are people that disagree with me, but—

CDA: OFR is on the queue, on O(n) in V8 and a major slowdown. OFR?

OFR: Maybe I can quickly explain. We have holey arrays and non-holey arrays internally that do not necessarily correspond to the semantics you envision for your `Array.isSparse`, because, as was mentioned, for example, if you fill the holes then the array becomes not sparse according to your API, but internally the engine would still keep this as a holey array. And in general, I don’t think we would want to make any guarantees about what kind of internal representation we keep for the arrays. And there are additional considerations going into this. For example, if we would go back and forth between different shapes of arrays, that also internally makes our code more polymorphic, because now this array changes shape somewhere, and when it is used somewhere, suddenly we have to expect two different shapes. So actually it is quite often better to stick with the shape that you decided on early.

OFR: So yeah, in general, I think it really depends exactly on the spec of the API. Maybe sometimes we can give an answer in constant time, but I would not want to make any guarantee. Basically no guarantees; maybe we could, but typically no.

RBR: May I directly—this has come up a couple of times—may I directly respond to that or give a comment?

OFR: Yeah, sure.

RBR: I’m aware of that, and it is to be expected. I only know of one API in V8 that goes back when you start with a holey array, and that is called `Array.prototype.fill`. It changes the shape back to a dense one as an exception to the rule. Normally, that is not the case. As such, the question is more: do I always, 100%, have an O(n), which I currently have? Or do I hopefully—I say hopefully—have an O(1) instead?

OFR: Here’s the problem. If you use this API in a way that you first check this and then iterate the array anyway, or the keys anyway, you are slower doing this, right? Whereas if you did one loop, checking if it is sparse and doing your work in one go, it is faster. With the guarantees that we can give you, it is not even clear that you would get an overall faster solution to your problem.

CDA: All right. Sorry, just want to note that we have just a little bit over five minutes left for this topic. Lots of items in the queue. So please try to be brief. SHS.

OFR: I actually talked about the wrong topic. The other topic was: I don’t think a holey array is a massive slowdown in V8. I would be curious to actually see data where it makes or breaks a program. I would be surprised by that. Typically, if it goes into dictionary mode, yeah, that is a problem. But as long as it is just holey versus non-holey, it is typically not that big of an issue.

RBR: It is not directly related to that. It is more that checking whether an array is sparse is the most expensive part.

CDA: SHS.

SHS: Yeah, I was going to say, if this is guaranteed O(1) it could be usable, but if it is always slow, you are saying it is possibly not useful. I don’t know if MM was replying to my comment on non-determinism. But it is not O(1), because you have to take a slower path for a holey array. And if it is O(1) in most cases—most arrays are dense—maybe that is fine as is.

MM: Yes, I was responding to what I saw to be your question.
Interpreting it as implying nondeterminism, where there is a fast path potentially giving different answers for different implementations.

SHS: Yeah, that is right. But O(1) in the most common case and O(n) in some cases is still useful.

MM: I understand and agree it is well motivated. I would rather not have the API at all than a nondeterministic one.

SHS: I’m backing off on the nondeterministic one.

CDA: KM?

KM: It is only possible to be O(1) in V8; it’s always O(n) in JSC. The only reason we would have to add dense array support is simply that we have not found the dense versus holey optimization to be profitable. We didn’t implement it because we didn’t find it profitable. So to add that optimization simply for this API, which already feels like a bit of a footgun, would not be something that we would want to do.

CDA: WH?

WH: The non-deterministic case is also a funky communication channel—you can learn the history of how the array was made.

CDA: MF?

MF: Yeah. Some of what I want to say was already covered. I’m grateful for that feedback from the implementors. I do see a use case for this; I see the problem that you’re trying to solve here with this performance cliff. And if we can verify with data, as requested, that there is a performance cliff here, I do think it is worth solving. You know, a use case I see is that you have an API that takes what may be a sparse array as input, saves that, and stores it as a working buffer that it does many operations on over a long time. If you take that as input, you want to do a check ahead of time to see if it is sparse or dense, because you may want to do a copy into your own dense array, in fact densifying it, if necessary, so that all of the work that you do on it for a long period of time later is more performant. Again, this is assuming there is a big performance difference there. So if that can be shown, I do think that is a valid use case, but that does sound pretty much like what KM suggested: make this some kind of more efficient, guaranteed-dense data structure. And maybe that’s the route we want to go. But I do still support it; I do support investigating this problem area.

CDA: NRO?

NRO: Hi. Yes. Someone already briefly mentioned this. There is a potential footgun with this proposal in the common way we should expect it to be used: I have a function, it receives an array, and I check whether it is dense. If it is dense, I go into the fast path; if not, I go into the slow path, where, for example, for every property I check if the array actually has the property or not. However, the denseness of the array can change when reading properties of the array, with a getter. So we need to make sure that people do not choose between the fast and slow path by checking whether it is dense at the beginning, because then when they use the array the result might have changed. So, is this the way that the API is meant to be used, or is the expectation that before reading a property from the array you first check if it is dense?

RBR: That’s a very good comment. In general, I appreciate the feedback about it. I believe it is a difficult problem to solve right, as such, especially with the current feedback. I’m not certain anymore if this is an ideal API for it. I don’t know an alternative at the moment for it either.

NRO: Maybe—there was a proposal from MM about checking whether something is stable.
That means you can read things from it without worrying about side effects. Maybe that is something you’re looking for here: checking that the array has own properties and nothing else. Basically an array, but it is a different API from what you can see here, like a dense array that does not have accessors.

CDA: So we are at time for this topic. I have captured the queue in case we want to return to this later. I believe that we agreed earlier to go through all of the topics before you ask for consensus for advancement. Is that still accurate? Preference expressed by at least some folks. Okay.

### Speaker's Summary of Key Points

* List
* of
* things

### Conclusion

(see end of day)

## `Array.getNonIndexStringProperties` for Stage 1

Presenter: Ruben Bridgewater (RBR)

* [proposal](https://github.com/BridgeAR/array-get-non-index-string-properties)
* [slides](https://github.com/BridgeAR/array-get-non-index-string-properties/blob/main/slides/07.2025%20-%20proposal-slides.pdf)

RBR: Thank you. As I said before, I’m trying to simplify the `Object.propertyCount` proposal. And one part is actually getting the non-indexed string properties. We have a special case for Arrays and TypedArrays in this case, which we don’t have, for example, for sets or maps. When you add an entry to a set or map, you don’t get that back when you call `Object.keys`, but you do get any added property that you assigned to the set or map. Now, this is something not too typical, but when you want to have very correct code you have to check for it. And this is most crucial for arrays, for correctness. So that’s why I propose, with JHD, a small helper that would accommodate the need to know if any extra properties are on there. I believe the most common use case is to see what properties are on an array, and not only know the count in this case. If you get the count, you pretty much care about getting them. So that’s why this added method is there. First of all, the name is very long. JHD and I discussed the name quite a lot, and whether there is any more precise naming for this type of property. We couldn’t come up with anything shorter. That’s why it is currently like that. If anyone has a different name, I’m very happy to hear it. And the main idea is to just return everything that is outside of the regular array index range. In this case, that also prevents the ambiguity of what the index would look like. That’s why it is proposed on Array directly.

RBR: So, motivation, or to start again: arrays can have own enumerable string properties that are non-numeric indices. To handle them, you have to filter in quite a weird way. In this case it has to be a number starting at zero and going up to two to the power of 32, minus one. So it is a bit weird to filter these entries. And yes, this [example on slide] is done in code currently, to showcase the differences, for example, when you inspect an array. In Node it is definitely done; you can see that when you call `console.log`.

RBR: We also have, as I mentioned earlier, properties sometimes added on arrays. So maybe we do care about which ones exist, and this is the case when you call `match` or `matchAll` with regular expressions: they just return an array with those non-index string properties.

RBR: Python has something roughly similar as far as I could tell. The `list.__dict__` exposes non-indexed attributes that are publicly accessible.
It is not directly related, but the question is also that JavaScript is very specific about that. How we handle data types is quite different from other programming languages, which is the main reason why there aren’t a lot of similar cases in other programming languages.

RBR: So what does the proposed API look like? Very simple. We have the non-index string properties method on Array. We have the target. We convert it to an object. We could theoretically check that it is only an array instead, and throw a TypeError in this case. I’m totally open to either/or. In the end we care about really knowing these extra properties; when you call that method you should be certain it is an array already. So I don’t worry about either case. In the end, we return only the enumerable ones, and that’s pretty much it.

RBR: So, in this case, a TypeError, if we want to go that way, is something I would prefer a little bit over calling `ToObject`. This is actually wrong in here, in the slide. And then we only handle the arrays. We have to discuss array-likes though, and how we want to handle those; I believe array-like objects are always a little bit tricky to deal with. And the underlying idea was to simplify the `Object.propertyCount` proposal, and because it is a major issue only for arrays, that’s why I only put together the array proposal. If this is accepted and deemed a good proposal, I would probably also go ahead and suggest something similar for TypedArrays, because that’s the only other type where we can do something similar, and when you really care about correctness that would, you know, fill up the space and can handle all of these cases altogether.

RBR: The name, as I said, is long. But it seems to be the most clear for the intention. And if there’s an alternative, I’m very open and curious to hear it. We could extend that proposal to also, again, have that enumerable argument. I believe, actually, since we are coming back to that, it is a good coincidence to discuss how we might generally handle three states: if we have a situation where we care about three states, do we want to normally deal with it as a string or something else like that? Because in these proposals it’s coming up more often. I believe it would be good to just think about it in a more generic fashion.

RBR: Then, the length property is a little bit special, due to it being—I mean, it’s a magic property, pretty much, in JavaScript. And it should probably not be returned there. Whether it is non-enumerable or not, it is a bit questionable how to deal with `length`. I personally would normally not return it at all. That would be my suggestion, but that is an edge case we have to discuss if we want to go that route.

RBR: An alternative I also thought about was to just extend `Object.keys` with options where we can do something similar, but that would again give up the very explicit handling of it being only an array, and we have, again, the problem of what the index looks like. Is it actually all index types? Or only a specific range? And therefore, I didn’t go that route.

RBR: So yeah. I’m very curious to get input and to see if this makes sense, or if it would be better to maybe keep that as a separate part on `Object.propertyCount`. But I do believe it is actually necessary to get the properties at some point. We have code in Node that does that, for example.
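A rough sketch of what such userland code can look like today (a hypothetical helper; the index test follows the spec's definition of an array index, i.e. a canonical numeric string below 2**32 − 1):

```js
// Hypothetical userland fallback: own enumerable string keys that are
// not array indices (an array index is a canonical numeric string < 2**32 - 1).
function getNonIndexStringProperties(arr) {
  return Object.keys(arr).filter((key) => {
    const n = Number(key);
    const isArrayIndex =
      String(n) === key && Number.isInteger(n) && n >= 0 && n < 2 ** 32 - 1;
    return !isArrayIndex;
  });
}

const match = /(?<x>a)/.exec("a"); // RegExp match results carry extra properties
getNonIndexStringProperties(match); // ["index", "input", "groups"]
```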
And there is a V8 method we use for it to overcome the performance overhead otherwise.

CDA: Should we go to the queue?

RBR: Uh-huh.

MM: Yeah, so—

CDA: Sorry, MM. Hang on.

MM: Oh, I’m sorry.

CDA: We have NRO first on the queue with “long names are fine for these very specialized APIs”. And then MM is next.

MM: Okay. So, the problem that this would be solving—which I agree is a well-motivated problem; I would like to solve it; I mean, we face the problem—is that for all of the varieties of own properties other than the indexed properties, finding out about them currently requires enumerating through the indexed properties before you can find out about the other ones. Now, one of the principles of good primitives, for algorithms where you might be looking for many particular things, is fast reject of the voluminous cases. But you don’t have to be accurate with regard to providing what the user wants if you can provide a conservative envelope that contains what the user wants and somewhat more, but not something voluminous. The reason I raise all of this is that I find it strange for this one that you’re limiting it to enumerable string-named properties. I would find this useful—in fact, I would only find this useful—if it gave me a shortcut to find string- and symbol-named own properties whether enumerable or not. In other words, if this were phrased, for example, as an options bag on `Reflect.ownKeys` or some kind of variation on `Reflect.ownKeys`. I do not favor adding options bags to existing APIs, so I would rather it be some new method rather than an options bag on an existing one. But nevertheless, something like ownKeys that skips the indexed properties is what I was looking for. And for the same reason of a conservative envelope, I would not at all mind if it includes length, and the most parsimonious explanation of what this is would end up including length. So that would be fine with me.

RBR: So let me summarize, just because there were a couple of things. Including 'length' is good. And we should go for adding the enumerable property as suggested here, as an addition on that, right away. And sure, as for where we place it, it could be on `Reflect.ownKeys` instead. Okay, that might be an option. As for the performance: at least with what is used in Node at the moment, due to using the internal V8 method, there is exactly that skip part. So the performance difference is significant in this case for a bigger array, because the properties are sorted internally into different buckets. So they are first sorted in, let me think—

MM: Indexed properties come first.

RBR: Yeah, they come first. Then the string properties, I believe, and then the symbols are last.

MM: Yeah. The thing is, with indexed properties coming first, there is no way currently to find out about the other ones without paying the cost of enumerating through the indexed properties.

RBR: Yes.

MM: So, simply having a way to skip past the indexed properties is what I find to be a strong motivator.

RBR: That is exactly it, without the symbols, by the way. If we return those as well, I personally would be fine with that. The original idea around it was that it is currently possible to return the symbols separately. We just cannot currently differentiate index versus non-index string properties.

MM: Oh, right, there is an existing API for enumerating only the symbol-named properties.
RBR: So there is `Object.getOwnPropertySymbols`, which returns all enumerable and non-enumerable symbols.

MM: Okay. Including symbols is not a strong requirement here; you can do it yourself. But including non-enumerables is.

RBR: Yep, uh-huh.

MM: Okay. That’s all.

CDA: NRO?

NRO: Yeah. One more question about what you currently do. So I understand you use this, for example, in the log output, because it shows the array and then the named properties of the array. Do you currently use a for loop from zero to length to get the indexed properties, and then after that get all of the keys and filter them? Or do you just use a for loop over the object keys to go through the properties, relying on the key order putting the index properties first, in the right order?

RBR: There are different algorithms that Node uses, so it depends. One use case is the logging that you just spoke about. What is done there is that we call the extra properties—something like that, pretty much—which returns only the non-index string properties. We iterate with the for loop over the length of the array, I assume, and we check while iterating if there is a hole or not. If there is no hole, then everything’s good, and we just print it like that. If there is a hole, there’s another option: in Node we call `Object.keys`, because if you have a huge hole it is cheaper to call `Object.keys` on that. So then it iterates only over the keys and it is cut off from that point on. So it is a bit weird. These are edge cases; it would make the code much simpler if something like that existed, together with the other APIs, to know if something is sparse or not, or if something has these properties or not. It makes it much, much simpler to deal with.

RBR: Then, in assert, for example, to compare objects we also loop over the length and check if there’s a hole or not, but that just doesn’t do anything else. And then afterwards we go over all of the NonIndexStringProperties, which we receive through an internal V8 API that is not specced.

NRO: Okay. Thank you. I understand why you need to do this.

CDA: There is nothing else on the queue.

RBR: So, thank you for the feedback in this case. To summarize: last time I understood that the feedback was mostly positive. And what would be a requirement is pretty much to have the non-enumerable ones in there, and this from the beginning already. I would still have a question about having a separate proposal for TypedArrays in this case, because I believe it pretty much only applies to Arrays and TypedArrays; I would make it explicit on these two and otherwise not. Or whether we want to go for Reflect. I believe that would be a question from the committee, on my side, of how we want to proceed in this case. We still have this three-state part to discuss, and how we would want to deal with that. I could make a suggestion to make it a string in this case, for all three, where we pretty much have enumerability "none", for example, "all", or only enumerable, or something like that. I can think about more names if you want. Is that good? I take—

CDA: Now there are a number of items on the queue. So go to JRL.

JRL: Okay. I think this proposal depends on there being a good fast path inside of engines. I know V8 has this particular fast path you’re looking for. Do other engines maintain the keys for indexed and non-indexed separately?

RBR: Just without knowing for sure, I do believe we have to have something like that, because they are already sorted, right?
JRL: Yeah, but that could be done as two loops through the keys or something like that. If they have specifically split the same way that V8 has done, this is a very simple API that gives you access to essentially an already maintained array that already exists. And this affects a small discussion in Matrix about how we implement isSparse.

CDA: KM?

KM: I believe we do. We definitely have a property table that has all of the—yeah, it distinguishes the indexed keys from the non-indexed keys. I don’t know if it completely distinguishes the symbols, though. But I kind of assume we do, because of the way the enumeration works, where it splits them. If not, I guess in theory maybe this would be better as an iterable or something, where we could just skip over those ones, but it is probably too early to decide that here, I think; we can decide that later for the reflection, looking at it further.

CDA: MM?

MM: Yeah. The limiting of all of these array-like things only to arrays—you know, postponing the proxy question until the later discussion. The reason why I find that limit strange anyway is that early on, in ECMAScript 5, I believe, we were very, very careful to ensure that all of these higher-order array operations could all take a non-array as "this". And we specified what the behavior is, so it was well-defined on non-arrays, because we had, at least back then—and I would suspect we continue to have—many array-like objects, like DOM node lists, that are not arrays but are nevertheless treated as arrays for most purposes. And that includes, I believe, if you enumerate their properties, the enumeration order, even for non-arrays, is still guaranteed to do indexed properties first. Somebody should double-check me on that. But if that’s the case, then I would prefer not to see these algorithms limited to arrays.

RBR: It could also be `Object.getNonIndexStringProperties`.

MM: I’m not concerned about where the API is. It can still be on Array, like the array generic operations are on Array, but apply to non-arrays. The thing that I’m complaining about is when the algorithm says, if it is not an array, return false or something. I would prefer it to just state the algorithm so it didn’t care whether the operand was an array.

RBR: Right. How do we know the right index range in this case? For arrays we have two to the power of 32, minus one, whereas on TypedArrays we don’t.

MM: Yes. I think the fact that TypedArrays and strings, together, have a different criterion for indexed properties than arrays and all other objects do creates a problem. But I think the right way to address that problem is to simply admit there are two different concepts of indexes in the language, and we need to be clear when we’re talking about each one, rather than trying to do it by type differentiation. Though if you’re going to do it by type differentiation, I think the answer on that is clear: all of the objects that are neither arrays nor TypedArrays nor strings use the same definition as arrays do.

CDA: Richard?

RGN: Yeah, we raised that topic—actually, I think it was me that raised that topic last meeting. I believe the result of the discussion is that when you’re looking at properties generically, only the array-indexed properties have special treatment, just like is currently the case when you’re enumerating them.
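To illustrate the ordering being described here (a hypothetical snippet, not from the slides): own property keys of an ordinary object report array-index keys first, in ascending numeric order, before other string keys and symbols.

```js
// Array-index keys come first in ascending numeric order, regardless of
// insertion order; other string keys follow in insertion order, then symbols.
const obj = { b: 1, 2: "two", a: 3, 0: "zero", [Symbol("s")]: 4 };
Reflect.ownKeys(obj); // ["0", "2", "b", "a", Symbol(s)]
```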
MM: So you’re saying that the rule that for an array-like the indexed properties must be enumerated first would follow the TypedArray definition of an index property?

RGN: Right. It would vary somewhat based on where the method appears. If you’re in an object context or an array context, as indicated by where you got the function from, then it would only be the array index properties that are in a special category. If we had something specific to TypedArrays, then that would be the only thing that uses the special definition for TypedArray indexes.

MM: So, clearly everything has a defined enumeration order. And that enumeration order says index properties first. So for all values in the language there must be some type-based determination of what the appropriate definition of index properties is. And this could be generalized by simply adopting that same definition on a per-type basis.

RGN: Well, for property enumeration it is not type-specific when you’re enumerating. It is only the array index definition that is special. Even if you’re enumerating the properties of a TypedArray, what you will see is the array index ones come first in numeric order, and then anything that exceeds that in lexicographic order.

MM: So indexes on TypedArrays might appear after non-index properties?

RGN: Yeah, if you’re enumerating them with an Object or Reflect function.

MM: Since we’re only asking for stage one on this, I’m going to postpone diving into this. I’m glad I raised it.

CDA: NRO?
NRO: I think it is good to present the proposals separately. All of the things we’re presenting have separate use cases, and yes, they are in the same space of more fine-grained reflection on objects, but even with their different use cases it might be possible that some of these end up as part of the language and others don’t. And it’s possible that for different use cases it is good to have different, separate APIs, depending on what the APIs end up looking like. So rather than deciding now whether it is worth doing or not, I think we should go through them one by one as their own proposals. Maybe in the future we have one API that does everything, but it is too early to say that.

RBR: So I believe there was very valuable input. I would definitely try to look at the option, like how to name that, and at the three states, and to make sure that it is there by default for now. Gus, if you wouldn’t mind, we could also just meet afterwards maybe and potentially discuss the other API. I’m not sure if you would be interested in that. And I would think about where to place this to make it more generic as such. I believe the most simple way is to put it on Object; that is the current approach that I take along from this discussion, and that is fine. After the last one, I hope, because it felt like the overall idea is still that we want to proceed with something like that. So even though there are still changes ongoing, I hope we can get it to stage one after the last proposal.

CDA: All right. Thank you. We will move on to the final topic, if that’s all right.

### Speaker's Summary of Key Points

* List
* of
* things

### Conclusion

(see end of day)

## `Object.getOwnPropertySymbols` options for Stage 1

Presenter: Ruben Bridgewater (RBR)

* [proposal](https://github.com/BridgeAR/object-get-own-property-symbols-options)
* [slides](https://github.com/BridgeAR/object-get-own-property-symbols-options/blob/main/slides/07.2025%20-%20proposal-slides.pdf)

RBR: One second. Whoops. Yep. All right. So, again, after looking into these use cases with the other APIs, a very frequent—or for me relatively frequent—thing I saw is that when someone uses `Object.getOwnPropertySymbols`, about 90% of the time they don’t want to have the non-enumerable ones; they only care about the enumerable ones. They want to filter out the non-enumerable ones, which causes extra overhead: we now have to iterate over the whole array, where we already allocated additional memory and spent CPU cycles on getting those, and now we just create a new array where we remove them afterwards.

RBR: So I would like to add an option to filter these up front, so we don’t even have to do that work. Where it is relatively frequent is in serializers or loggers. Sometimes libraries attach properties with symbols to hide implementation details, because most users only use `Object.keys`; and when they don’t use classes with private properties, they often use symbol-keyed properties to hide things away. In this case, they often also do that in a non-enumerable way, so they are not logged, because the serializers filter them. Which is good.

RBR: So right now we do have that overhead of calling `Object.getOwnPropertySymbols`, in addition to filtering and then checking the property descriptor for whether it is actually enumerable or not. Which is one way of doing it—actually, it is the only one I can think about for this one.
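That filtering might look like the following sketch (a hypothetical helper illustrating the pattern described here):

```js
// Today's workaround: fetch all own symbols, then drop the non-enumerable ones.
function getEnumerableOwnPropertySymbols(obj) {
  return Object.getOwnPropertySymbols(obj).filter(
    (sym) => Object.getOwnPropertyDescriptor(obj, sym).enumerable
  );
}
```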
And it is extra code, and there is no way of optimizing it currently. And I would like to just be able to do that. The current APIs that we have, which we have spoken about, are sometimes a bit weird, I do feel that, because it is inconsistent to me how they are handling properties. `Object.keys` is only returning enumerable string keys, and `Reflect.ownKeys` returns everything, and `Object.getOwnPropertyNames` returns the string names, and `Object.getOwnPropertySymbols` returns symbols, enumerable and non-enumerable ones. And in most cases people care about only the enumerable ones; especially for symbols, there is no API at the moment to handle only the enumerable ones. We do have `Object.keys` for enumerable strings, so that’s good. That is why I believe there is a need for something else there.

RBR: When I looked for it: Lodash, Node, and many more filter the enumerable ones at the moment when checking for symbols. I have a couple of links here and I can show code if we want.

RBR: And the addition doesn’t break anything, because we just add an option. There are no current options, so that wouldn’t hurt. Again, we have the three states we have to discuss, to figure out how to deal with them. Because I believe it should be three states; from the initial discussion, I feel like we should always have those. And especially with adding more of these options in a different proposal, it would make sense to have it consistent across all of these. It is quite straightforward in usage, I believe: you just define enumerability as false or true, check it afterwards, and get the expected properties.

RBR: The specification is in here; I don’t believe we have to go through it right now. It is more about the idea, whether we want to push for that or not. I did think about alternatives. We could, of course, have individual methods that only get the enumerable ones instead. But as discussed earlier, I believe more methods actually make it more difficult to find these. An option is normally simpler; these will autocomplete, and users will definitely find the options. I’m quite confident the option is much easier to find than an extra new method that is added to the spec. A filter could be added to `Reflect.ownKeys`. We could do that; it already came up in the earlier proposal. So that’s a possibility. Or we keep everything as it is, but I don’t believe that is quite good.

RBR: Yeah. Polyfillable. We don’t have any side effects. Etc., so I don’t believe there is any downside. Should we add the "all" value, similar to the others? In this case, I believe it is mostly because the current default is "all". It is a little bit different, because all of the APIs have a different default or behavior. So should we add "all" or should we not? I do believe yes, for consistency reasons. We could add additional features like configurable and writable, but I don’t think we need that right now; we would have to have a strong reason to add it, finding more use cases. Mostly it is about enumerable in the first place, and that addresses a big performance problem in the ecosystem right now.

RBR: So yeah, I’m pleased to hear your comments and to discuss it more. I do have to publish a polyfill for these, maybe, and see how things are going.

CDA: KG?

KG: Yeah. I’m fine with the solution space for stage one. I feel extremely strongly that the proposed solution is wrong.
I would not accept this method going forward to be anything other than a new separate method, not an option in an options bag. We have `Reflect.ownKeys`, we have `Object.keys`, we have `Object.getOwnPropertyNames` and `Object.getOwnPropertySymbols`; these are all other points in the matrix. If you’re going to fill out the matrix, the way to do it is with a new method, not an options bag. If the proposal is filling out this hole where there are other methods which do something very similar, it needs to be its own method. Now, if we’re talking about adding other kinds of variability here—like, you know, if you want to filter only the writable properties or only the configurable properties, whatever—I’m okay doing those as an options bag; they don’t have existing methods that are only varying different parts of this matrix. But we have `Object.keys`, `Reflect.ownKeys`, getOwnPropertyNames, getOwnPropertySymbols—the only acceptable way to solve this is with a separate method.

RBR: I have a question. Why is there a strong reason against, for example—we could say we want to add the option for enumerable on all of these.

KG: I mean, `Object.keys` versus `Object.getOwnPropertyNames` is already the enumerability. We already answered how we get enumerable or not: with a separate method. `Object.keys` versus `Object.getOwnPropertyNames`: those are exactly the same except that one filters by enumerability. So if you want to add anything that is exactly like `Object.getOwnPropertySymbols` but filters by enumerability: this is literally a two-by-two matrix of string versus symbol, and enumerable versus non-enumerable. There is one hole, and the way to fill out the hole is to match the other three in the matrix.

CDA: KM?

KM: I guess I agree with what KG said there, a method for consistency. But if you care deeply about performance on this, you don’t want an options bag, because that option has to be allocated every time you call this. In theory, in JSC and probably the other engines, in their most optimizing compilers they might sink the allocation, but you have to be deep, deep, deep inside of the optimizing pipelines of the engines, and the code has to be super, super hot before you get the efficient behavior. So you probably want a separate method. Icing on the cake, I suppose.

MM: So before I ask my question, I want to ask a meta question. I have a lot to say, as I mentioned, for all four proposals combined. Is it appropriate for me to start those discussions now? Or do we want to do a separate demarcation of questions specifically on this proposal versus the set of four proposals together? With nobody saying anything, I’m just going to go ahead and jump in to asking things that might apply to particular proposals and some things that are explicitly about the proposals together.

MM: So, my general point: when I look over all of this, and look over even the existing API, I think we have too much existing API, and I’m very shy about adding yet more API with regard to enumerating properties. I find myself either using the narrowest, `Object.keys`, or the broadest, `Reflect.ownKeys`. And what it seems like we’re trying to do—what we have been sort of partially trying to do, if we want to read rationalization into the existing API—is to invent more of a query language to anticipate accurately what various code might specifically want to enumerate.
And the thing I like about ownKeys is that I don’t have to learn or express myself in some query language that may or may not, and usually does not, anticipate what I exactly want to ask about. I just enumerate something that is guaranteed to contain what I’m asking about and then do post-filtering.

MM: The one change that I have been hearing about that I find well motivated is when the things I would need to filter out are voluminous, and therefore it’s very expensive to filter them out by manually enumerating. And this is a place where that stands out. To a first approximation, the only case where that stands out is index properties.

MM: The other thing that the set of proposals, specifically the first proposal, is about is also trying to give you counts of things, rather than having to get the enumeration and then ask for the count of the enumeration. And once again, that is motivating when the enumeration is large. It is also strongly motivating when the question you’re asking is a question you would ask many times, for example in a tight loop, which I generally have not seen on length queries. So my preference for all of these is the opposite of NRO’s, which is that they all be put together into one stage one proposal, with the emphasis being on algorithmic speedup for user code that is willing to do some post-filtering, but not willing to do a lot of it. And only when we expect that the skipped user filtering is—where the skipping can be done in O(1), or at least something substantially less than O(n), across the high-speed engines, because if it is O(n) to do the skipping, we didn’t gain that much over doing the filtering ourselves.

MM: So that’s the first big point, and I think the biggest point, that applies across all of these proposals.

RBR: So, one part I was a bit uncertain about that you spoke about, in particular, for example, here: when you get the symbols, you always have to check for the enumerability while filtering. It doesn’t matter if you have enumerable or non-enumerable ones, because you have to do the check one way or the other, and that is the expensive part. And it will be expensive even for a few properties.

MM: So for me, the question I would be asking is whether it is worth adding API surface to solve the problem. How often do users encounter objects with a massive number of symbol-named properties? I have certainly never come across such a thing.

RBR: It depends on how many they have hidden, and on how many properties. For example, in Node, it is a very common pattern: instead of having a class with private properties, we have historically used symbols to hide things away. And it also depends on how the code is written; it was simpler to do it that way. So—

MM: So what is the maximum symbol count on a single object that you have encountered in practice?

RBR: It is probably roughly, I don’t know, 10 to 15.

MM: Okay. Not 10 to the 15th. Just 10 or 15?

RBR: Yes.

MM: So I would prefer not to add API surface when the user can just do post-filtering for what they are interested in. Because you’re not going to anticipate all of the different distinctions that users are interested in. And, you know, throwing away 14 out of 15 is just not a terrible cost, unless that itself is done in a loop, and I haven’t seen these things themselves be nested within loops.

RBR: It is more—yeah, yeah.

MM: It is more on the outside of the loop. They are not on the inside.
So I would just not solve the problem. You know, just leaving it to users to do their own enumeration and post-filtering doesn’t try to anticipate what the particular distinctions are that a user might want. And the motivated exceptions are the ones where both there is a voluminous number that we expect users to encounter in practice often enough, and the API, we expect, would provide a significant speedup over user post-filtering across the high-speed engines.

RBR: So that part would be faster, because getting the enumerability is actually expensive—the API call itself.

KM: Is that expensive because engines just don’t optimize it? That is something engines can easily do; it has never shown up as hot, so they never bothered. Which probably means that either our tests are measuring the wrong thing or you are doing something unusual.

RBR: It is indeed, I believe, not that optimized in V8, for example.

MM: Right. But to expand on KM’s question—I’m sorry to interrupt, but to expand on KM’s question—essentially, there are two different asks of the engine in this conversation. For the ones where we think we really need it to be faster, we could either suggest that engines provide a new API, or we could just suggest that they fix the underlying thing that needs to be faster, if it can be fixed in a local and simple way.

KM: ZTZ?

ZTZ: Just a quick thought. MM said there are a lot of APIs already. Maybe instead of multiplying all of the possibilities into more functions, we could consider the opposite and have two methods with an options bag that covers absolutely everything: that would be symbols included or not included, as well as enumerability and so on. Which also means that with the right combination of options in the options bag, I think, the count of an array with holes would be different from that array’s `length`, which also covers detecting sparse arrays. I don’t know if this makes sense, just a fresh thought. And since this is stage one, maybe this is going to be useful.

RBR: That was the original proposal.

ZTZ: Okay. I’m new here.

RBR: The proposal last time.

CDA: WH?

WH: The proposal doesn’t work on array-likes. Just because there are four numbered properties and the length is four doesn’t mean you don’t have a hole.

CDA: MM?

MM: Yeah. So now, I want to bring up the issue of proxies. There are several issues related to proxies, and for stage one it is adequate for me to just mention them. Certainly nothing having to do with proxies is a gating issue for entering stage one. You can have different particular algorithms written down that for non-proxies, or non-host exotic objects, are not observably different, but for which proxies make things observable that are otherwise not observable. And the algorithms that are written down, I think, are not really anticipating proxies well. So for example, what happens when ownPropertyKeys tells you that something exists, and you turn right around and do a getOwnProperty on it, and getOwnProperty says it doesn’t exist?

JHD: MM, can I jump in real quick? We followed what `Object.getOwnPropertySymbols` does, which is basically, if that happens, we skip that property. If you’re generally just saying be aware this can happen on exotic objects, including proxies: that will absolutely be accounted for before seeking stage 2 or 2.7.

MM: Okay. Great.
JHD: If there is a specific type of behavior you expect or want to see, of course, please file an issue or let us know. We are aware it needs to be handled.

MM: Okay. That’s great. That is my major concern about proxies.

MM: The other issue is that any time we’re trying to call out arrays as special, that needs to apply to proxies on arrays as well. Because the existing `Array.isArray` has the taxonomy where proxies of arrays are considered arrays. That is used, for example, by `JSON.stringify`, which uses the same criteria as `Array.isArray` to determine what printing algorithm it is going to use.

RBR: I’m not totally certain to what proposal that applies in this case.

MM: So, mostly to `Array.isSparse`, where we talked about what it does on non-array objects. If we generalize it so that it just applies to all objects, and then it is type-specific what definition of index it’s using, then there’s much less of an issue, because then obviously it would include proxies on arrays, because it includes proxies in general. There would still be the issue that, if regular objects use a different definition of index than arrays use, then there is only the, you know, comparatively minor issue of just being careful to ensure that a proxy on an array uses the indexed—I’m sorry. It is—

JHD: Objects and arrays use the same concept of index.

RBR: Yep.

JHD: As do proxies.

MM: `Array.isArray` and `Array.getNonIndexProperties`—it is more `Array.getNonIndexProperties`. Out of all four proposals, that is the one I find most well-motivated.

RBR: Yeah, I mean, with the nonIndexStringProperties, I believe it felt like it should be more generic, so we could just apply the same to all objects and then just check internally what type it is: if it is a proxy on an array, then the array index would apply; if it is a proxy on something else, then the other index applies.

MM: No, no. I’m sorry for having to surface this non-uniformity in the APIs that I designed, but if it is not an array, then even if it is a proxy on a TypedArray, the proxy on a TypedArray should just act, for purposes of the API, as a non-array. We only surface through the proxy interface whether it is an array or not—I’m sorry, whether it is an array or a function or other. And that’s the extent to which you can directly see—

RBR: But I believe that is fine, because the index type in this case is fine.

MM: Yeah.

RBR: So that’s okay, I believe. I don’t see the issue there.

MM: Okay. Good. So, that was it for my postponed questions. And my big one is simply: what I do and do not find motivating with regard to justifying new API surface, and my desire to see all of these considered together across all four proposals, to be addressed with smaller total API.

### Speaker's Summary of Key Points

* List
* of
* things

### Conclusion

* List
* of
* things

CDA: All right. We are just about at time. And we are almost 10 minutes into what would normally be the lunch break, but we have no other topics for the afternoon. I know some people have very strong preferences on not breaking for lunch and not returning later. So to that end, it seems like this would be the time to formally request consensus for advancement on your proposals.

RBR: Yep. So with the first one, I’m a little bit uncertain now whether it should really be one or not, because there were very different opinions now from the committee.
And actually, even partially going back to the former proposal was requested. So in this case, I do believe the overall idea of the first one was `Object.propertyCount`. I don’t know if there is anything standing against stage two; everything is in there, I believe, for that. With the others: I believe with `Array.isSparse` it felt like there was a lot of discussion around this topic. I’m not sure if we want to progress that to one yet; that was my feeling. I did see a couple of aspects to think about more strongly, or maybe just getting input on how to address it differently, and on the problem space.

RBR: With `Object.getOwnPropertySymbols`, whether to provide an API or not: I understand that we could argue it should be optimized in the engine instead, because we don’t have so many properties that are symbols. And that is an aspect I can see. As such, I can also imagine not pushing that forward from my side—to first get that in place before adding a new API. I would be fine with that.

RBR: And the other one was, which one was it? The `Array.getNonIndexStringProperties`, how is it called? Get—`Array.getNonIndexStringProperties`. So for this one, I felt like the committee was pretty much in favor of the idea. So it would be great to progress that to stage one, in my perspective.

CDA: All right. So, as WH’s comment on the queue points out, we will be considering these individually. MM also has a comment on the queue that he didn’t want to speak to, but I’m going to push him to speak to it anyway: “object to stage two because I want to look at them together”. MM, can you elaborate on this?

MM: Yeah. I think that all four together are addressing very related issues. The fact that there are four means that the first one by itself doesn’t cover what we would like altogether to cover. And together they’re too much API surface for the functionality that they’re providing. So yeah, I object to stage two for propertyCount by itself. I would prefer, whether they are put into one proposal or not, to accept all of these into stage one—or, you know, whatever subset there is general agreement on for stage one—and then to consider the overall problem space across them, for me, as if they were one proposal, because I would like minimum API for the totality that is well motivated.

CDA: Okay. No, I understand your perspective. Where I’m a little bit confused, I guess—and JHD and RBR can correct me here—this was originally one proposal, and the committee explicitly requested—

MM: Oh, is that right?

JHD: Yes, that’s right.

MM: Okay. Okay. Thanks for reminding me.

CDA: We have to pick which.

MM: You’re exactly right. I have been in similar frustrating situations. That’s a very good point. I’m still not ready—given the issues I’d like to discuss that are touched on by these proposals, we don’t have to unify them, but I would like to hold `Object.propertyCount` from stage two until we understand better the issues across the other ones.

CDA: Understood.

RBR: If I may add one point to that: `Object.propertyCount` is a little bit distinct from `Object.getOwnPropertySymbols` with the option, as well as `Array.isSparse` and `Array.getNonIndexStringProperties`, because those return the actual properties, while propertyCount is really only about "give me the count". I believe all of these have a very distinct value from where they are currently standing.
MM: No, I understand that, and I do appreciate it. The issue is that your options bag is basically designing a query language in anticipation of what queries the user might want to ask. And what distinctions that query language should or should not be able to make should, I think, be co-designed with the queries that we want to support with regard to enumeration.

CDA: All right. So, noting MM’s objection: KG also objects to stage two. KG, do you want to speak to that?

KG: I have not been convinced of the utility of all of the various options to `Object.propertyCount`. I’m convinced of `Object.propertyCount` without the other options, but not prepared for this to go to stage two with the other things in it. I would be happy to continue discussing that. If you want to open an issue that gives me some examples of times when people need counts for all of the various permutations, I can certainly be convinced of the utility. But I’m not right now convinced of the utility.

CDA: And MF is on the queue with a plus one for KG’s comments. We are not getting stage two for `Object.propertyCount` today. Let’s move to the next one in the list, `Array.isSparse`. Do we have support for stage one for `Array.isSparse`?

JHD: A few of the queue items were giving a plus one for stage one for all of them. So there has already been support for stage one for all of them. Just making sure.

CDA: Yeah. MM expressed support for stage one for everything. That is true. But we’re going to go down the list individually and just make sure we are clear for the record. WH?

WH: Which item are we—

CDA: We are on `Array.isSparse` for stage one.

WH: I’m unconvinced on this, partly based on the discussion in Matrix where it became clear that these will not be O(1) algorithms. If we add them to the language, they will become an attractive nuisance; people will call them a lot more than they should, and they will slow down programs. So I’m unconvinced that there is any solution to this space which would not have adverse consequences on performance.

CDA: Okay. Just to be clear, is your concern a blocking concern? For stage one?

WH: Ah—I mean, I—ah—

CDA: KM is also on the queue.

KM: I need to be convinced of this more. So if you’re not going to say it, WH, I can do it for you.

CDA: A reminder that stage one is just about dedicating time to investigate if there is a solution to the problem space, not deciding on a solution.

KM: The way the problem space is defined is detecting if an array is sparse or not. And that problem statement seems intractable to solve: it depends on the engine implementation, it is very engine-implementation-specific, and those are not the kind of implementation details I would ever want to reveal. So I would need to be convinced that any solution in that space is even possible first. I mean, you could say “I want to solve the halting problem”, and I guess you could say that is a valid thing we want to explore, but I can’t imagine any viable thing I would approve in that space. I don’t know if that is a sufficient argument to block stage one.

WH: Thank you, KM, you made the point I was trying to make. Given input provided by the implementors, I see no evidence at this point that there exists a solution to the problem space as defined.
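For context, a minimal userland sketch of what such a check has to do, illustrating the cost WH and KM are pointing at: it is linear in the array length, and whether an engine could answer faster from its internal element storage is exactly the implementation detail KM would rather not expose.

```js
// Userland sparseness check: probe every index for a hole. O(n).
function isSparse(arr) {
  for (let i = 0; i < arr.length; i++) {
    if (!(i in arr)) return true; // index i is a hole
  }
  return false;
}

isSparse([1, 2, 3]); // false
isSparse([1, , 3]);  // true: index 1 is a hole

const big = [];
big[999] = 1;
isSparse(big); // true: indices 0..998 are holes
```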
CDA: Okay. Understood. Yeah, KM, we don’t qualify whether objections are valid or not, so we’re not going to go down that route. There is a plus one to KM’s comments in the queue. There is support for stage one from ZTZ, OFR is opposed to any speed guarantee or mention, and MAH is on the queue to confirm a problem exists first. So this sounds like a pretty clear signal from the committee that we are not going to advance `Array.isSparse` today. But just a reminder that a no is not necessarily final, and if you came back to the committee later with a convincing problem statement or the benchmarks, etc., that MF mentioned, there could be a path forward.

CDA: Let’s go to the next one: `Array.getNonIndexStringProperties` for stage one. As mentioned previously, MM supported this for stage one already, and USA also supports it for stage one.

USA: Yeah. This seems like it is—sorry, I can just speak to that a little bit. It is adding new functionality and it’s named in a nice way. So we’re convinced.

CDA: Yep. Strong support from (?) on this one. And ZTZ. So a lot of support. Any objections to stage one on this? Not hearing anything. Nothing on the queue. So congratulations, you have stage one for this one. It is a mouthful, that’s why I’m not saying the whole thing out loud again. And then finally, `Object.getOwnPropertySymbols` options. Do we have any support for stage one here? We have support from MM. Any other voices of support for this?

KG: I don’t like the name. But I’m happy with the problem space.

CDA: Okay. So, you have support from KG. Any objections to stage one? MF also supports stage one, and hopes to see it as `Object.symbols` in stage two. MF, did you want to speak?

MF: No.

CDA: Okay. I already called for objections. Not seeing anything on the queue. Okay. I think that we will say that you have stage one for this proposal as well. Okay, hang on. ZTZ has a comment: not convinced it will be used in the wild, but supports stage one. Fair enough. All right. Congratulations, you have stage one. It’s a bit of a mixed bag for the efforts today, but congratulations on the ones that advanced.
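For reference, the two problem spaces that advanced, in today’s terms. The proposed calls in the comments are hypothetical sketches, and both names are explicitly still under discussion.

```js
// Non-index string-keyed properties on an array come back mixed in with
// the indices today:
const arr = [10, 20, 30];
arr.total = 60; // a non-index string-keyed property
console.log(Object.keys(arr)); // ["0", "1", "2", "total"]
// A hypothetical Array.getNonIndexStringProperties(arr) would return ["total"].

// Symbol-keyed properties can be listed, but not filtered (for example by
// enumerability) without a descriptor lookup per key:
const tag = Symbol("tag");
const o = { [tag]: true };
console.log(Object.getOwnPropertySymbols(o)); // [Symbol(tag)]
// The stage one proposal explores an options bag, e.g. a hypothetical
// Object.getOwnPropertySymbols(o, { enumerable: true }).
```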
From 2922ba1ef35dfd0b2766ea3d3838740d860279ee Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Aki=20=F0=9F=8C=B9?=
Date: Wed, 20 Aug 2025 10:40:47 +0200
Subject: [PATCH 2/3] Add Intl Era and month code summary from presenter

---
 meetings/2025-07/july-30.md | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/meetings/2025-07/july-30.md b/meetings/2025-07/july-30.md
index c334ed4..97124bf 100644
--- a/meetings/2025-07/july-30.md
+++ b/meetings/2025-07/july-30.md
@@ -106,14 +106,13 @@ USA: Good. Thank you so much, DLM, and others for contributing. I’ll see you l

### Speaker's Summary of Key Points

-* The Intl Era and Month Code proposal was presented to the committee for advancement to Stage 2.7.
-* A brief description of the proposal was prescribed including the scope and rationale.
-* There was discussion regarding PR69, which changes the spec text to transform some sorts of the spec from prose to algorithmic spec steps.
-* Both of the assigned stage 2.7 reviewers, SFC and EAO, have signed off on the spec.
+* The intl-era-monthCode proposal was presented for stage 2.7
+* Prior to the presentation, TG2 spent a number of months discussing the details of this proposal, and the spec text had been updated to reflect those discussions as well as their results upstream in CLDR
+* The champions presented the changes briefly and explained the scope, as well as the limitations in terms of avoiding overspecifying certain details

### Conclusion

-The proposal conditionally reached Stage 2.7, dependent upon final editorial approval by the ECMA-402 editors.
+* The champions requested stage advancement, and consensus was reached after a brief discussion around the last remaining editorial change

## Module Import Hook and new Global for Stage 1

From 6e3e0aaad700a212cb92da6be5a5b409c8807805 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Aki=20=F0=9F=8C=B9?=
Date: Wed, 20 Aug 2025 14:53:52 +0200
Subject: [PATCH 3/3] Add missing names to each day's table

---
 meetings/2025-07/july-28.md | 18 +++++++++++++-----
 meetings/2025-07/july-29.md | 17 +++++++++++++++--
 meetings/2025-07/july-30.md | 19 ++++++++++++++++---
 meetings/2025-07/july-31.md | 14 +++++++++++++-
 4 files changed, 57 insertions(+), 11 deletions(-)

diff --git a/meetings/2025-07/july-28.md b/meetings/2025-07/july-28.md
index c79a281..d24cd60 100644
--- a/meetings/2025-07/july-28.md
+++ b/meetings/2025-07/july-28.md
@@ -10,7 +10,6 @@ Day One—28 July 2025
| Dmitry Makhnev | DJM | JetBrains |
| Waldemar Horwat | WH | Invited Expert |
| Guy Bedford | GB | Cloudflare |
-| Chris de Almeida | CDA | IBM |
| Daniel Minor | DLM | Mozilla |
@@ -29,8 +28,17 @@
| Tab Atkins-Bittner | TAB | Google |
| Istvan Sebestyen | IS | Ecma |
| Daniel Rosenwasser | DRR | Microsoft |
-| Michael Ficarra | MF | F5 |
| Andreu Botella | ABO | Igalia |
+| Chris de Almeida | CDA | IBM |
+| Chengzhong Wu | CZW | Bloomberg |
+| Justin Ridgewell | JRL | Google |
+| Kevin Gibbons | KG | F5 |
+| Mathieu Hofman | MAH | Agoric |
+| Michael Ficarra | MF | F5 |
+| Mark S. Miller | MM | Agoric |
+| Rob Palmer | RPR | Bloomberg |
+| Stephen Hicks | SHS | Google |
+| Ujjwal Sharma | USA | Igalia |

## Opening & Welcome

@@ -198,7 +206,7 @@ CDA: Okay. I think we formally have to do that through plenary. Right? We’re c

RPR: Given this was a surprise, I guess—does—maybe—ask, does anyone want more time to think about whether MF is okay as convenor for this task group? Okay. And, in which case, I think we could probably just go ahead right now and ask for—oh, go ahead. SMH.

-SMH: Yes, I just want to say, if you have no opposition and if the TG is proposing a convener is accepted by the technical committee as a whole and through the chairs, and you can do in this short notice, you should be fine.
+SHN: Yes, I just want to say, if you have no opposition and if the TG is proposing a convener is accepted by the technical committee as a whole and through the chairs, and you can do in this short notice, you should be fine.

CDA: I support plus one for MF for joining the conveners group of TG5.

@@ -1084,11 +1092,11 @@ MM: Great. Thank you.

CDA: Next?

-CWU: Yeah. I think this is worth mentioning that the HTML integration is not going to be ended up in the proposal specification text. But I think it is worth to have it together to be reviewed with the proposal spec text together to be advancing to 2.7. Because, I mean, that would helpful to access use cases with the proposal and on the web integration.
+CZW: Yeah. I think this is worth mentioning that the HTML integration is not going to be ended up in the proposal specification text. But I think it is worth to have it together to be reviewed with the proposal spec text together to be advancing to 2.7. Because, I mean, that would helpful to access use cases with the proposal and on the web integration. ABO: I guess I should point out, it is possible that the spec has to be updated to add things that the web specs can use from the proposal. It would not be changing any of the behavior in the, on the Ecma-262 side of things. It is just adding algorithms for the web specs. -CWU: Yeah. Sure. +CZW: Yeah. Sure. CDA: All right. Nothing else on the queue. diff --git a/meetings/2025-07/july-29.md b/meetings/2025-07/july-29.md index 0a5c201..bb42fbf 100644 --- a/meetings/2025-07/july-29.md +++ b/meetings/2025-07/july-29.md @@ -7,7 +7,6 @@ Day Two—29 July 2025 | Name | Abbreviation | Organization | |------------------------|--------------|--------------------| | Waldemar Horwat | WH | Invited Expert | -| Chris de Almeida | CDA | IBM | | Jesse Alama | JMN | Igalia | | Dmitry Makhnev | DJM | JetBrains | | Michael Saboff | MLS | Observer | @@ -25,6 +24,20 @@ Day Two—29 July 2025 | Sergey Rubanov | SRV | Invited Expert | | Daniel Rosenwasser | DRR | Microsoft | | Rezvan Mahdavi Hezaveh | RMH | Google | +| Andreu Botella | ABO | Igalia | +| Chris de Almeida | CDA | IBM | +| Justin Ridgewell | JRL | Google | +| James Snell | JSL | Cloudflare | +| Kevin Gibbons | KG | F5 | +| Keith Miller | KM | Apple Inc. | +| Matthew Gaudet | MAG | Mozilla | +| Mathieu Hofman | MAH | Agoric | +| Michael Ficarra | MF | F5 | +| Mark S. Miller | MM | Agoric | +| Nicolò Ribaudo | NRO | Igalia | +| Richard Gibson | RGN | Agoric | +| Stephen Hicks | SHS | Google | +| Ujjwal Sharma | USA | Igalia | ## How to make thenables safer? @@ -463,7 +476,7 @@ RGN: Yeah, I would be happy with that outcome. USA: Okay. -JDH (on queue):, plus one for stage three conditional for test approval. +JHD (on queue):, plus one for stage three conditional for test approval. KM: Yeah, I don’t think we have a problem with stage three, the feedback I got from our DOM folks is that we probably won’t ship until any kinks and everything else, and everything is worked out on the DOM integration side of this. I don’t know fully know what that means in hindsight I should have asked clarification before, but if there is something there from shipping, but we will probably implement the feature before then. diff --git a/meetings/2025-07/july-30.md b/meetings/2025-07/july-30.md index 97124bf..0600314 100644 --- a/meetings/2025-07/july-30.md +++ b/meetings/2025-07/july-30.md @@ -8,7 +8,6 @@ Day Three—30 July 2025 |------------------------|--------------|--------------------| | Dmitry Makhnev | DJM | JetBrains | | Waldemar Horwat | WH | Invited Expert | -| Chris de Almeida | CDA | IBM | | Jesse Alama | JMN | Igalia | | Daniel Minor | DLM | Mozilla | | Samina Husain | SHN | Ecma International | @@ -24,6 +23,20 @@ Day Three—30 July 2025 | Daniel Rosenwasser | DRR | Microsoft | | Rezvan Mahdavi Hezaveh | RMH | Google | | Kris Kowal | KKL | Agoric | +| Andreu Botella | ABO | Igalia | +| Chris de Almeida | CDA | IBM | +| Gus Caplan | GCL | Deno | +| Justin Ridgewell | JRL | Google | +| James Snell | JSL | Cloudflare | +| Kevin Gibbons | KG | F5 | +| Keith Miller | KM | Apple Inc. | +| Matthew Gaudet | MAG | Mozilla | +| Mathieu Hofman | MAH | Agoric | +| Michael Ficarra | MF | F5 | +| Mark S. 
Miller | MM | Agoric | +| Nicolò Ribaudo | NRO | Igalia | +| Richard Gibson | RGN | Agoric | +| Steven Salat | STY | Vercel | ## Opening & Welcome @@ -320,11 +333,11 @@ MM: Can we just ask if there are any objections to stage one? DLM: Fair enough. Any objections to stage one? -JLS: Not a strong objection, but I would like to see isolation(?) around the wording of the problem a bit more. It just troubling me that is too kind of open-ended. +JSL: Not a strong objection, but I would like to see isolation(?) around the wording of the problem a bit more. It just troubling me that is too kind of open-ended. KKL: I agree. Let’s refine that out of band to specifically mean the module map, the specific things we wish to isolate that are not intrinsics. -JLS: I just want to be clear, I’m not wording it as an objection for stage one, but if we can get that cleared during the plenary. +JSL: I just want to be clear, I’m not wording it as an objection for stage one, but if we can get that cleared during the plenary. DLM: Yeah, why don't we come back with a problem statement? We need to move on with the next topic. We will bring the problem statement back and have a discussion later on. I will capture the queue and we can move onto the next topic. diff --git a/meetings/2025-07/july-31.md b/meetings/2025-07/july-31.md index e0f7700..ce40c00 100644 --- a/meetings/2025-07/july-31.md +++ b/meetings/2025-07/july-31.md @@ -13,12 +13,24 @@ Day Four—31 July 2025 | Istvan Sebestyen | IS | Ecma | | Jordan Harband | JHD | HeroDevs | | Zbyszek Tenerowicz | ZTZ | Consensys | -| Chris de Almeida | CDA | IBM | | Daniel Rosenwasser | DRR | Microsoft | | Eemeli Aro | EAO | Mozilla | | Samina Husain | SHN | Ecma International | | Aki Rose Braun | AKI | Ecma International | | Olivier Flückiger | OFR | Google | +| Chris de Almeida | CDA | IBM | +| Gus Caplan | GCL | Deno | +| Justin Ridgewell | JRL | Google | +| Kevin Gibbons | KG | F5 | +| Keith Miller | KM | Apple Inc. | +| Mathieu Hofman | MAH | Agoric | +| Michael Ficarra | MF | F5 | +| Mark S. Miller | MM | Agoric | +| Nicolò Ribaudo | NRO | Igalia | +| Rob Palmer | RPR | Bloomberg | +| Richard Gibson | RGN | Agoric | +| Stephen Hicks | SHS | Google | +| Ujjwal Sharma | USA | Igalia | ## Opening & Welcome