alternate implementation for #2801 #2888

Closed · wants to merge 41 commits

Conversation

@dai-shi (Member) commented Dec 23, 2024

alternate implementation for #2801

Goals:

  • Do not expose atomState
  • Do not expose internal listeners
  • Implement onInit as minimally as possible

Remaining:

  • Two FIXME comments in effect.test.ts
  • I'm not sure whether atomSyncEffect is missing some capabilities

vercel bot commented Dec 23, 2024

jotai deployment: ✅ Ready (updated Jan 1, 2025 5:28am UTC)

@dmaskasky (Collaborator) commented Dec 30, 2024

I would like to propose a few changes here to guarantee that the effect runs after recomputeDependents and before listeners and mount hooks. I'm not 100% sure about any of this yet. We may also need new tests specifically to cover the behavior change to synchronous operation.

  1. We may need a scheduleEffect function. This function adds a listener somewhere that the store will enqueue into the batch.
  2. One possible place to store it is the internalAtom's atomState.m.l (a sketch combining items 1 and 2 follows the snippet below).
  3. To access atomState, here is one way to get it:
function getAtomState(store: Store, atom: AnyAtom): AtomState {
  let atomState: AtomState
  try {
    // unstable_derive hands us the store's internal getAtomState;
    // capture the state we want, then bail out of building the derived store
    store.unstable_derive((getAtomState) => {
      atomState = getAtomState(atom)!
      return null as any
    })
  } catch {
    // expected: returning null makes unstable_derive throw
  }
  return atomState!
}
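For illustration only, a hypothetical scheduleEffect could combine getAtomState with item 2. The name, the runEffect parameter, and the assumption that atomState.m is the mounted state exposing a listener set l (which the store enqueues into the batch) are all placeholders, not part of this PR:
// Hypothetical sketch: register the effect runner as a mount listener
// so the store enqueues it into the batch.
function scheduleEffect(
  store: Store,
  internalAtom: AnyAtom,
  runEffect: () => void,
): () => void {
  const atomState = getAtomState(store, internalAtom)
  // only meaningful while the internal atom is mounted
  atomState.m?.l.add(runEffect)
  return () => {
    atomState.m?.l.delete(runEffect)
  }
}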
  4. We may also need a synchronous way to determine whether an atomState has been mounted. One way is to add an ml property to atomState, fired synchronously after property m is set or deleted. There may be another way to do this; this is just one idea.
const originalMl = ref.atomState.ml
ref.atomState.ml = () => {
  if (ref.atomState.m) {
    // handle on mount
  } else {
    // handle on unmount
  }
  originalMl?.()
}
  5. The first subscribe case is a little trickier. Currently, subscribeAtom calls readAtomState before setting property m, but we actually need the reverse. So we need a way to trigger a refresh that calls internalAtom once more. The tricky part is that we need to write in the same batch.
  6. setSelf can be stolen from the first read of internalAtom. Calling setSelf synchronously after readAtomState, inside the synchronous mount callback in subscribeAtom, will be fine. We configure internalAtom with a write function that increments refreshAtom, so calling setSelf causes internalAtom to recompute inside recomputeDependents (a rough sketch follows the scheduleListeners snippet below).
  7. We also need to adjust how atom listeners are scheduled into the batch. We set the priority to 'H' and add one more high-priority scheduler. This is because the function that enqueues listeners into the batch could run before recomputeDependents within the high-priority phase. If it instead enqueues another high-priority function, we can guarantee that function runs at the end of the high phase. This lets us add listeners at high, medium, or low priority with a guarantee that they run after recomputeDependents.
const scheduleListeners = () => {
  for (const [listener, priority] of atomState.l) {
    addBatchFunc(batch, priority, listener)
  }
}
// note the extra high priority function
addBatchFunc(batch, 'H', () => addBatchFunc(batch, 'H', scheduleListeners))
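Going back to the setSelf / refreshAtom idea in item 6, a rough sketch of that wiring (illustration only; refreshAtom and internalAtom follow the jotai-effect naming, and ref is a placeholder for wherever the effect keeps its mutable state):
import { atom } from 'jotai/vanilla'

const refreshAtom = atom(0)

// placeholder for the effect's mutable state
const ref: { setSelf?: () => void } = {}

const internalAtom = atom(
  (get, { setSelf }) => {
    get(refreshAtom)
    // "steal" setSelf on the first read so the synchronous mount callback
    // inside subscribeAtom can trigger a refresh later
    ref.setSelf = setSelf
    // ...the effect itself would be scheduled from here...
  },
  (_get, set) => {
    // writing to self increments refreshAtom, so internalAtom recomputes
    // inside recomputeDependents
    set(refreshAtom, (c) => c + 1)
  },
)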
  8. The internal atom's read function now needs to return something that never compares equal to its previous value. If read gets called, the effect should be enqueued, so the epoch must never match. We can either return an always-incrementing number, or return NaN, which always fails the v === v check. This ensures registerBatchAtom is always called when atomEffect is read.
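A minimal illustration of both options (atom names are placeholders, and the write function from the sketch above is omitted):
import { atom } from 'jotai/vanilla'

// option 1: an always-incrementing number, so the epoch can never match
let counter = 0
const alwaysFreshAtom = atom(() => ++counter)

// option 2: NaN, since NaN === NaN is false the v === v check always fails
const nanAtom = atom(() => NaN)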
  9. Cleanup should also be guaranteed to run synchronously after recomputeDependents and before atom listeners and mount hooks. To ensure cleanup happens when the internal atom unmounts, we may need to move the on-unmount logic into a listener that is scheduled in a similar manner. The ml property should help trigger the scheduleEffect function.

@dai-shi (Member, Author) commented Dec 30, 2024

Thanks for your review!
I think I need to understand the problem better and build my mental model.
I have two questions:

  • Can I forget about recursion support in this PR?
  • Can you create a failing test for "to guarantee that the effect runs after recomputeDependents and before listeners and mount hooks"?

@dai-shi (Member, Author) commented Jan 4, 2025

We'll work on a new PR.

@dai-shi dai-shi closed this Jan 4, 2025
@dai-shi dai-shi deleted the partial-sync-effect-3 branch January 6, 2025 03:45