
Excessive file writes from account state updates — lock file thrashing #436

@mrm007

Description

Summary

The plugin writes antigravity-accounts.json far too frequently during normal operation. Each write acquires a proper-lockfile lock (mkdir/rmdir cycle), writes a temp file, and renames it — producing 4+ FS events per save. During active use with rate limit cycling across multiple accounts/models, this fires dozens of times per minute, thrashing the config directory.

This caused a downstream issue where an editor (Zed) watching ~/.config/opencode/ consumed 50GB of memory from the unbounded FS event stream and crashed the system.

Environment

  • Plugin version: 1.5.0 (via opencode-antigravity-auth)
  • proper-lockfile: 4.1.2
  • OS: macOS 15.7.3 (Apple M4, 16GB RAM)
  • Config dir: ~/.config/opencode/

Root Cause

saveAccounts() in storage.js is called on every state mutation:

  • Account rotation (rate limit → cycle to next account)
  • Rate limit timestamp update (rateLimitResetTimes per model variant)
  • Quota cache update (cachedQuota, cachedQuotaUpdatedAt)
  • Fingerprint regeneration
  • Cooldown state changes

Each call does the full lock → read → merge → write-temp → rename → unlock cycle:

mkdir  antigravity-accounts.json.lock    // acquire
write  antigravity-accounts.json.XXXX.tmp
rename antigravity-accounts.json.XXXX.tmp → antigravity-accounts.json
rmdir  antigravity-accounts.json.lock    // release

In addition, proper-lockfile runs a heartbeat while the lock is held, touching the lockfile (utimes) every 5s, which generates yet more FS events for watchers.
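The four operations above can be reproduced with plain `fs` calls to see exactly what a watcher observes per save. This is a sketch of the pattern, not the plugin's actual code; the temp-file suffix and paths are illustrative:

```typescript
import * as fs from "node:fs";
import * as os from "node:os";
import * as path from "node:path";

const dir = fs.mkdtempSync(path.join(os.tmpdir(), "acct-"));
const target = path.join(dir, "antigravity-accounts.json");
const lock = target + ".lock";       // proper-lockfile uses a directory as the lock
const tmp = target + ".1234.tmp";    // illustrative temp-file name

fs.mkdirSync(lock);                                   // 1. acquire (mkdir)
fs.writeFileSync(tmp, JSON.stringify({ ok: true }));  // 2. write temp file
fs.renameSync(tmp, target);                           // 3. atomic rename over target
fs.rmdirSync(lock);                                   // 4. release (rmdir)
```

Four distinct events in the watched directory for a single logical save, before counting heartbeat touches.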

With 2 accounts, 5+ model variants each, and active rate limit cycling, the rateLimitResetTimes and cachedQuota fields change constantly — each change triggers a full save.

From the accounts file during a typical session:

"rateLimitResetTimes": {
  "gemini-antigravity:gemini-3-flash-preview": ...,
  "gemini-cli:gemini-3-pro-preview": ...,
  "gemini-cli:gemini-3-flash-preview": ...,
  "gemini-antigravity:antigravity-gemini-3-pro": ...,
  "gemini-cli:antigravity-gemini-3-pro": ...
}

Because each of these entries updates independently, rate limit state alone can trigger 5+ full saves for a single account.
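The multiplication effect is easy to see in a small simulation of the assumed current behaviour, where every per-variant update calls the save path (`saveAccounts` here is a stand-in counter, not the plugin's function):

```typescript
const rateLimitResetTimes: Record<string, number> = {};
let saves = 0;

// Stand-in for the real saveAccounts(): in the plugin, each call is a full
// lock → write-temp → rename → unlock cycle.
function saveAccounts(): void { saves++; }

function setResetTime(key: string, resetAt: number): void {
  rateLimitResetTimes[key] = resetAt;
  saveAccounts(); // assumed current behaviour: save on every mutation
}

const variants = [
  "gemini-antigravity:gemini-3-flash-preview",
  "gemini-cli:gemini-3-pro-preview",
  "gemini-cli:gemini-3-flash-preview",
];
for (const v of variants) setResetTime(v, Date.now() + 60_000);

console.log(saves); // → 3: one full save per variant update
```

Scale that to 2 accounts with 5+ variants each, cycling under rate limits, and the write rate climbs quickly.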

Suggested Fix

Debounce writes. Instead of writing on every state mutation:

  1. Mark state as dirty on mutation
  2. Flush to disk on a timer (e.g., every 2-5 seconds) or on process exit
  3. Coalesce multiple mutations into a single write

Something like:

let dirty = false;
let flushTimer: NodeJS.Timeout | null = null;

function markDirty() {
  dirty = true;
  // Only schedule a flush if one isn't already pending; this is what
  // coalesces a burst of mutations into a single write.
  if (!flushTimer) {
    flushTimer = setTimeout(flush, 2000);
  }
}

async function flush() {
  // Clear the timer first so mutations arriving during the write
  // schedule a fresh flush rather than being lost.
  flushTimer = null;
  if (!dirty) return;
  dirty = false;
  await saveAccounts(currentState);
}

// Also flush on process exit (already have onExit handler via proper-lockfile)
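One caveat on the exit flush: Node's 'exit' event does not wait for async work, so the final flush has to be synchronous. A sketch of that piece, assuming a `flushSync` helper and an illustrative file path (neither exists in the plugin):

```typescript
import * as fs from "node:fs";
import * as os from "node:os";
import * as path from "node:path";

let dirty = true;
const state = { accounts: [] as string[] };
const file = path.join(
  fs.mkdtempSync(path.join(os.tmpdir(), "flush-")), // illustrative location
  "antigravity-accounts.json",
);

function flushSync(): void {
  if (!dirty) return;
  dirty = false;
  // Synchronous write: async work started inside an 'exit' handler
  // never completes before the process dies.
  fs.writeFileSync(file, JSON.stringify(state));
}

process.on("exit", flushSync);                 // final flush on normal exit
process.on("SIGINT", () => process.exit(130)); // so Ctrl-C still runs 'exit' hooks
```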

This would reduce dozens of writes per minute to ~1 write every 2 seconds at most, eliminating the FS event storm.
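A quick way to sanity-check the coalescing: simulate a burst of mutations and count writes. `saveAccounts` is again a stand-in counter, and the delay is shortened to 50 ms for the demo:

```typescript
let writes = 0;
let dirty = false;
let flushTimer: ReturnType<typeof setTimeout> | null = null;

// Stand-in counter for the real disk write.
function saveAccounts(): void { writes++; }

function markDirty(): void {
  dirty = true;
  if (!flushTimer) flushTimer = setTimeout(flush, 50); // 50 ms for the demo
}

function flush(): void {
  flushTimer = null;
  if (!dirty) return;
  dirty = false;
  saveAccounts();
}

// A burst of 5 mutations (e.g. 5 model-variant timestamp updates)
// schedules exactly one pending flush.
for (let i = 0; i < 5; i++) markDirty();
```

After the timer fires, `writes` is 1: five mutations, one disk write.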

Impact

  • Unnecessary disk I/O and CPU from lock acquisition overhead
  • FS event storms that overwhelm file watchers in editors/tools
  • In my case: Zed editor watching ~/.config/opencode/ hit an unbounded memory leak from the event stream, consumed 50GB (16GB RAM + 34GB swap), filled the disk, crashed WindowServer, and required a hard reboot

Labels: area/config (Configuration files, setup), bug (Something isn't working)
