refactor: extract parse_size and format_size to walrus-utils#3093

Open
wbbradley wants to merge 4 commits into wbbradley/profiling-encoding from
wbbradley/resource-management

Conversation

@wbbradley
Contributor

No description provided.

Add heap peak tracking via peakmem-alloc and RSS peak tracking via
libc::getrusage() to the profiling binary. Each iteration now reports
peak_heap, peak_rss, and the heap expansion ratio. Multi-iteration runs
report max_peak_heap in the summary.
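A minimal sketch of the peak-RSS side of this, assuming Linux where `getrusage` reports `ru_maxrss` in kilobytes. The struct layout and helper name here are illustrative; the PR's binary uses the `libc` crate rather than a hand-declared extern:

```rust
// Sketch: read the process's peak resident set size via getrusage(RUSAGE_SELF),
// mirroring what the profiling binary reports as peak_rss.
// Assumes Linux x86-64, where ru_maxrss is in kilobytes.
#[repr(C)]
struct Timeval {
    tv_sec: i64,
    tv_usec: i64,
}

#[repr(C)]
struct Rusage {
    ru_utime: Timeval,
    ru_stime: Timeval,
    ru_maxrss: i64,   // peak RSS, in KiB on Linux
    _rest: [i64; 13], // remaining ru_* fields, unused here
}

const RUSAGE_SELF: i32 = 0;

extern "C" {
    fn getrusage(who: i32, usage: *mut Rusage) -> i32;
}

fn peak_rss_bytes() -> i64 {
    let mut ru = std::mem::MaybeUninit::<Rusage>::uninit();
    let rc = unsafe { getrusage(RUSAGE_SELF, ru.as_mut_ptr()) };
    assert_eq!(rc, 0, "getrusage failed");
    unsafe { ru.assume_init() }.ru_maxrss * 1024
}

fn main() {
    // Touch some memory so the peak is visibly nonzero.
    let buf = vec![1u8; 4 << 20];
    println!("peak_rss = {} bytes (buf len {})", peak_rss_bytes(), buf.len());
}
```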
Add --concurrent-blobs N flag that encodes N blobs simultaneously
using std::thread::scope, simulating multi-blob uploads. Reports
per-blob latency, total wall time, and peak memory with per-blob
expansion ratio for direct comparison with single-blob runs.
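The `--concurrent-blobs` pattern described above can be sketched as follows. `encode_blob` is a hypothetical stand-in for the real encoding call, not the crate's API; the scoped-thread and per-blob timing structure is what the commit describes:

```rust
use std::time::{Duration, Instant};

// Placeholder "encoding": cheap work over the blob, standing in for the
// real (expensive) blob encoder.
fn encode_blob(data: &[u8]) -> usize {
    data.iter().map(|&b| b as usize).sum()
}

// Encode all blobs simultaneously on scoped OS threads, returning each
// blob's latency plus the total wall time, as the profiling run reports.
fn encode_concurrently(blobs: &[Vec<u8>]) -> (Vec<Duration>, Duration) {
    let start = Instant::now();
    let latencies = std::thread::scope(|s| {
        let handles: Vec<_> = blobs
            .iter()
            .map(|blob| {
                s.spawn(move || {
                    let t0 = Instant::now();
                    let _ = encode_blob(blob);
                    t0.elapsed()
                })
            })
            .collect();
        handles.into_iter().map(|h| h.join().unwrap()).collect()
    });
    (latencies, start.elapsed())
}

fn main() {
    let blobs: Vec<Vec<u8>> = (0..4).map(|i| vec![i as u8; 1 << 16]).collect();
    let (latencies, wall) = encode_concurrently(&blobs);
    println!("per-blob: {latencies:?}, total wall: {wall:?}");
}
```

`std::thread::scope` (stable since Rust 1.63) lets the spawned threads borrow `blobs` directly, which is why it fits this simulation better than detached threads.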
Replace rayon's into_par_iter() with bounded concurrency using OS
threads for the outer blob encoding loop. This prevents peak memory
from scaling linearly with the number of concurrent blobs, since
each encoding already saturates the rayon thread pool internally.

Uses std::thread::scope with batched OS threads when concurrency > 1,
avoiding rayon pool deadlocks. Adds max_concurrent_blob_encodings
config field (defaults to 1 for sequential encoding).
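The bounded-concurrency shape from the last two commits can be sketched like this. The batching strategy and the default of 1 come from the description above; `encode_blob`, `encode_bounded`, and the chunked batching are illustrative, not the crate's actual implementation:

```rust
// Stand-in for the real encoder; internally the real one saturates the
// rayon pool, which is why the outer loop uses plain OS threads.
fn encode_blob(data: &[u8]) -> usize {
    data.len()
}

// Process blobs in batches of at most `max_concurrent` scoped OS threads,
// so peak memory scales with the batch size rather than the total blob
// count, and the rayon pool is never nested (avoiding pool deadlocks).
fn encode_bounded(blobs: &[Vec<u8>], max_concurrent: usize) -> Vec<usize> {
    let mut results = Vec::with_capacity(blobs.len());
    for batch in blobs.chunks(max_concurrent.max(1)) {
        let batch_results: Vec<_> = std::thread::scope(|s| {
            let handles: Vec<_> = batch
                .iter()
                .map(|b| s.spawn(move || encode_blob(b)))
                .collect();
            handles.into_iter().map(|h| h.join().unwrap()).collect()
        });
        results.extend(batch_results);
    }
    results
}

fn main() {
    let blobs: Vec<Vec<u8>> = (0..5).map(|i| vec![0u8; (i + 1) * 10]).collect();
    // With max_concurrent_blob_encodings at its default of 1, this degrades
    // to the original sequential loop.
    assert_eq!(encode_bounded(&blobs, 2), vec![10, 20, 30, 40, 50]);
}
```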
Move human-readable size parsing and formatting utilities from
profile_encoding example into walrus_utils::size module for reuse
across benchmarks and examples.
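A sketch of the shape such utilities typically take: parse a human-readable size like `"4MiB"` into bytes and format bytes back. The accepted suffixes, rounding, and exact signatures of the real `walrus_utils::size` module may differ; this is an assumption-laden illustration, not the moved code:

```rust
// Parse "4MiB"-style strings into a byte count (binary units assumed).
fn parse_size(s: &str) -> Option<u64> {
    let s = s.trim();
    let idx = s.find(|c: char| !c.is_ascii_digit()).unwrap_or(s.len());
    let (num, suffix) = s.split_at(idx);
    let num: u64 = num.parse().ok()?;
    let mult = match suffix.trim() {
        "" | "B" => 1,
        "KiB" => 1 << 10,
        "MiB" => 1 << 20,
        "GiB" => 1 << 30,
        _ => return None,
    };
    Some(num * mult)
}

// Format a byte count with the largest binary unit that fits.
fn format_size(bytes: u64) -> String {
    const UNITS: [(&str, u64); 3] = [("GiB", 1 << 30), ("MiB", 1 << 20), ("KiB", 1 << 10)];
    for (unit, size) in UNITS {
        if bytes >= size {
            return format!("{:.1}{unit}", bytes as f64 / size as f64);
        }
    }
    format!("{bytes}B")
}

fn main() {
    assert_eq!(parse_size("4MiB"), Some(4 << 20));
    assert_eq!(format_size(1536), "1.5KiB");
}
```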