feat: Add cloud storage integration with S3 upload/download and snapshot management #107
Conversation
🟡 Heimdall Review Status
force-pushed from e8ad7c4 to b7a3663
force-pushed from e11628b to 715d03f
force-pushed from 92cab4c to c758c68
This looks awesome! I love all the new UI elements and how you integrated machine type, etc. Is there a way to make this agnostic to S3?
It looks like we're tying the report generation to S3 now, but users may want to use other cloud storage, or something like GitHub Actions.
- export: downloads and uploads to S3 can be done using `aws s3 cp` outside of the benchmark tool
- serving from S3: this is just a static file server pulling from S3, so we could just use an existing solution to serve static files
One philosophy I think we should adopt here is: do one thing and do it well. I think adding S3 download/upload/serving is nice to have, but I'm not sure it makes sense to include it in the external repo, since other teams that use it may not use S3.
The point of serving data from static files instead of a backend is that it makes deployment super simple. All of this work seems compatible with the static file interface, so adding S3 seems slightly unnecessary.
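For illustration, a minimal sketch of the workflow this comment describes, assuming an S3 bucket already exists; the bucket name, paths, and port are placeholders:

```sh
# Export: copy the generated report output to S3 outside of the benchmark tool.
aws s3 cp ./output/ s3://my-benchmark-bucket/reports/ --recursive

# Serve: pull the static files back down and serve them with any static file server.
aws s3 sync s3://my-benchmark-bucket/reports/ ./public/
python3 -m http.server 8080 --directory ./public
```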
scripts/setup-initial-snapshot.sh
Outdated
echo "Copying reth snapshot to $DESTINATION" | ||
|
||
mkdir -p "$DESTINATION" | ||
./agent_init --gbs-network=$NETWORK --gbs-config-name=base-reth-cbnode --gbs-directory=$DESTINATION |
Where is this script defined? I don't think this should be in the external repo, right?
Ah yeah, I need to update this; it's using snapio.
}

// GetVersion returns the version of the Reth client
func (r *RethClient) GetVersion(ctx context.Context) (string, error) {
good idea!
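As a point of reference, a hedged sketch: assuming GetVersion wraps the standard web3_clientVersion JSON-RPC method, the same value can be checked by hand against a running node (the endpoint is a placeholder):

```sh
# Query the client version string directly over JSON-RPC (endpoint is a placeholder).
curl -s -X POST http://localhost:8545 \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","method":"web3_clientVersion","params":[],"id":1}'
```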
}

// Fallback to optimized rsync
if err := b.copyWithOptimizedRsync(initialSnapshotPath, testSnapshotPath); err == nil {
Does this support ZFS? Previously the copying was handled by the setup script, and the nice thing about that was that we could use `zfs clone` instead of rsync.
Unfortunately, during testing I couldn't use ZFS in the k8s architecture without creating new daemons, etc. in compute.
Ah yeah, I just wanted to confirm that I can still use ZFS on my test machine with this new code.
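For context, a sketch contrasting the two copy strategies discussed in this thread; dataset names and paths are placeholders:

```sh
# ZFS path: snapshot the initial reth data and clone it, which is near-instant.
zfs snapshot tank/reth-initial@bench
zfs clone tank/reth-initial@bench tank/reth-test

# Fallback path when ZFS is not available: copy the data with rsync.
rsync -a --delete /data/reth-initial/ /data/reth-test/
```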
Have all POW to contract this work
🚀 Core Features Added:
- AWS S3 Cloud Storage Integration
  - `export-to-cloud` command to upload entire output directories to S3 (see the usage sketch after this list)
  - `--enable-s3` flag
- Import/Export System for Benchmark Results
  - `import-runs` command to import benchmark results from local files or remote URLs
- Enhanced Snapshot Management System
- Machine Information Collection
- Report Backend API Improvements
- Frontend Enhancements
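A rough sketch of how these commands might be invoked, based only on the names listed above; the binary name, bucket, paths, and argument order are assumptions, not taken from the PR:

```sh
# Hypothetical CLI usage; only the subcommand and flag names come from the PR description.
# Upload an output directory to S3 (bucket and path are placeholders).
benchmark export-to-cloud ./output s3://benchmark-results/2024-06-01 --enable-s3

# Import benchmark results from a local file or a remote URL.
benchmark import-runs ./local-results/run.json
benchmark import-runs https://example.com/benchmarks/run.json
```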
🔧 Technical Improvements: