# feat: Enabling automation of experiments running v2.0 #469

**Status:** Open · wants to merge 24 commits into `main` from `automated-evaluation`

### Commits (24)

| Commit | Message | Author | Date |
| --- | --- | --- | --- |
| 8726ab8 | Revising to enable automation of experiments running v1.0 | xisen-w | Nov 4, 2024 |
| b44bef5 | Any new updates | xisen-w | Nov 15, 2024 |
| c100876 | Revising to enable automation of experiments running v1.0 | xisen-w | Nov 4, 2024 |
| 18370d4 | Any new updates | xisen-w | Nov 15, 2024 |
| 21a99d2 | Add template | you-n-g | Nov 15, 2024 |
| 86ae0b2 | Stoping tracking additional env | xisen-w | Nov 20, 2024 |
| f94dbff | Merge branch 'automated-evaluation' of https://github.com/microsoft/R… | xisen-w | Nov 20, 2024 |
| 66ffd6d | Uploading relevant envs | xisen-w | Nov 20, 2024 |
| 0ef80a5 | Adding tests | xisen-w | Nov 20, 2024 |
| 907d980 | Updating | xisen-w | Nov 20, 2024 |
| 51388d1 | Updated collect.py to extract result from trace | xisen-w | Nov 23, 2024 |
| af6220e | Update .gitignore to remove the unecessary ones | xisen-w | Nov 23, 2024 |
| 54c3c6d | "Remove unnecessary files" | xisen-w | Nov 23, 2024 |
| 78708e4 | Merge branch 'automated-evaluation' of https://github.com/microsoft/R… | xisen-w | Nov 25, 2024 |
| 3f131f3 | Merge branch 'main' into automated-evaluation | xisen-w | Nov 25, 2024 |
| 38bb9e6 | Updated to enable automatic collection of experiment result information | xisen-w | Nov 25, 2024 |
| 10b0053 | Updating the env files & Upading test_system file | xisen-w | Nov 25, 2024 |
| 238f492 | Updated relevant env for better testing | xisen-w | Nov 25, 2024 |
| 68ca63a | Updated README.md | xisen-w | Nov 25, 2024 |
| 8b18fad | reverting gitignore back | xisen-w | Nov 25, 2024 |
| 2395dc5 | Updates | xisen-w | Dec 3, 2024 |
| b7cc98e | README update | xisen-w | Dec 3, 2024 |
| 0b5a09d | Updates on env README | xisen-w | Dec 3, 2024 |
| 24cd0c2 | Updating collect.py | xisen-w | Dec 3, 2024 |
### Files changed
**scripts/exp/ablation/README.md** (9 additions, 0 deletions)

# Introduction

| name | .env | desc |
| -- | -- | -- |
| full | full.env | enable all features |
| minicase | minicase.env | enable minicase |

> **Contributor:** this file is not complete
**scripts/exp/ablation/env/basic.env** (3 additions, 0 deletions)

```
KG_IF_USING_VECTOR_RAG=False
KG_IF_USING_GRAPH_RAG=False
KG_IF_ACTION_CHOOSING_BASED_ON_UCB=False
```
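
These switches are plain `True`/`False` environment variables. A minimal Python sketch of how a consumer could read one (a hypothetical helper, not RD-Agent's actual settings loader):

```
import os

def env_flag(name: str, default: bool = False) -> bool:
    """Interpret a True/False environment variable such as the KG_IF_* switches."""
    value = os.getenv(name)
    if value is None:
        return default
    return value.strip().lower() in ("true", "1", "yes")

# Example: gate the vector-RAG component on the flag set in basic.env.
if env_flag("KG_IF_USING_VECTOR_RAG"):
    print("vector RAG enabled")
else:
    print("vector RAG disabled")
```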
**scripts/exp/ablation/env/max.env** (3 additions, 0 deletions)

```
KG_IF_USING_VECTOR_RAG=True
KG_IF_USING_GRAPH_RAG=True
KG_IF_ACTION_CHOOSING_BASED_ON_UCB=True
```

> **Collaborator:** The path to the knowledge base needs to be added here.

> **Collaborator:** `KG_IF_USING_VECTOR_RAG` should be set to False here.
**scripts/exp/ablation/env/mini-case.env** (3 additions, 0 deletions)

```
KG_IF_USING_VECTOR_RAG=True
KG_IF_USING_GRAPH_RAG=False
KG_IF_ACTION_CHOOSING_BASED_ON_UCB=True
```
**scripts/exp/ablation/env/pro.env** (3 additions, 0 deletions)

```
KG_IF_USING_VECTOR_RAG=True
KG_IF_USING_GRAPH_RAG=False
KG_IF_ACTION_CHOOSING_BASED_ON_UCB=True
```

> **Collaborator:** The path to the knowledge base also needs to be added here.
**scripts/exp/tools/README.md** (148 additions, 0 deletions)
### Tools Directory

This directory provides scripts to run experiments with different environment configurations, collect results, and demonstrate usage through an example script.

### Directory Structure

```
scripts/exp/tools/
├── run_envs.sh     # Script for running experiments
├── collect.py      # Results collection and summary
├── test_system.sh  # Usage script for the rdagent kaggle loop
└── README.md       # This documentation
```

### Tools Overview

1. run_envs.sh: Executes experiments with different environment configurations in parallel.
2. collect.py: Collects and summarizes experiment results into a single file.
3. test_system.sh: Demonstrates how to use the above tools together for experiment execution and result collection [for the rdagent kaggle loop].

### Getting Started

**Prerequisites**

1. Ensure the scripts have execution permissions:

```
chmod +x scripts/exp/tools/run_envs.sh
chmod +x scripts/exp/tools/test_system.sh
```

> **Contributor:** You can commit the file permissions to GitHub directly instead of writing them in README.md.

2. Place your .env files in the desired directory for environment configurations.
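
For example, the ablation configs added in this PR form such a directory:

```
scripts/exp/ablation/env/
├── basic.env
├── max.env
├── mini-case.env
└── pro.env
```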

### Usage

#### 1. Running Experiments with Different Environments

The run_envs.sh script allows running a command with multiple environment configurations in parallel.

**Command Syntax**

```
./run_envs.sh -d <dir_to_*.envfiles> -j <number_of_parallel_processes> -- <command>
```

**Example Usage**

Basic example:

```
./run_envs.sh -d env_files -j 1 -- echo "Hello"
```

Practical example:

```
dotenv run -- ./run_envs.sh -d RD-Agent/scripts/exp/ablation/env -j 1 -- python RD-Agent/rdagent/app/kaggle/loop.py
```

**Explanation:**

| Option | Description |
| --- | --- |
| `-d` | Specifies the directory containing .env files. |
| `-j` | Number of parallel processes to run (e.g., 1 for sequential execution). |
| `--` | Separates script options from the command to execute. |
| `<command>` | The command to execute with the environment variables loaded. |
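
For instance, pointing `-d` at the four ablation env files above with `-j 2` announces the files it found and then launches two runs at a time (output sketched from the echo and find lines in run_envs.sh; the file order may vary):

```
$ ./run_envs.sh -d scripts/exp/ablation/env -j 2 -- python rdagent/app/kaggle/loop.py
Running experiments with following env files:
scripts/exp/ablation/env/basic.env
scripts/exp/ablation/env/max.env
scripts/exp/ablation/env/mini-case.env
scripts/exp/ablation/env/pro.env
```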


#### 2. Collecting Results

The collect.py script processes logs and generates a summary JSON file.

**Command Syntax**

```
python collect.py --log_path <path_to_logs> --output_name <summary_filename>
```

**Example Usage**

Collect results from logs:

```
python collect.py --log_path logs --output_name summary.json
```

**Explanation:**

| Option | Description |
| --- | --- |
| `--log_path` | Required. Specifies the directory containing experiment logs. |
| `--output_name` | Optional. The name of the output summary file (default: summary.json). |

Because the output path is resolved inside the log directory, the example above writes `logs/summary.json`.

#### 3. Example Workflow [for rdagent kaggle loop]

Use the test_system.sh script to demonstrate a complete workflow.

**Steps:**

1. Ensure the scripts are executable:

```
chmod +x scripts/exp/tools/run_envs.sh
chmod +x scripts/exp/tools/test_system.sh
```

> **Contributor:** same as above

2. Run the test system:

```
./scripts/exp/tools/test_system.sh
```

   This will:
   1. Load environment configurations from the .env files.
   2. Execute experiments using those configurations.

3. Find your logs in the logs directory.

4. Use the collect.py script to summarize results:

```
python collect.py --log_path logs --output_name summary.json
```
> **Contributor:** I think it would be better to give an example of the collected results.
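For illustration, a summary produced by `generate_summary` in `collect.py` has this shape (the competition name and score below are made-up placeholders, not real results):

```
{
    "configs": {},
    "best_result": {
        "competition_name": "spaceship-titanic",
        "score": 0.8052
    },
    "timestamp": "20241203_153000"
}
```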

### Troubleshooting

#### Permission Denied

If you encounter a PermissionError when running scripts:

1. Ensure the scripts have execution permissions:

```
chmod +x ./scripts/exp/tools/run_envs.sh
chmod +x ./scripts/exp/tools/test_system.sh
```

2. Verify file ownership:

```
ls -l ./scripts/exp/tools/
```

### Notes
* Scale parallel processes as needed using the `-j` parameter.
* Avoid errors by ensuring .env files are correctly formatted.
* Modify test_system.sh to meet your project's specific needs.
* Add other metrics of interest to collect.py so they are summarized automatically.

> **Contributor:** When should we modify the script?

For further assistance, refer to the comments within the scripts or reach out to the development team.
**scripts/exp/tools/collect.py** (83 additions, 0 deletions)
```
import os
import json
import argparse
from pathlib import Path
from datetime import datetime
from rdagent.log.storage import FileStorage
from rdagent.scenarios.kaggle.kaggle_crawler import (
    leaderboard_scores,
)

def collect_results(log_path) -> list[dict]:
    summary = []
    log_storage = FileStorage(Path(log_path))
    evaluation_metric_direction = None
    # Extract score from trace using the same approach as the UI
    for msg in log_storage.iter_msg():
        if "scenario" in msg.tag:
            competition_name = msg.content.competition  # Find the competition name
            leaderboard = leaderboard_scores(competition_name)
            evaluation_metric_direction = float(leaderboard[0]) > float(leaderboard[-1])

        if "runner result" in msg.tag:
            if msg.content.result is not None:
                score = msg.content.result
                summary.append({
                    "competition_name": competition_name,
                    "score": score,
                    "workspace": msg.content.experiment_workspace.workspace_path,
                    "evaluation_metric_direction": evaluation_metric_direction,
                })
    return summary

def generate_summary(results, output_path):
    summary = {
        "configs": {},  # TODO: add config?
        "best_result": {"competition_name": None, "score": None},
        "timestamp": datetime.now().strftime("%Y%m%d_%H%M%S"),
        # Add other metrics that we want to track in the future (e.g. is there a successive increase?)
    }
    for result in results:
        # Update the best result.
        # If evaluation_metric_direction is True, a higher score is better.
        if result["evaluation_metric_direction"]:
            if (result["score"] is not None and
                    (summary["best_result"]["score"] is None or
                     result["score"] > summary["best_result"]["score"])):
                summary["best_result"].update({
                    "score": result["score"],
                    "competition_name": result["competition_name"],
                })
        else:
            if (result["score"] is not None and
                    (summary["best_result"]["score"] is None or
                     result["score"] < summary["best_result"]["score"])):
                summary["best_result"].update({
                    "score": result["score"],
                    "competition_name": result["competition_name"],
                })

    with open(output_path, "w") as f:
        json.dump(summary, f, indent=4)

def parse_args():
    parser = argparse.ArgumentParser(description='Collect and summarize experiment results')
    parser.add_argument('--log_path', type=str, required=True,
                        help='Path to the log directory containing experiment results')
    parser.add_argument('--output_name', type=str, default='summary.json',
                        help='Name of the output summary file (default: summary.json)')
    return parser.parse_args()

if __name__ == "__main__":
    args = parse_args()
    log_path = Path(args.log_path)

    # Verify the log path exists
    if not log_path.exists():
        raise FileNotFoundError(f"Log path does not exist: {log_path}")

    results = collect_results(log_path)
    output_path = log_path / args.output_name
    generate_summary(results, output_path)
    print("Summary generated successfully at", output_path)
```

> **Contributor:** Will the env name (e.g. basic, max, pro) be displayed in the collected results?
**scripts/exp/tools/run_envs.sh** (51 additions, 0 deletions)
```
#!/bin/sh
cat << "EOF" > /dev/null
Given a directory with *.env files. Run each one.

usage for example:

1) directly run command without extra shared envs
./run_envs.sh -d <dir_to_*.envfiles> -j <number of parallel process> -- <command>

2) load shared envs `.env` before running command with different envs.
dotenv run -- ./run_envs.sh -d <dir_to_*.envfiles> -j <number of parallel process> -- <command>

EOF

# Function to display usage
usage() {
    echo "Usage: $0 -d <dir_to_*.envfiles> -j <number of parallel process> -- <command>"
    exit 1
}

# Parse command line arguments
while getopts "d:j:" opt; do
    case $opt in
        d) DIR=$OPTARG ;;
        j) JOBS=$OPTARG ;;
        *) usage ;;
    esac
done

# Shift to get the command
shift $((OPTIND - 1))

# Check if directory and jobs are set
if [ -z "$DIR" ] || [ -z "$JOBS" ] || [ $# -eq 0 ]; then
    usage
fi

COMMAND="$@"

# Before running commands
echo "Running experiments with following env files:"
find "$DIR" -name "*.env" -exec echo "{}" \;

# Export and run each .env file in parallel
find "$DIR" -name "*.env" | xargs -n 1 -P "$JOBS" -I {} sh -c "
set -a
. {}
set +a
$COMMAND
"
```

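As a quick smoke test of the script, with hypothetical throwaway env files (note the single quotes around `$NAME`, which defer expansion to the spawned shell that has sourced the env file, not your interactive shell):

```
mkdir -p /tmp/envs
printf 'NAME=alpha\n' > /tmp/envs/a.env
printf 'NAME=beta\n' > /tmp/envs/b.env
./scripts/exp/tools/run_envs.sh -d /tmp/envs -j 2 -- echo 'NAME is $NAME'
# Prints the env file list, then (in either order):
#   NAME is alpha
#   NAME is beta
```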
**scripts/exp/tools/test_system.sh** (19 additions, 0 deletions)
```
#!/bin/bash

# Test directory setup
TEST_DIR="test_run"
mkdir -p "$TEST_DIR/results"
mkdir -p "$TEST_DIR/logs"

# Define paths
ENV_DIR="/home/v-xisenwang/RD-Agent/scripts/exp/ablation/env"
PYTHON_SCRIPT="/home/v-xisenwang/RD-Agent/rdagent/app/kaggle/loop.py"

# Run the experiment
echo "Running experiments..."
dotenv run -- ./scripts/exp/tools/run_envs.sh -d "$ENV_DIR" -j 4 -- \
    python "$PYTHON_SCRIPT" \
    --competition "spaceship-titanic"

# Cleanup (optional - comment out if you want to keep results)
# rm -rf "$TEST_DIR"
```

> **Contributor:** I think it would be better to use config for the path instead of hard-coding it.
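
One way to address that comment, as a sketch (assuming test_system.sh stays at scripts/exp/tools/ in the repository):

```
# Derive paths from the script's own location instead of hard-coding a home directory.
REPO_ROOT="$(cd "$(dirname "$0")/../../.." && pwd)"
ENV_DIR="$REPO_ROOT/scripts/exp/ablation/env"
PYTHON_SCRIPT="$REPO_ROOT/rdagent/app/kaggle/loop.py"
```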