Update README. (#97)
boranhan authored Nov 10, 2024
1 parent 9df3ed7 commit d554a91
Showing 1 changed file (README.md) with 42 additions and 42 deletions.


## Usage

We support two ways of using AutoGluon Assistant: Web UI and CLI.

### Web UI
The AutoGluon Assistant Web UI is a user-friendly application that lets users leverage the capabilities of the AutoGluon Assistant library through an intuitive web interface.

The Web UI enables users to upload datasets, configure AutoGluon Assistant runs with customized settings, preview data, monitor execution progress, and view and download results. It also supports secure, isolated sessions for concurrent users.

#### To run the AutoGluon Assistant Web UI:
Navigate to the project directory and run the app:
````
cd src/autogluon_assistant/ui && streamlit run app.py
````
The AutoGluon Assistant Web UI should now be accessible in your web browser at `http://localhost:8501`.
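
If port 8501 is already in use, the app can be served elsewhere with Streamlit's standard flags (these are generic Streamlit options, not AG-A-specific settings):
````
# Serve the Web UI on a different port (standard Streamlit flags)
cd src/autogluon_assistant/ui && streamlit run app.py --server.port 8502
````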

#### Add a GPT-4 Model to the LLM Options:
If you’d like to add an additional GPT-4 model to the language model (LLM) dropdown:

1. Navigate to `src/autogluon_assistant/WebUI/constants.py`

2. Locate the `LLM_OPTIONS` variable, which looks like this:
````
LLM_OPTIONS = ["Claude 3.5 with Amazon Bedrock"]
````
3. Add "GPT 4o" to the list
````
LLM_OPTIONS = ["Claude 3.5 with Amazon Bedrock", "GPT 4o"]
````
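
If you pick a GPT option in the dropdown, the app also needs OpenAI credentials. A minimal sketch, assuming the backend reads the standard `OPENAI_API_KEY` environment variable:
````
# Assumption: the GPT-backed option reads the standard OPENAI_API_KEY variable
export OPENAI_API_KEY="<your-openai-api-key>"
````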

### CLI

Before launching the AG-A CLI, prepare your data files in the following structure (for example, using a dataset from the Titanic Kaggle Competition):
```
.
├── config                      # Configuration files directory
│   └── [CONFIG_FILE].yaml      # Your configuration file
└── data                        # Data files directory
    ├── train.[ext]             # Training dataset (required)
    ├── test.[ext]              # Test dataset (required)
    └── ...
```
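
As a sketch, one way to stage such a directory for the Titanic toy example (paths and file names here are illustrative; any tabular files matching the layout above will do):
```
# Illustrative only: stage a run directory matching the layout above
mkdir -p toy_data/data
cp path/to/train.csv toy_data/data/train.csv
cp path/to/test.csv  toy_data/data/test.csv
```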

Now you can launch the AutoGluon Assistant run using the following command:
```
aga [NAME_OF_DATA_DIR]
# e.g. aga ./toy_data

aga [NAME_OF_DATA_DIR] --presets [PRESET_QUALITY]
# e.g. aga ./toy_data --presets best_quality
```

We support three presets: `medium_quality`, `high_quality`, and `best_quality`. We use `best_quality` as the default setting.
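
For a quicker, lower-accuracy trial run, you can select a lighter preset explicitly, for example:
```
aga ./toy_data --presets medium_quality   # faster trial run; best_quality is the default
```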

After the run is complete, model predictions on the test dataset are saved into the `aga-output-<timestamp>.csv` file. It will be formatted according to the optional `sample_submission.csv` file, if provided.
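
Because the file name contains a timestamp, the predictions from the most recent run can be picked out with a standard shell one-liner, for example:
```
ls -t aga-output-*.csv | head -n 1   # newest predictions file in the current directory
```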

#### Overriding Configs
AutoGluon Assistant uses [Hydra](https://hydra.cc) to manage configuration; see the [override grammar documentation](https://hydra.cc/docs/advanced/override_grammar/basic/) for the complete syntax. You can override specific settings in the YAML configuration defined in the [config folder](https://github.com/boranhan/autogluon-assistant/tree/main/src/autogluon_assistant/configs) using the `config_overrides` parameter from the command line.

Here’s an example command with some configuration overrides:
```
aga toy_data --config_overrides "feature_transformers=[], autogluon.predictor_fit_kwargs.time_limit=3600"
# OR
aga toy_data --config_overrides "feature_transformers=[]" --config_overrides "autogluon.predictor_fit_kwargs.time_limit=3600"
```
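
Any key defined in those YAML files can be overridden in the same way. As an illustration only, assuming the LLM settings still expose the `llm.temperature` and `llm.max_tokens` keys used in earlier versions of this example:
```
aga toy_data --config_overrides "llm.temperature=0.7" --config_overrides "llm.max_tokens=256"
```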

`autogluon-assistant-tools` provides more functionality and utilities for benchmarking, wrapped around autogluon-assistant. Please check out the [repo](https://github.com/autogluon/autogluon-assistant-tools/) for more details.

