From dataset to deployed API in minutes
ML Assistant CLI is a developer-first command-line tool that unifies the entire ML lifecycle, from data preprocessing to cloud deployment, with AI-guided suggestions and one-command deployments.
- End-to-end ML workflow in a single CLI
- AI-guided suggestions for data quality and model improvements
- BentoML integration for reproducible model packaging
- Multi-cloud deployment (BentoCloud, Azure ML, AWS SageMaker HyperPod)
- Production-ready with monitoring, rollbacks, and traffic management
- Beginner-friendly with sensible defaults and clear guidance
```bash
# Install from PyPI
pip install ml-assistant-cli

# Or install with cloud support (quoted so the brackets survive shells like zsh)
pip install "ml-assistant-cli[cloud]"

# Verify installation
mlcli --help
```

```bash
# Run directly with Docker
docker run -it --rm -v $(pwd):/home/mlcli/workspace santhoshkumar0918/ml-assistant-cli:latest

# Or create a convenient alias (single quotes so $(pwd) expands at run time, not when the alias is defined)
alias mlcli='docker run -it --rm -v $(pwd):/home/mlcli/workspace santhoshkumar0918/ml-assistant-cli:latest'

# Then use normally
mlcli --help
```

```bash
# Install with pipx for an isolated environment
pipx install ml-assistant-cli
mlcli --help
```

```bash
# Clone and install from source
git clone https://github.com/mlcli/mlcli.git
cd mlcli
pip install -e .
```

```bash
# Create a new project
mlcli init --name my-ml-project
cd my-ml-project

# Add your dataset to data/raw/, then preprocess it
mlcli preprocess --input data/raw/your_data.csv --target target_column

# Train, evaluate, and get AI-guided suggestions
mlcli train
mlcli evaluate
mlcli suggest

# Run batch predictions
mlcli predict --input new_data.csv --output predictions.csv

# Package, deploy, and monitor
mlcli package
mlcli deploy --provider bentocloud
mlcli monitor
```

```
my-ml-project/
├── data/
│   ├── raw/          # Original datasets
│   ├── processed/    # Cleaned data
│   └── external/     # External datasets
├── models/           # Trained models
├── reports/          # Analysis reports
├── deployments/      # Deployment configs
├── mlcli.yaml        # Configuration
└── README.md
```
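The `mlcli predict` step reads a CSV of features and writes a CSV of predictions. As a rough illustration of that batch flow only (the `prediction` column name and the constant model below are hypothetical stand-ins, not mlcli's actual schema):

```python
import csv
import io

def predict_batch(infile, outfile, model):
    """Read feature rows from a CSV, append a model output column, write them out."""
    rows = list(csv.DictReader(infile))
    writer = csv.DictWriter(outfile, fieldnames=list(rows[0]) + ["prediction"])
    writer.writeheader()
    for row in rows:
        writer.writerow({**row, "prediction": model(row)})

# Toy usage with an in-memory CSV and a constant "model"
src = io.StringIO("age,income\n34,52000\n41,67000\n")
dst = io.StringIO()
predict_batch(src, dst, model=lambda row: 0)
print(dst.getvalue().splitlines()[0])  # age,income,prediction
```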
Customize your ML pipeline in `mlcli.yaml`:

```yaml
project_name: my-ml-project
description: My awesome ML project

data:
  target_column: target
  test_size: 0.2
  missing_value_strategy: auto
  scaling_strategy: standard

model:
  algorithms: [logistic_regression, random_forest, xgboost]
  hyperparameter_tuning: true
  cv_folds: 5

deployment:
  provider: bentocloud
  scaling_min: 1
  scaling_max: 3
  instance_type: cpu.2
```

- Project initialization
- Data preprocessing and analysis
- Model training with hyperparameter optimization
- Model evaluation and metrics
- AI-guided suggestions
- Batch predictions
- BentoML packaging
- BentoCloud deployment
- Model monitoring
- Deployment rollbacks
- Azure ML integration
- AWS SageMaker HyperPod support
- Advanced deployment strategies
- CI/CD integration
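The `data:` options in `mlcli.yaml` map to standard preprocessing steps. As a plain-Python sketch of the concepts only (not mlcli's implementation): `test_size: 0.2` holds out 20% of rows for evaluation, and `scaling_strategy: standard` z-scores a feature to zero mean and unit variance:

```python
import random
import statistics

def train_test_split(rows, test_size=0.2, seed=42):
    """Shuffle, then hold out the last `test_size` fraction for evaluation."""
    shuffled = rows[:]
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_size))
    return shuffled[:cut], shuffled[cut:]

def standard_scale(values):
    """Z-score a feature column: subtract the mean, divide by the std dev."""
    mean = statistics.fmean(values)
    std = statistics.pstdev(values) or 1.0  # guard against constant columns
    return [(v - mean) / std for v in values]

rows = [float(i) for i in range(100)]
train, test = train_test_split(rows, test_size=0.2)
scaled = standard_scale(train)
print(len(train), len(test))  # 80 20
```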
We welcome contributions! Please see our Contributing Guide for details.
MIT License - see LICENSE for details.
Built with ❤️ for the ML community