12 changes: 8 additions & 4 deletions docs/quickstart/README.md
@@ -1,13 +1,17 @@
<!--
IMPORTANT: All ```bash commands in this file are tested as part of e2e test suite: /testing/e2e/tests/test_quickstart_readme.py
-->
# Getting Started with Asya🎭
# Getting Started with Asya🎭 Locally

**5-minute guide to running Asya🎭 locally**
**Core idea**: Build multi-step AI/ML pipelines where each step deployed as an [actor](https://en.wikipedia.org/wiki/Actor_model) and scales independently. No infrastructure code in your code - just pure Python.

Review comment (Contributor, medium):

There's a small grammatical error here. It should be "is deployed" to be grammatically correct.

Suggested change:
- **Core idea**: Build multi-step AI/ML pipelines where each step deployed as an [actor](https://en.wikipedia.org/wiki/Actor_model) and scales independently. No infrastructure code in your code - just pure Python.
+ **Core idea**: Build multi-step AI/ML pipelines where each step is deployed as an [actor](https://en.wikipedia.org/wiki/Actor_model) and scales independently. No infrastructure code in your code - just pure Python.


Asya🎭 is a Kubernetes-native queue-based actor framework for AI/ML workloads. Write pure Python functions, deploy them as actors, and let Asya🎭 handle queues, routing, and autoscaling (0→N pods based on queue depth).
## What You'll Learn

**Core idea**: Build multi-step AI/ML pipelines where each step scales independently. No infrastructure code in your handlers - just pure Python.
- Create a Kind cluster to run Kubernetes locally in Docker, and install KEDA for autoscaling
- Deploy the Asya operator with SQS transport (running via LocalStack)
- Build and deploy your first actor with scale-to-zero capability
- Test autoscaling by sending messages to actor queues
- Optionally add S3 storage, MCP gateway, and Prometheus monitoring

## Prerequisites

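Note on the README hunk above: its "What You'll Learn" list ends with deploying a scale-to-zero actor and testing autoscaling by sending messages to actor queues. A minimal sketch of that last step is shown below, assuming the quickstart's LocalStack SQS transport; the queue name (`asya-my-actor`), message shape, and LocalStack endpoint are illustrative assumptions, not part of this PR.

```python
# Sketch only (not from this PR): send a test message to an actor's SQS queue
# via LocalStack to exercise the 0->N autoscaling described in the quickstart.
import json

import boto3

sqs = boto3.client(
    "sqs",
    endpoint_url="http://localhost:4566",  # default LocalStack edge port (assumption)
    region_name="us-east-1",
    aws_access_key_id="test",
    aws_secret_access_key="test",
)

# Hypothetical queue name; the operator derives the real one from the actor spec.
queue_url = sqs.get_queue_url(QueueName="asya-my-actor")["QueueUrl"]
sqs.send_message(QueueUrl=queue_url, MessageBody=json.dumps({"text": "hello, actor"}))
```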
8 changes: 8 additions & 0 deletions docs/quickstart/for-data-scientists.md
@@ -2,6 +2,14 @@

Build and deploy your first Asya actor.

## What You'll Learn

- Write pure Python handlers (functions or classes) for ML pipelines
- Test handlers locally and package them in Docker images
- Deploy actors using AsyncActor CRDs with autoscaling
- Use Flow DSL to build multi-step pipelines with conditional routing
- Handle dynamic routing with envelope mode for AI agents

## Overview

As a data scientist, you focus on writing pure Python functions. Asya handles infrastructure, routing, scaling, and monitoring.
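The list added above promises "pure Python handlers (functions or classes)". The exact handler signature Asya expects is not shown in this diff, so the following is only a hedged sketch of what "no infrastructure code, just pure Python" might look like in practice.

```python
# Hypothetical handler shapes — illustrative only; check the Asya docs for the
# actual contract. Handlers receive a message payload and return the next step's input.
from dataclasses import dataclass


def clean_text(payload: dict) -> dict:
    """Function-style handler: pure Python, no queue or routing code."""
    text = payload.get("text", "")
    return {"text": text.strip().lower(), "length": len(text)}


@dataclass
class Summarizer:
    """Class-style handler: keeps state (e.g. a loaded model) across messages."""
    max_words: int = 32

    def __call__(self, payload: dict) -> dict:
        words = payload.get("text", "").split()
        return {"summary": " ".join(words[: self.max_words])}
```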
8 changes: 8 additions & 0 deletions docs/quickstart/for-platform-engineers.md
@@ -2,6 +2,14 @@

Deploy and manage Asya🎭 infrastructure.

## What You'll Learn

- Install and configure Asya operator with transports (SQS/RabbitMQ)
- Deploy gateway and crew actors for pipeline completion
- Support data science teams with templates and IAM configuration
- Set up monitoring with Prometheus and troubleshoot common issues
- Optimize scaling and costs for production workloads

## Overview

As platform engineer, you:
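The platform-engineer list above mentions deploying actors via AsyncActor CRDs and troubleshooting them. As a rough operational sketch (not from this PR), the resources could be listed with the official Kubernetes Python client; the API group, version, and plural below ("asya.dev", "v1alpha1", "asyncactors") are assumptions and should be read from the installed CRD.

```python
# Sketch: list AsyncActor custom resources in a namespace. Group/version/plural
# are guesses for illustration — verify against `kubectl get crd`.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
api = client.CustomObjectsApi()

actors = api.list_namespaced_custom_object(
    group="asya.dev", version="v1alpha1", namespace="default", plural="asyncactors"
)
for item in actors.get("items", []):
    print(item["metadata"]["name"])
```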
2 changes: 1 addition & 1 deletion mkdocs.yml
@@ -27,7 +27,7 @@ nav:
- GETTING STARTED:
- Motivation: motivation.md
- Concepts: concepts.md
- Quickstart: quickstart/README.md
- Getting Started: quickstart/README.md
- For Data Scientists 🧑‍🔬: quickstart/for-data-scientists.md
- For Platform Engineers ⚙️: quickstart/for-platform-engineers.md
- INSTALLATION: