AIOS is the AI Agent Operating System, which embeds large language models (LLMs) into the operating system and facilitates the development and deployment of LLM-based AI agents. AIOS is designed to address the problems that arise during the development and deployment of LLM-based agents (e.g., scheduling, context switching, memory management, storage management, tool management, and Agent SDK management), working towards a better AIOS-Agent ecosystem for agent developers and agent users. AIOS consists of the AIOS Kernel (the AIOS repository) and the AIOS SDK (this Cerebrum repository). AIOS supports both a Web UI and a Terminal UI.
The AIOS-Agent SDK is designed for agent users and developers, enabling them to build and run agent applications by interacting with the AIOS kernel.
- [2024-11-26] 🔥 Cerebrum is available for public release on PyPI!
1. **Clone the repository**

   ```bash
   git clone https://github.com/agiresearch/Cerebrum.git
   cd Cerebrum
   ```

2. **Create a virtual environment**

   ```bash
   conda create -n cerebrum-env python=3.10
   ```

   or

   ```bash
   conda create -n cerebrum-env python=3.11
   ```

   or

   ```bash
   # Windows (cmd)
   python -m venv cerebrum-env

   # Linux/macOS
   python3 -m venv cerebrum-env
   ```

3. **Activate the environment**

   ```bash
   conda activate cerebrum-env
   ```

   or

   ```bash
   # Windows (cmd)
   cerebrum-env\Scripts\activate.bat

   # Linux/macOS
   source cerebrum-env/bin/activate
   ```

4. **Install the package**

   ```bash
   pip install -e .
   ```

5. **Verify the installation**

   ```bash
   python -c "import cerebrum; from cerebrum.client import Cerebrum; print(Cerebrum)"
   ```
> **Tip:** Please see our documentation for more information.
1. **Start the AIOS kernel.** See here.

2. **Run agents.** Either run agents that already exist locally by passing the path to the agent directory:
```bash
python cerebrum/run_agent.py \
    --mode local \
    --agent_path <agent_name_or_path> \
    --task <task_input> \
    --agenthub_url <agenthub_url>
```

Here `--agent_path` is the path to the agent directory.
For example, to run the test_agent in the local directory, you can run:
```bash
python cerebrum/run_agent.py \
    --mode local \
    --agent_path cerebrum/example/agents/test_agent \
    --task "What is the capital of the United States?" \
    --agenthub_url https://app.aios.foundation
```
Or run agents that have been uploaded to the AgentHub by passing the author and agent name:
```bash
python cerebrum/run_agent.py \
    --mode remote \
    --agent_author <author> \
    --agent_name <agent_name> \
    --agent_version <agent_version> \
    --task <task_input> \
    --agenthub_url <agenthub_url>
```
For example, to run the test_agent from the AgentHub, you can run:
```bash
python cerebrum/run_agent.py \
    --mode remote \
    --agent_author example \
    --agent_name test_agent \
    --agent_version 0.0.3 \
    --task "What is the capital of the United States?" \
    --agenthub_url https://app.aios.foundation
```
This guide will walk you through creating and publishing your own agents for AIOS.
First, let's look at how to organize your agent's files. Every agent needs three essential components:
```
author/
└── agent_name/
    ├── entry.py              # Your agent's main logic
    ├── config.json           # Configuration and metadata
    └── meta_requirements.txt # Additional dependencies
```
For example, if your name is 'example' and you're building a demo_agent that searches and summarizes articles, your folder structure would look like this:
```
example/
└── demo_agent/
    ├── entry.py
    ├── config.json
    └── meta_requirements.txt
```
Note: If your agent needs any libraries beyond AIOS's built-in ones, make sure to list them in meta_requirements.txt. Apart from the three files above, you can include any other files in your folder.
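The layout above can be created with a short helper script. This is only a convenience sketch, not part of the AIOS SDK; the `scaffold_agent` function is hypothetical, and the names follow the demo example:

```python
from pathlib import Path


def scaffold_agent(author: str, agent_name: str, root: str = ".") -> Path:
    """Create the minimal AIOS agent layout: entry.py, config.json,
    and meta_requirements.txt under <root>/<author>/<agent_name>/."""
    agent_dir = Path(root) / author / agent_name
    agent_dir.mkdir(parents=True, exist_ok=True)
    for filename in ("entry.py", "config.json", "meta_requirements.txt"):
        (agent_dir / filename).touch()
    return agent_dir


# Scaffold the demo agent from the example above
created = scaffold_agent("example", "demo_agent")
print(sorted(p.name for p in created.iterdir()))
# ['config.json', 'entry.py', 'meta_requirements.txt']
```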
Your agent needs a config.json file that describes its functionality. Here's what it should include:
```json
{
    "name": "demo_agent",
    "description": [
        "Demo agent that can help search AIOS-related papers"
    ],
    "tools": [
        "demo_author/arxiv"
    ],
    "meta": {
        "author": "demo_author",
        "version": "0.0.1",
        "license": "CC0"
    },
    "build": {
        "entry": "entry.py",
        "module": "DemoAgent"
    }
}
```

Note that `build.entry` must match the name of your agent's entry file (entry.py in the required layout).
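Before uploading, a quick sanity check that a config.json carries these fields can save a round trip. This sketch infers the required-field set from the example above; it is not an official AIOS schema, and `validate_agent_config` is a hypothetical helper:

```python
import json

# Field sets inferred from the example config above (not an official schema)
REQUIRED_TOP_LEVEL = ("name", "description", "meta", "build")
REQUIRED_META = ("author", "version", "license")
REQUIRED_BUILD = ("entry", "module")


def validate_agent_config(config: dict) -> list:
    """Return a list of missing required fields in an agent config dict."""
    missing = [k for k in REQUIRED_TOP_LEVEL if k not in config]
    missing += [f"meta.{k}" for k in REQUIRED_META if k not in config.get("meta", {})]
    missing += [f"build.{k}" for k in REQUIRED_BUILD if k not in config.get("build", {})]
    return missing


config = json.loads("""
{
    "name": "demo_agent",
    "description": ["Demo agent that can help search AIOS-related papers"],
    "tools": ["demo_author/arxiv"],
    "meta": {"author": "demo_author", "version": "0.0.1", "license": "CC0"},
    "build": {"entry": "entry.py", "module": "DemoAgent"}
}
""")
print(validate_agent_config(config))  # [] -> config is complete
```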
When setting up your agent, you'll need to specify which tools it will use. Below is a list of all currently available tools and how to reference them in your configuration:
| Author | Name | How to call them |
|---|---|---|
| example | arxiv | example/arxiv |
| example | bing_search | example/bing_search |
| example | currency_converter | example/currency_converter |
| example | wolfram_alpha | example/wolfram_alpha |
| example | google_search | example/google_search |
| openai | speech_to_text | openai/speech_to_text |
| example | web_browser | example/web_browser |
| timbrooks | image_to_image | timbrooks/image_to_image |
| example | downloader | example/downloader |
| example | doc_question_answering | example/doc_question_answering |
| stability-ai | text_to_image | stability-ai/text_to_image |
| example | text_to_speech | example/text_to_speech |
To use these tools in your agent, simply include their reference (from the "How to call them" column) in your agent's configuration file. For example, if you want your agent to be able to search academic papers and convert currencies, you would include both `example/arxiv` and `example/currency_converter` in your configuration.
If you would like to create new tools, you can either integrate the tool within your agent code or follow the tool examples in the tool folder to develop standalone tools. Detailed instructions are in How to develop new tools.
Run the following command to upload your agents to the AgentHub:

```bash
python cerebrum/upload_agent.py \
    --agent_path <agent_path> \
    --agenthub_url <agenthub_url>
```

Here `--agent_path` is the path to the agent directory, and `--agenthub_url` is the URL of the AgentHub (default: https://app.aios.foundation).
Similar to developing new agents, developing tools also needs to follow a simple directory structure:
```
demo_author/
└── demo_tool/
    ├── entry.py    # Contains your tool's main logic
    └── config.json # Tool configuration and metadata
```
Your tool needs a configuration file that describes its properties. Here's an example of how to set it up:
```json
{
    "name": "arxiv",
    "description": [
        "The arxiv tool that can be used to search for papers on arxiv"
    ],
    "meta": {
        "author": "demo_author",
        "version": "1.0.6",
        "license": "CC0"
    },
    "build": {
        "entry": "entry.py",
        "module": "Arxiv"
    }
}
```
In `entry.py`, you'll need to implement a tool class (the one named by `module` in `config.json`) with two essential methods:

- `get_tool_call_format`: defines how LLMs should interact with your tool
- `run`: contains your tool's main functionality

Here's an example:
```python
class Arxiv:
    def get_tool_call_format(self):
        tool_call_format = {
            "type": "function",
            "function": {
                "name": "demo_author/arxiv",
                "description": "Query articles or topics in arxiv",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "query": {
                            "type": "string",
                            "description": "Input query that describes what to search in arxiv"
                        }
                    },
                    "required": [
                        "query"
                    ]
                }
            }
        }
        return tool_call_format

    def run(self, params: dict):
        """
        Main tool logic goes here.

        Args:
            params: Dictionary containing tool parameters

        Returns:
            Your tool's output
        """
        # Your code here
        result = do_something(params["query"])
        return result
```
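To see the two-method contract end to end, here is a minimal, self-contained tool you can run as-is. `EchoTool` is purely illustrative (it is not a real AgentHub tool); it just returns its input:

```python
class EchoTool:
    """Minimal tool implementing the two-method AIOS tool interface."""

    def get_tool_call_format(self):
        # Describe the tool to the LLM in OpenAI function-calling style
        return {
            "type": "function",
            "function": {
                "name": "demo_author/echo",
                "description": "Echo the input text back to the caller",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "text": {"type": "string", "description": "Text to echo"}
                    },
                    "required": ["text"],
                },
            },
        }

    def run(self, params: dict):
        # Main tool logic: just return the input text unchanged
        return params["text"]


tool = EchoTool()
print(tool.get_tool_call_format()["function"]["name"])  # demo_author/echo
print(tool.run({"text": "hello"}))                      # hello
```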
When integrating your tool into the agents you develop:

- Use absolute paths to reference your tool in agent configurations
- For example, use `/path/to/your/tools/example/your_tool` instead of just `author/tool_name`
| Provider | Model Name | Open Source | Model String | Backend | Required API Key |
|---|---|---|---|---|---|
| Anthropic | Claude 3.5 Sonnet | ❌ | claude-3-5-sonnet-20241022 | anthropic | ANTHROPIC_API_KEY |
| Anthropic | Claude 3.5 Haiku | ❌ | claude-3-5-haiku-20241022 | anthropic | ANTHROPIC_API_KEY |
| Anthropic | Claude 3 Opus | ❌ | claude-3-opus-20240229 | anthropic | ANTHROPIC_API_KEY |
| Anthropic | Claude 3 Sonnet | ❌ | claude-3-sonnet-20240229 | anthropic | ANTHROPIC_API_KEY |
| Anthropic | Claude 3 Haiku | ❌ | claude-3-haiku-20240307 | anthropic | ANTHROPIC_API_KEY |
| OpenAI | GPT-4 | ❌ | gpt-4 | openai | OPENAI_API_KEY |
| OpenAI | GPT-4 Turbo | ❌ | gpt-4-turbo | openai | OPENAI_API_KEY |
| OpenAI | GPT-4o | ❌ | gpt-4o | openai | OPENAI_API_KEY |
| OpenAI | GPT-4o mini | ❌ | gpt-4o-mini | openai | OPENAI_API_KEY |
| OpenAI | GPT-3.5 Turbo | ❌ | gpt-3.5-turbo | openai | OPENAI_API_KEY |
| Google | Gemini 1.5 Flash | ❌ | gemini-1.5-flash | google | GEMINI_API_KEY |
| Google | Gemini 1.5 Flash-8B | ❌ | gemini-1.5-flash-8b | google | GEMINI_API_KEY |
| Google | Gemini 1.5 Pro | ❌ | gemini-1.5-pro | google | GEMINI_API_KEY |
| Google | Gemini 1.0 Pro | ❌ | gemini-1.0-pro | google | GEMINI_API_KEY |
| Groq | Llama 3.2 90B Vision | ✅ | llama-3.2-90b-vision-preview | groq | GROQ_API_KEY |
| Groq | Llama 3.2 11B Vision | ✅ | llama-3.2-11b-vision-preview | groq | GROQ_API_KEY |
| Groq | Llama 3.1 70B | ✅ | llama-3.1-70b-versatile | groq | GROQ_API_KEY |
| Groq | Llama Guard 3 8B | ✅ | llama-guard-3-8b | groq | GROQ_API_KEY |
| Groq | Llama 3 70B | ✅ | llama3-70b-8192 | groq | GROQ_API_KEY |
| Groq | Llama 3 8B | ✅ | llama3-8b-8192 | groq | GROQ_API_KEY |
| Groq | Mixtral 8x7B | ✅ | mixtral-8x7b-32768 | groq | GROQ_API_KEY |
| Groq | Gemma 7B | ✅ | gemma-7b-it | groq | GROQ_API_KEY |
| Groq | Gemma 2 9B | ✅ | gemma2-9b-it | groq | GROQ_API_KEY |
| Groq | Llama3 Groq 70B | ✅ | llama3-groq-70b-8192-tool-use-preview | groq | GROQ_API_KEY |
| Groq | Llama3 Groq 8B | ✅ | llama3-groq-8b-8192-tool-use-preview | groq | GROQ_API_KEY |
| ollama | All Models | ✅ | model-name | ollama | - |
| vLLM | All Models | ✅ | model-name | vllm | - |
| HuggingFace | All Models | ✅ | model-name | huggingface | HF_HOME |
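Each hosted backend in the table maps to one required environment variable, while the local backends (ollama, vLLM) need no key. A small pre-flight check like the sketch below can catch a missing key before starting the kernel; the `missing_key` helper is hypothetical, and the dictionary simply mirrors the table above:

```python
import os

# Backend -> required environment variable, mirroring the table above.
# Local backends (ollama, vllm) need no API key.
REQUIRED_KEYS = {
    "anthropic": "ANTHROPIC_API_KEY",
    "openai": "OPENAI_API_KEY",
    "google": "GEMINI_API_KEY",
    "groq": "GROQ_API_KEY",
    "huggingface": "HF_HOME",
    "ollama": None,
    "vllm": None,
}


def missing_key(backend):
    """Return the name of the unset env var for a backend, or None if nothing is missing."""
    var = REQUIRED_KEYS.get(backend)
    if var is None or os.environ.get(var):
        return None
    return var


print(missing_key("ollama"))  # None (no key required)
```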
```bibtex
@article{mei2024aios,
  title={AIOS: LLM Agent Operating System},
  author={Mei, Kai and Li, Zelong and Xu, Shuyuan and Ye, Ruosong and Ge, Yingqiang and Zhang, Yongfeng},
  journal={arXiv:2403.16971},
  year={2024}
}
```
```bibtex
@article{ge2023llm,
  title={LLM as OS, Agents as Apps: Envisioning AIOS, Agents and the AIOS-Agent Ecosystem},
  author={Ge, Yingqiang and Ren, Yujie and Hua, Wenyue and Xu, Shuyuan and Tan, Juntao and Zhang, Yongfeng},
  journal={arXiv:2312.03815},
  year={2023}
}
```
For how to contribute, see CONTRIBUTE. If you would like to contribute to the codebase, issues or pull requests are always welcome!
If you would like to join the community, ask questions, chat with fellows, learn about or propose new features, and participate in future developments, join our Discord Community!
For issues related to Cerebrum development, we encourage submitting issues, pull requests, or initiating discussions in the AIOS Discord Channel. For other issues, please feel free to contact the AIOS Foundation ([email protected]).