OpenAI API available in the opentrons-ai-server (#15169)
# Overview

This PR introduces a structured pattern for handling requests and
responses within the lambda. It demonstrates actual communication with
OpenAI, starting from the lambda and extending back to the client.
Additionally, the endpoint is secured, requiring a JWT bearer token for
access.
___

**Details:**
- **Client for Testing:** Established a dedicated live client to
facilitate testing.
- **Testing:** Developed tests that exercise the client and all
endpoints.
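
The endpoint requires a JWT bearer token. As a rough sketch of what bearer-token checking involves — not this PR's actual implementation, which pulls in `pyjwt` and `cryptography` and presumably validates tokens against the identity provider's signing keys — a minimal HS256 verifier can be written with the standard library alone; the header format, secret, and claim names below are illustrative assumptions:

```python
# Illustrative only: a minimal HS256 bearer-token check using the standard
# library. The real lambda uses pyjwt + cryptography (see the Pipfile changes),
# and the secret and claim names here are invented for the sketch.
import base64
import hashlib
import hmac
import json
import time


def _b64url_decode(segment: str) -> bytes:
    # JWT segments are base64url without padding; restore padding before decoding.
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))


def verify_bearer_token(authorization_header: str, secret: bytes) -> dict:
    """Validate an 'Authorization: Bearer <jwt>' header and return the claims."""
    scheme, _, token = authorization_header.partition(" ")
    if scheme != "Bearer" or not token:
        raise ValueError("expected a Bearer token")
    header_b64, payload_b64, signature_b64 = token.split(".")
    signing_input = f"{header_b64}.{payload_b64}".encode()
    expected = hmac.new(secret, signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, _b64url_decode(signature_b64)):
        raise ValueError("bad signature")
    claims = json.loads(_b64url_decode(payload_b64))
    if claims.get("exp", 0) < time.time():
        raise ValueError("token expired")
    return claims
```

A production check would additionally validate issuer and audience claims and use the provider's asymmetric (e.g. RS256) public keys rather than a shared secret.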


## Status

- [x] Deployed to dev and live tested `make build deploy test-live
ENV=dev AWS_PROFILE=the-profile`
- [x] Deployed to sandbox and live tested `make build deploy test-live
ENV=sandbox AWS_PROFILE=the-profile`
- [x] `make live-client` with fake=false works on sandbox
- [x] `make live-client` with fake=false works on dev

> [!NOTE]
> To exercise the client, reach out to @y3rsh for the `test.env` file you need.
y3rsh authored May 14, 2024
1 parent a2616cb commit fbcf403
Showing 30 changed files with 785 additions and 282 deletions.
4 changes: 4 additions & 0 deletions opentrons-ai-server/.gitignore
```diff
@@ -3,3 +3,7 @@ results
 package
 function.zip
 requirements.txt
+test.env
+cached_token.txt
+tests/helpers/cached_token.txt
+tests/helpers/test.env
```
31 changes: 21 additions & 10 deletions opentrons-ai-server/Makefile
```diff
@@ -21,15 +21,15 @@ black-check:

 .PHONY: ruff
 ruff:
-	python -m pipenv run python -m ruff check . --fix
+	python -m pipenv run python -m ruff check . --fix --unsafe-fixes

 .PHONY: ruff-check
 ruff-check:
 	python -m pipenv run python -m ruff check .

 .PHONY: mypy
 mypy:
-	python -m pipenv run python -m mypy aws_actions.py api
+	python -m pipenv run python -m mypy deploy.py api tests

 .PHONY: format-readme
 format-readme:
```
```diff
@@ -56,7 +56,7 @@ pre-commit: fixup unit-test

 .PHONY: gen-env
 gen-env:
-	python -m pipenv run python api/settings.py
+	python -m pipenv run python -m api.settings

 .PHONY: unit-test
 unit-test:
```
```diff
@@ -71,13 +71,13 @@ clean-package:

 .PHONY: gen-requirements
 gen-requirements:
 	@echo "Generating requirements.txt from Pipfile.lock..."
-	python -m pipenv requirements --hash > requirements.txt
+	python -m pipenv requirements > requirements.txt

 .PHONY: install-deps
 install-deps:
 	@echo "Installing dependencies to package/ directory..."
 	mkdir -p package
-	python -m pipenv run pip install -r requirements.txt --target ./package --upgrade
+	docker run --rm -v "$$PWD":/var/task "public.ecr.aws/sam/build-python3.12" /bin/sh -c "pip install -r requirements.txt -t ./package"

 .PHONY: package-lambda
 package-lambda:
```
```diff
@@ -96,9 +96,20 @@ ENV ?= sandbox

 .PHONY: deploy
 deploy:
 	@echo "Deploying to environment: $(ENV)"
-	python -m pipenv run python aws_actions.py --env $(ENV) --action deploy
+	python -m pipenv run python deploy.py --env $(ENV)

-.PHONY: test-lambda
-test-lambda:
-	@echo "Invoking the latest version of the lambda: $(ENV)"
-	python -m pipenv run python aws_actions.py --env $(ENV) --action test
+.PHONY: direct-chat-completion
+direct-chat-completion:
+	python -m pipenv run python -m api.domain.openai_predict
+
+.PHONY: print-client-settings-vars
+print-client-settings-vars:
+	python -m pipenv run python -m tests.helpers.settings
+
+.PHONY: live-client
+live-client:
+	python -m pipenv run python -m tests.helpers.client
+
+.PHONY: test-live
+test-live:
+	python -m pipenv run python -m pytest tests -m live --env $(ENV)
```
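
The new `live-client` target runs `tests.helpers.client` against a deployed environment. The PR's actual client isn't shown in this diff, but conceptually such a client attaches the JWT as a `Bearer` header and POSTs a prompt to the endpoint. A minimal stdlib sketch — the URL path and JSON field names here are assumptions, not the real API contract:

```python
# Hypothetical live-client sketch: build an authenticated request to the
# AI endpoint. The "/chat/completion" path and body shape are invented here.
import json
import urllib.request


def build_request(base_url: str, token: str, prompt: str) -> urllib.request.Request:
    body = json.dumps({"message": prompt, "fake": False}).encode()
    return urllib.request.Request(
        url=f"{base_url}/chat/completion",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )


def send(req: urllib.request.Request) -> str:
    # Network call; only meaningful against a deployed sandbox/dev environment.
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()
```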
2 changes: 2 additions & 0 deletions opentrons-ai-server/Pipfile
```diff
@@ -18,6 +18,8 @@ aws-lambda-powertools = {extras = ["all"], version = "==2.37.0"}
 boto3 = "==1.34.97"
 boto3-stubs = "==1.34.97"
 rich = "==13.7.1"
+pyjwt = "==2.8.0"
+cryptography = "==42.0.7"

 [requires]
 python_version = "3.12"
```
122 changes: 118 additions & 4 deletions opentrons-ai-server/Pipfile.lock

Some generated files are not rendered by default.

21 changes: 12 additions & 9 deletions opentrons-ai-server/README.md
```diff
@@ -18,7 +18,7 @@ The Opentrons AI application's server.
 1. This allows formatting of `.md` and `.json` files
 1. select the python version `pyenv local 3.12.3`
 1. This will create a `.python-version` file in this directory
-1. select the node version `nvs` currently 18.19\*
+1. select the node version with `nvs` or `nvm` currently 18.19\*
 1. Install pipenv and python dependencies `make setup`
```

## Install a dev dependency
```diff
@@ -31,11 +31,11 @@ The Opentrons AI application's server.

 ## Stack and structure

-### Lambda Pattern
+### Tools

 - [powertools](https://powertools.aws.dev/)
-- [reinvent talk for the pattern](https://www.youtube.com/watch?v=52W3Qyg242Y)
-- [for creating docs](https://www.ranthebuilder.cloud/post/serverless-open-api-documentation-with-aws-powertools)
+- [pytest]: https://docs.pytest.org/en/
+- [openai python api library]: https://pypi.org/project/openai/

 ### Lambda Code Organizations and Separation of Concerns
```

```diff
@@ -46,10 +46,13 @@ The Opentrons AI application's server.

 - integration
   - the integration with other services

-[pytest]: https://docs.pytest.org/en/
-[openai python api library]: https://pypi.org/project/openai/
+## Dev process

-## Deploy
+1. Make your changes
+1. Fix what can be fixed automatically, then lint and unit test like CI will: `make pre-commit`
+1. Ensure `make pre-commit` passes
+1. Deploy to sandbox and live test: `make build deploy test-live ENV=sandbox AWS_PROFILE=the-profile`

-1. build the package `make build`
-1. deploy the package `make deploy ENV=sandbox AWS_PROFILE=robotics_ai_sandbox`
+## TODO

+- llama-index is gigantic. Have to figure out how to get it in the lambda
```
55 changes: 55 additions & 0 deletions opentrons-ai-server/api/domain/openai_predict.py
@@ -0,0 +1,55 @@
```python
from typing import List

from openai import OpenAI
from openai.types.chat import ChatCompletion, ChatCompletionMessage, ChatCompletionMessageParam

from api.domain.prompts import system_notes
from api.settings import Settings, is_running_on_lambda


class OpenAIPredict:
    def __init__(self, settings: Settings) -> None:
        self.settings: Settings = settings
        self.client: OpenAI = OpenAI(api_key=settings.openai_api_key.get_secret_value())

    def predict(self, prompt: str, chat_completion_message_params: List[ChatCompletionMessageParam] | None = None) -> None | str:
        """The simplest chat completion from the OpenAI API"""
        top_p = 0.0
        messages: List[ChatCompletionMessageParam] = [{"role": "system", "content": system_notes}]
        if chat_completion_message_params:
            messages += chat_completion_message_params

        user_message: ChatCompletionMessageParam = {"role": "user", "content": f"QUESTION/DESCRIPTION: \n{prompt}\n\n"}
        messages.append(user_message)

        response: ChatCompletion = self.client.chat.completions.create(
            model=self.settings.OPENAI_MODEL_NAME,
            messages=messages,
            stream=False,
            temperature=0.005,
            max_tokens=4000,
            top_p=top_p,
            frequency_penalty=0,
            presence_penalty=0,
        )

        assistant_message: ChatCompletionMessage = response.choices[0].message
        return assistant_message.content


def main() -> None:
    """Intended for testing this class locally."""
    if is_running_on_lambda():
        return
    from rich import print
    from rich.prompt import Prompt

    settings = Settings.build()
    openai = OpenAIPredict(settings)
    prompt = Prompt.ask("Type a prompt to send to the OpenAI API:")
    completion = openai.predict(prompt)
    print(completion)


if __name__ == "__main__":
    main()
```
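
Note that `predict()` accepts prior turns via `chat_completion_message_params`, which are spliced in between the system notes and the new user message. A hypothetical caller could thread a conversation through it like this — the `build_history` helper is invented for illustration, not part of the PR:

```python
# Hypothetical helper showing how earlier chat turns could be shaped into
# the role/content dicts that predict() accepts. Invented for illustration.
from typing import List, Tuple


def build_history(turns: List[Tuple[str, str]]) -> List[dict]:
    """Convert (user, assistant) text pairs into chat-completion message params."""
    messages: List[dict] = []
    for user_text, assistant_text in turns:
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": assistant_text})
    return messages


# Usage sketch (requires a configured OpenAIPredict instance):
# openai.predict("Now add a wash step.", chat_completion_message_params=build_history(prior_turns))
```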
37 changes: 37 additions & 0 deletions opentrons-ai-server/api/domain/prompts.py
@@ -0,0 +1,37 @@
system_notes = """\
You are an expert at generating a protocol based on the Opentrons Python API v2.
You will be shown the user's question/description and information related to
the Opentrons Python API v2 documentation. Respond to the user's
question/description using only this information.
INSTRUCTIONS:
1) All types of protocols are based on apiLevel 2.15,
thus prepend the following code block defining
`metadata` and `requirements`:
```python
from opentrons import protocol_api
metadata = {
    'protocolName': '[protocol name by user]',
    'author': '[user name]',
    'description': "[what is the protocol about]"
}
requirements = {"robotType": "[Robot type]", "apiLevel": "2.15"}
```
2) See the transfer rules <<COMMON RULES for TRANSFER>> below.
3) Learn from the examples in <<EXAMPLES>>.
4) Inside the `run` function, according to the description, generate the following in order:
- modules
- adapter
- labware
- pipettes
Note that API names are sometimes very long, e.g.,
`Opentrons 96 Flat Bottom Adapter with NEST 96 Well Plate 200 uL Flat`.
5) If the pipette is multi-channel, e.g., P20 Multi-Channel Gen2, use the `columns` method.
\n\n\
"""
