OpenAI API available in the opentrons-ai-server (#15169)
# Overview

This PR introduces a structured pattern for handling requests and responses within the lambda. It demonstrates actual communication with OpenAI, starting from the lambda and extending back to the client. Additionally, the endpoint is secured, requiring a JWT bearer token for access.

---

**Details:**

- **Client for Testing:** Established a dedicated live client to facilitate testing.
- **Testing:** Developed tests that exercise the client and all endpoints.

## Status

- [x] Deployed to dev and live tested: `make build deploy test-live ENV=dev AWS_PROFILE=the-profile`
- [x] Deployed to sandbox and live tested: `make build deploy test-live ENV=sandbox AWS_PROFILE=the-profile`
- [x] `make live-client` with fake=false works on sandbox
- [x] `make live-client` with fake=false works on dev

> [!NOTE]
> To exercise the client, reach out to @y3rsh for the `test.env` you need.
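Since the endpoint requires a JWT bearer token, any client request must carry an `Authorization` header. The sketch below is illustrative only: the base URL, endpoint path, payload shape, and environment variable names are assumptions (the real values come from the `test.env` mentioned above), not code from this commit.

```python
import json
import os

# Hypothetical values -- the real base URL and token are environment-specific
# and are supplied via test.env, not hardcoded.
BASE_URL = os.environ.get("OT_AI_BASE_URL", "https://example.execute-api.us-east-1.amazonaws.com")
TOKEN = os.environ.get("OT_AI_BEARER_TOKEN", "fake-jwt-for-illustration")


def build_request(prompt: str, fake: bool = True) -> tuple[str, dict, str]:
    """Assemble the URL, headers, and JSON body for a secured endpoint call."""
    url = f"{BASE_URL}/chat/completion"  # hypothetical path
    headers = {
        # Requests without a valid JWT are rejected by the secured endpoint.
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"message": prompt, "fake": fake})
    return url, headers, body


url, headers, body = build_request("Write a simple transfer protocol")
```

The `fake` flag mirrors the `fake=false` switch used in the live-client checklist above, letting tests run without spending OpenAI tokens.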
Showing 30 changed files with 785 additions and 282 deletions.
New file (+55 lines):

```python
from typing import List

from openai import OpenAI
from openai.types.chat import ChatCompletion, ChatCompletionMessage, ChatCompletionMessageParam

from api.domain.prompts import system_notes
from api.settings import Settings, is_running_on_lambda


class OpenAIPredict:
    def __init__(self, settings: Settings) -> None:
        self.settings: Settings = settings
        self.client: OpenAI = OpenAI(api_key=settings.openai_api_key.get_secret_value())

    def predict(self, prompt: str, chat_completion_message_params: List[ChatCompletionMessageParam] | None = None) -> str | None:
        """The simplest chat completion from the OpenAI API."""
        top_p = 0.0
        messages: List[ChatCompletionMessageParam] = [{"role": "system", "content": system_notes}]
        if chat_completion_message_params:
            messages += chat_completion_message_params

        user_message: ChatCompletionMessageParam = {"role": "user", "content": f"QUESTION/DESCRIPTION: \n{prompt}\n\n"}
        messages.append(user_message)

        response: ChatCompletion = self.client.chat.completions.create(
            model=self.settings.OPENAI_MODEL_NAME,
            messages=messages,
            stream=False,
            temperature=0.005,
            max_tokens=4000,
            top_p=top_p,
            frequency_penalty=0,
            presence_penalty=0,
        )

        assistant_message: ChatCompletionMessage = response.choices[0].message
        return assistant_message.content


def main() -> None:
    """Intended for testing this class locally."""
    if is_running_on_lambda():
        return
    from rich import print
    from rich.prompt import Prompt

    settings = Settings.build()
    openai = OpenAIPredict(settings)
    prompt = Prompt.ask("Type a prompt to send to the OpenAI API:")
    completion = openai.predict(prompt)
    print(completion)


if __name__ == "__main__":
    main()
```
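The PR's `fake=false` flag implies a fake mode that lets endpoint tests run without an OpenAI key. A minimal sketch of what such a stand-in could look like (the class name and canned response are hypothetical, not taken from this commit):

```python
from typing import List, Optional


class FakeOpenAIPredict:
    """Hypothetical stand-in for OpenAIPredict: same predict() signature,
    but returns a canned completion instead of calling OpenAI."""

    canned_response = "# fake completion\nfrom opentrons import protocol_api"

    def predict(self, prompt: str, chat_completion_message_params: Optional[List[dict]] = None) -> Optional[str]:
        # Mirror the real method's Optional[str] return type: an empty
        # prompt yields no completion.
        if not prompt.strip():
            return None
        return self.canned_response


fake = FakeOpenAIPredict()
result = fake.predict("Make a serial dilution protocol")
```

Keeping the fake's signature identical to the real class means the lambda handler can swap one for the other based on the `fake` request flag without any branching in the calling code.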
New file (+37 lines):

````python
system_notes = """\
You are an expert at generating a protocol based on the Opentrons Python API v2.
You will be shown the user's question/description and information related to
the Opentrons Python API v2 documentation, and you answer the user's question/description
using only this information.
INSTRUCTIONS:
1) All types of protocols are based on apiLevel 2.15,
thus prepend the following code block of
`metadata` and `requirements`:
```python
from opentrons import protocol_api
metadata = {
    'protocolName': '[protocol name by user]',
    'author': '[user name]',
    'description': "[what the protocol is about]"
}
requirements = {"robotType": "[Robot type]", "apiLevel": "2.15"}
```
2) See the transfer rules <<COMMON RULES for TRANSFER>> below.
3) To learn from examples, see <<EXAMPLES>>.
4) Inside the `run` function, according to the description, generate the following in order:
   - modules
   - adapter
   - labware
   - pipettes
Note that sometimes API names are very long, e.g.,
`Opentrons 96 Flat Bottom Adapter with NEST 96 Well Plate 200 uL Flat`.
5) If the pipette is multi-channel, e.g., P20 Multi-Channel Gen2, please use the `columns` method.
\n\n\
"""
````