
Commit 19a9ae3

diogoncalves, MiNeves00, actions-user, and brunoalho99 authored
LLMstudio Version 1.0.6: langraph integration (#217)
* Feature/langraph integration (#215)
* chore: Provider Unit Tests (#173)
* chore: added unit tests for core provider. small bugfix on calculate_metrics of provider
* added unit tests and docstring for join chunks
* added unit tests and docstrings for calculate_cost on provider
* added unit tests and docstrings for input_to_string on provider
* added unit tests and docstrings for chat and achat
* added unit tests and docstrings for chat and achat
* chore: cleaned provider unit tests
* chore: separated provider tests into different files; fixed some of its tests
* chore: linted code
* chore: deleted some comments
* chore: linted
* chore: Added Azure Provider Unit Tests (#176)
* chore: added unit tests for azure provider
* chore: added more unit tests and docstrings on azure, removed redundant comments
* chore: added unit tests for generate client on Azure Provider
* chore: separated azure unit tests into separate files; fixed some of its tests
* chore: linted code
* chore: new line
  Signed-off-by: Diogo Goncalves <[email protected]>

---------

Signed-off-by: Diogo Goncalves <[email protected]>
Co-authored-by: Diogo Goncalves <[email protected]>

* [fix] bump prerelease version in pyproject.toml
* chore: rename action
* feat: added action to run tests on PR
* chore: comments
* fix: fix azure config tests
* chore: style format
* fix: tests workflow
* Feature/prompt management (#200)
* [feat] prompt management
* [feat] testing
* [feat] only one active prompt
* [fix] bump prerelease version in pyproject.toml
* [bugfix] return empty prompt
* [fix] bump prerelease version in pyproject.toml
* Update CONTRIBUTING.md
  Signed-off-by: Diogo Goncalves <[email protected]>
* Feat/ Use Openai Usage to calculate Cache and Reasoning Costs (#199)
* feat: collects usage from stream and non-stream openai calls
* chore: refactored provider to have a Metrics obj
* feat: calculate_metrics now takes into account cached & reasoning tokens. Prices of openai models updated
* fix: added caching tokens to model config obj
* chore: added integration test for cache and reasoning
* chore: added integration test for usage retrieval when max tokens reached
* chore: uncommented runs from examples/core.py
* fix: bugfix regarding usage on function calling; added a test for this
* chore: merged with develop
* chore: extracted provider data structures to another file
* chore: renamed some provider methods to private; split integration tests into 2 files
* chore: deletion of a todo comment
* chore: update poetry.lock
* chore: specify python versions
* chore: moving langchain integration tests to sdk
* chore: format
* feat: added support for o3-mini and updated o1-mini prices. Also updated integration tests to support o3 (#202)
* chore: removed duplicated code; removed duplicated integration tests
* chore: updated github actions to run integration tests
* chore: fixing github actions
* chore: fixing github actions again
* chore: fixing github actions again-x2
* chore: fixing github actions again-x2
* chore: added cache of dependencies to integration-tests in github action
* chore: updated integration-tests action to inject github secrets into env
* Feat/bedrock support for Nova models through the ConverseAPI (#207)
* feat: added support for bedrock nova models
* feat: tokens are now read from usage if available to ensure accuracy
* chore: removed duplicated integration tests folder in wrong place
* feat: refactored bedrock provider into a single file instead of a folder
* chore: renamed bedrock to bedrock-converse in examples/core.py
* chore: renamed bedrock in config.yaml
* [fix] bump prerelease version in pyproject.toml
* [fix] bump prerelease version in pyproject.toml
* [fix] bump prerelease version in pyproject.toml
* Update pyproject.toml: updated llmstudio-tracker version
  Signed-off-by: Miguel Neves <[email protected]>
* [fix] bump prerelease version in pyproject.toml
* chore: updated llmstudio sdk poetry.lock
* Feat/converse support images (#211)
* feat: added converse-api support for images in input; started making an integration test for this
* chore: added integration test for converse image sending
* chore: send images integration test now also tests for openai
* chore: added async testing to the send_imgs integration test
* chore: updated examples core.py to also have send images
* feat: bedrock image input now follows the same contract as openai
* chore: ChatCompletionLLMstudio print now hides large image bytes for readability
* chore: fixes in the pretty print of ChatCompletionLLMstudio
* chore: small fix in examples/core.py
* fix: test_send_imgs had a bug reading env
* chore: made clean_print optional on chatcompletions; image from url is directly converted to bytes
* [fix] bump prerelease version in pyproject.toml
* [fix] bump prerelease version in pyproject.toml
* [fix] bump prerelease version in pyproject.toml
* feat: adapt langchain integration
* chore: update lock
* chore: make format

---------

Signed-off-by: Diogo Goncalves <[email protected]>
Signed-off-by: Miguel Neves <[email protected]>
Co-authored-by: Miguel Neves <[email protected]>
Co-authored-by: GitHub Actions <[email protected]>
Co-authored-by: brunoalho99 <[email protected]>
Co-authored-by: brunoalho <[email protected]>
Co-authored-by: Miguel Neves <[email protected]>

* [fix] bump prerelease version in pyproject.toml
* [fix] bump prerelease version in pyproject.toml
* Update core.py
  Signed-off-by: Diogo Goncalves <[email protected]>
* Update core.py
  Signed-off-by: Diogo Goncalves <[email protected]>
* Update core.py
  Signed-off-by: Diogo Goncalves <[email protected]>
* Update core.py
  Signed-off-by: Diogo Goncalves <[email protected]>

---------

Signed-off-by: Diogo Goncalves <[email protected]>
Signed-off-by: Miguel Neves <[email protected]>
Co-authored-by: Miguel Neves <[email protected]>
Co-authored-by: GitHub Actions <[email protected]>
Co-authored-by: brunoalho99 <[email protected]>
Co-authored-by: brunoalho <[email protected]>
Co-authored-by: Miguel Neves <[email protected]>
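Of the changes listed above, PR #199 is the one that alters how costs are computed: cached and reasoning token counts are now read from the provider's usage payload instead of being lumped in with regular tokens. As a rough sketch of that idea only — this is not LLMstudio's actual calculate_metrics, and the prices below are invented — a usage-based cost function over an OpenAI-shaped usage dict might look like this:

# Rough sketch: usage-based cost with cached input tokens priced at a
# discount. Prices are made up; real prices live in the model config.
# Field names follow OpenAI's usage payload (prompt_tokens_details /
# completion_tokens_details), which the PR says the provider now reads.
from dataclasses import dataclass

@dataclass
class ModelPrices:
    input_token: float          # $ per uncached input token
    cached_input_token: float   # $ per cached input token (discounted)
    output_token: float         # $ per output token; for o-series models,
                                # reasoning tokens are billed at this rate

def calculate_cost(usage: dict, prices: ModelPrices) -> float:
    cached = usage.get("prompt_tokens_details", {}).get("cached_tokens", 0)
    uncached_input = usage["prompt_tokens"] - cached
    # completion_tokens already includes reasoning tokens, so reasoning
    # needs no extra term as long as it shares the output price.
    return (
        uncached_input * prices.input_token
        + cached * prices.cached_input_token
        + usage["completion_tokens"] * prices.output_token
    )

# Example with made-up o3-mini-like prices and a partially cached prompt:
prices = ModelPrices(input_token=1.1e-06, cached_input_token=5.5e-07, output_token=4.4e-06)
usage = {
    "prompt_tokens": 1200,
    "completion_tokens": 350,  # includes reasoning tokens
    "prompt_tokens_details": {"cached_tokens": 800},
    "completion_tokens_details": {"reasoning_tokens": 120},
}
print(f"cost: ${calculate_cost(usage, prices):.6f}")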
1 parent f9539b7 commit 19a9ae3

File tree

5 files changed: +615 -70 lines changed

examples/core.py (+3 -12)
@@ -12,7 +12,7 @@ def run_provider(provider, model, api_key=None, **kwargs):
     print(f"\n\n###RUNNING for <{provider}>, <{model}> ###")
     llm = LLMCore(provider=provider, api_key=api_key, **kwargs)
 
-    latencies = {}
+    latencies = {}
     print("\nAsync Non-Stream")
     chat_request = build_chat_request(model, chat_input="Hello, my name is Jason", is_stream=False)
     string = """
@@ -51,8 +51,7 @@ def run_provider(provider, model, api_key=None, **kwargs):
 
     response_async = asyncio.run(llm.achat(**chat_request))
     pprint(response_async)
-    latencies["async (ms)"]= response_async.metrics["latency_s"]*1000
-
+    latencies["async (ms)"]= response_async.metrics["latency_s"]*1000
 
     print("\nAsync Stream")
     async def async_stream():
@@ -81,7 +80,6 @@ async def async_stream():
 
     print("\nSync Stream")
     chat_request = build_chat_request(model, chat_input="Hello, my name is Mary", is_stream=True)
-
 
     response_sync_stream = llm.chat(**chat_request)
     for p in response_sync_stream:
@@ -135,19 +133,13 @@ def multiple_provider_runs(provider:str, model:str, num_runs:int, api_key:str, *
     for _ in range(num_runs):
         latencies = run_provider(provider=provider, model=model, api_key=api_key, **kwargs)
         pprint(latencies)
-
-
+
 def run_chat_all_providers():
     # OpenAI
     multiple_provider_runs(provider="openai", model="gpt-4o-mini", api_key=os.environ["OPENAI_API_KEY"], num_runs=1)
     multiple_provider_runs(provider="openai", model="o3-mini", api_key=os.environ["OPENAI_API_KEY"], num_runs=1)
     #multiple_provider_runs(provider="openai", model="o1-preview", api_key=os.environ["OPENAI_API_KEY"], num_runs=1)
 
-    # Azure
-    multiple_provider_runs(provider="azure", model="gpt-4o-mini", num_runs=1, api_key=os.environ["AZURE_API_KEY"], api_version=os.environ["AZURE_API_VERSION"], api_endpoint=os.environ["AZURE_API_ENDPOINT"])
-    #multiple_provider_runs(provider="azure", model="gpt-4o", num_runs=1, api_key=os.environ["AZURE_API_KEY"], api_version=os.environ["AZURE_API_VERSION"], api_endpoint=os.environ["AZURE_API_ENDPOINT"])
-    #multiple_provider_runs(provider="azure", model="o1-mini", num_runs=1, api_key=os.environ["AZURE_API_KEY"], api_version=os.environ["AZURE_API_VERSION"], api_endpoint=os.environ["AZURE_API_ENDPOINT"])
-    #multiple_provider_runs(provider="azure", model="o1-preview", num_runs=1, api_key=os.environ["AZURE_API_KEY"], api_version=os.environ["AZURE_API_VERSION"], api_endpoint=os.environ["AZURE_API_ENDPOINT"])
 
     # Azure
     multiple_provider_runs(provider="azure", model="gpt-4o-mini", num_runs=1, api_key=os.environ["AZURE_API_KEY"], api_version=os.environ["AZURE_API_VERSION"], api_endpoint=os.environ["AZURE_API_ENDPOINT"])
@@ -156,7 +148,6 @@ def run_chat_all_providers():
     #multiple_provider_runs(provider="azure", model="o1-preview", num_runs=1, api_key=os.environ["AZURE_API_KEY"], api_version=os.environ["AZURE_API_VERSION"], api_endpoint=os.environ["AZURE_API_ENDPOINT"])
 
 
-
     #multiple_provider_runs(provider="anthropic", model="claude-3-opus-20240229", num_runs=1, api_key=os.environ["ANTHROPIC_API_KEY"])
 
     #multiple_provider_runs(provider="azure", model="o1-preview", num_runs=1, api_key=os.environ["AZURE_API_KEY"], api_version=os.environ["AZURE_API_VERSION"], api_endpoint=os.environ["AZURE_API_ENDPOINT"])
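Whitespace churn aside, the substance of this diff is the latency-reporting loop: run_provider fills a latencies dict from response.metrics["latency_s"] and returns it, and multiple_provider_runs pretty-prints one dict per run. Below is a self-contained sketch of that pattern; FakeLLM is an invented stand-in so the snippet runs without llmstudio, whereas the real example constructs LLMCore(provider=..., api_key=...) and builds requests with build_chat_request:

# Self-contained sketch of the latency-collection pattern this diff settles on.
import asyncio
import time
from pprint import pprint

class FakeLLM:
    """Invented stand-in for LLMCore: returns an object with a metrics dict."""
    async def achat(self, chat_input, is_stream=False):
        start = time.perf_counter()
        await asyncio.sleep(0.01)  # stand-in for the real provider call
        latency = time.perf_counter() - start
        # Real responses expose metrics, including latency in seconds.
        return type("Response", (), {"metrics": {"latency_s": latency}})()

def run_provider(provider, model, api_key=None, **kwargs):
    llm = FakeLLM()  # real code: LLMCore(provider=provider, api_key=api_key, **kwargs)
    latencies = {}
    response_async = asyncio.run(llm.achat(chat_input="Hello", is_stream=False))
    # metrics reports seconds; the example stores milliseconds.
    latencies["async (ms)"] = response_async.metrics["latency_s"] * 1000
    return latencies

def multiple_provider_runs(provider, model, num_runs, api_key=None, **kwargs):
    for _ in range(num_runs):
        pprint(run_provider(provider=provider, model=model, api_key=api_key, **kwargs))

multiple_provider_runs(provider="openai", model="gpt-4o-mini", num_runs=2)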
