Commit cc1dbad

README update (#17)
* README update
* README update
* added link to LS 0.1.0 release note
1 parent 136a564 commit cc1dbad

File tree

1 file changed: +40 −38 lines changed


README.md

Lines changed: 40 additions & 38 deletions
@@ -4,19 +4,24 @@
 
 llama-stack-client-swift brings the inference and agents APIs of [Llama Stack](https://github.com/meta-llama/llama-stack) to iOS.
 
+**Update: January 27, 2025** The llama-stack-client-swift SDK version has been updated to 0.1.0, working with Llama Stack 0.1.0 ([release note](https://github.com/meta-llama/llama-stack/releases/tag/v0.1.0)).
+
 ## Features
 
 - **Inference & Agents:** Leverage remote Llama Stack distributions for inference, code execution, and safety.
 - **Custom Tool Calling:** Provide Swift tools that Llama agents can understand and use.
 
-## Quick Demo
-See [here](https://github.com/meta-llama/llama-stack-apps/tree/ios_demo/examples/ios_quick_demo/iOSQuickDemo) for a complete iOS demo ([video](https://drive.google.com/file/d/1HnME3VmsYlyeFgsIOMlxZy5c8S2xP4r4/view?usp=sharing)) using a remote Llama Stack server for inferencing.
+## iOS Demos
+See [here](https://github.com/meta-llama/llama-stack-apps/tree/main/examples/ios_quick_demo) for a quick iOS demo ([video](https://drive.google.com/file/d/1HnME3VmsYlyeFgsIOMlxZy5c8S2xP4r4/view?usp=sharing)) using a remote Llama Stack server for inferencing.
+
+For a more advanced demo using the Llama Stack Agent API and custom tool calling feature, see the [iOS Calendar Assistant demo](https://github.com/meta-llama/llama-stack-apps/tree/main/examples/ios_calendar_assistant).
+
 
 ## Installation
 
 1. Click "Xcode > File > Add Package Dependencies...".
 
-2. Add this repo URL at the top right: `https://github.com/meta-llama/llama-stack-client-swift`.
+2. Add this repo URL at the top right: `https://github.com/meta-llama/llama-stack-client-swift` and 0.1.0 in the Dependency Rule, then click Add Package.
 
 3. Select and add `llama-stack-client-swift` to your app target.
 
@@ -27,68 +32,65 @@ See [here](https://github.com/meta-llama/llama-stack-apps/tree/ios_demo/examples
 ```
 conda create -n llama-stack python=3.10
 conda activate llama-stack
-pip install llama-stack=0.1.0
+pip install --no-cache llama-stack==0.1.0 llama-models==0.1.0 llama-stack-client==0.1.0
 ```
+
 Then, either:
 ```
-llama stack build --template fireworks --image-type conda
+PYPI_VERSION=0.1.0 llama stack build --template fireworks --image-type conda
 export FIREWORKS_API_KEY="<your_fireworks_api_key>"
 llama stack run fireworks
 ```
 or
 ```
-llama stack build --template together --image-type conda
+PYPI_VERSION=0.1.0 llama stack build --template together --image-type conda
 export TOGETHER_API_KEY="<your_together_api_key>"
 llama stack run together
 ```
 
 The default port is 5000 for `llama stack run` and you can specify a different port by adding `--port <your_port>` to the end of `llama stack run fireworks|together`.
 
-6. Replace the `RemoteInference` url below with the your host IP and port:
+6. Replace the `RemoteInference` url string below with the host IP and port of the remote Llama Stack distro in Step 5:
 
 ```swift
 import LlamaStackClient
 
 let inference = RemoteInference(url: URL(string: "http://127.0.0.1:5000")!)
+```
+Below is an example code snippet to use the Llama Stack inference API. See the iOS Demos above for complete code.
 
-do {
-  for await chunk in try await inference.chatCompletion(
+```swift
+for await chunk in try await inference.chatCompletion(
   request:
     Components.Schemas.ChatCompletionRequest(
       messages: [
-        .UserMessage(Components.Schemas.UserMessage(
-          content: .case1(userInput),
-          role: .user)
+        .user(
+          Components.Schemas.UserMessage(
+            content:
+              .InterleavedContentItem(
+                .text(Components.Schemas.TextContentItem(
+                  text: userInput,
+                  _type: .text
+                )
+              )
+            ),
+            role: .user
         )
-      ], model_id: "meta-llama/Llama-3.1-8B-Instruct",
+        )
+      ],
+      model_id: "meta-llama/Llama-3.1-8B-Instruct",
       stream: true)
 ) {
   switch (chunk.event.delta) {
-  case .TextDelta(let s):
-    print(s.text)
-    break
-  case .ImageDelta(let s):
-    print("> \(s)")
-    break
-  case .ToolCallDelta(let s):
-    print("> \(s)")
-    break
+  case .text(let s):
+    message += s.text
+    break
+  case .image(let s):
+    print("> \(s)")
+    break
+  case .tool_call(let s):
+    print("> \(s)")
+    break
   }
 }
-}
-catch {
-  print("Error: \(error)")
-}
 ```
-
-### Syncing the API spec
-
-Llama Stack `Types.swift` file is generated from the Llama Stack [API spec](https://github.com/meta-llama/llama-stack/blob/main/docs/resources/llama-stack-spec.yaml) in the main [Llama Stack repo](https://github.com/meta-llama/llama-stack).
-
-```
-scripts/generate_swift_types.sh
-```
-
-By default, this script will download the latest API spec from the main branch of the Llama Stack repo. You can set `LLAMA_STACK_DIR` to a local Llama Stack repo to use a local copy of the API spec instead.
-
-This will update the `openapi.yaml` file in the Llama Stack Swift SDK source folder `Sources/LlamaStackClient`.
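The streaming snippet added by this commit accumulates deltas into a `message` variable but, unlike the version it replaces, omits the surrounding declarations and `do/catch` error handling. A minimal complete call site, assembled only from what the diff shows (the `message` declaration and the wrapper function are assumptions), might look roughly like this sketch:

```swift
// Sketch: the 0.1.0 chatCompletion snippet from the diff, wired into a
// complete async function. The do/catch (dropped from the README snippet
// in this commit) is restored; `userInput` is caller-provided text.
import Foundation
import LlamaStackClient

func runInference(userInput: String) async {
    let inference = RemoteInference(url: URL(string: "http://127.0.0.1:5000")!)
    var message = ""
    do {
        for await chunk in try await inference.chatCompletion(
            request: Components.Schemas.ChatCompletionRequest(
                messages: [
                    .user(
                        Components.Schemas.UserMessage(
                            content: .InterleavedContentItem(
                                .text(Components.Schemas.TextContentItem(
                                    text: userInput,
                                    _type: .text
                                ))
                            ),
                            role: .user
                        )
                    )
                ],
                model_id: "meta-llama/Llama-3.1-8B-Instruct",
                stream: true)
        ) {
            switch chunk.event.delta {
            case .text(let s):
                message += s.text          // accumulate streamed text deltas
            case .image(let s):
                print("> \(s)")
            case .tool_call(let s):
                print("> \(s)")
            }
        }
        print(message)                     // full response once the stream ends
    } catch {
        print("Error: \(error)")
    }
}
```

Running this requires a Llama Stack distro reachable at the `RemoteInference` URL, as set up in the Installation steps above.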

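Installation steps 1–3 in the diff use the Xcode UI. For projects managed through a `Package.swift` manifest instead, the same 0.1.0 dependency pin could be expressed as follows (a sketch; the `MyApp` target name is hypothetical, and the `LlamaStackClient` product name is assumed to match the library imported in the README snippet):

```swift
// swift-tools-version:5.9
// Sketch of an SPM manifest equivalent to Installation steps 1–3 above.
import PackageDescription

let package = Package(
    name: "MyApp",                       // hypothetical app package
    platforms: [.iOS(.v15)],             // assumed minimum platform
    dependencies: [
        // Same repo URL and 0.1.0 Dependency Rule as in step 2.
        .package(url: "https://github.com/meta-llama/llama-stack-client-swift",
                 from: "0.1.0")
    ],
    targets: [
        .target(
            name: "MyApp",
            dependencies: [
                .product(name: "LlamaStackClient",
                         package: "llama-stack-client-swift")
            ]
        )
    ]
)
```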