## README.md (-112 lines)

```diff
@@ -15,8 +15,6 @@ This repository contains Swift community-maintained implementation over [OpenAI]
 [Installation](#installation)
 [Usage](#usage)
 [Initialization](#initialization)
-[Completions](#completions)
-[Completions Streaming](#completions-streaming)
 [Chats](#chats)
 [Chats Streaming](#chats-streaming)
 [Images](#images)
```
@@ -84,115 +82,6 @@ let openAI = OpenAI(configuration: configuration)

Once you possess the token and the instance is initialized, you are ready to make requests.

Removed content:

### Completions

Given a prompt, the model will return one or more predicted completions, and can also return the probabilities of alternative tokens at each position.

**Request**

```swift
struct CompletionsQuery: Codable {
    /// ID of the model to use.
    public let model: Model
    /// The prompt(s) to generate completions for, encoded as a string, array of strings, array of tokens, or array of token arrays.
    public let prompt: String
    /// What sampling temperature to use. Higher values mean the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer.
    public let temperature: Double?
    /// The maximum number of tokens to generate in the completion.
    public let maxTokens: Int?
    /// An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
    public let topP: Double?
    /// Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
    public let frequencyPenalty: Double?
    /// Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
    public let presencePenalty: Double?
    /// Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.
    public let stop: [String]?
    /// A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.
    public let user: String?
}
```

**Response**

```swift
struct CompletionsResult: Codable, Equatable {
    public struct Choice: Codable, Equatable {
        public let text: String
        public let index: Int
    }

    public let id: String
    public let object: String
    public let created: TimeInterval
    public let model: Model
    public let choices: [Choice]
    public let usage: Usage
}
```

**Example**

```swift
let query = CompletionsQuery(model: .textDavinci_003, prompt: "What is 42?", temperature: 0, maxTokens: 100, topP: 1, frequencyPenalty: 0, presencePenalty: 0, stop: ["\\n"])
openAI.completions(query: query) { result in
    // Handle result here
}
// or
let result = try await openAI.completions(query: query)
```

```
(lldb) po result
▿ CompletionsResult
  - id : "cmpl-6P9be2p2fQlwB7zTOl0NxCOetGmX3"
  - object : "text_completion"
  - created : 1671453146.0
  - model : OpenAI.Model.textDavinci_003
  ▿ choices : 1 element
    ▿ 0 : Choice
      - text : "\n\n42 is the answer to the ultimate question of life, the universe, and everything, according to the book The Hitchhiker\'s Guide to the Galaxy."
      - index : 0
```

#### Completions Streaming

Completions streaming is available by using the `completionsStream` function. Tokens will be sent one by one.

**Closures**

```swift
openAI.completionsStream(query: query) { partialResult in
    switch partialResult {
    case .success(let result):
        print(result.choices)
    case .failure(let error):
        // Handle chunk error here
    }
} completion: { error in
    // Handle streaming error here
}
```

**Combine**

```swift
openAI
    .completionsStream(query: query)
    .sink { completion in
        // Handle completion result here
    } receiveValue: { result in
        // Handle chunk here
    }
    .store(in: &cancellables)
```

**Structured concurrency**

```swift
for try await result in openAI.completionsStream(query: query) {
    // Handle result here
}
```

Review the [Completions Documentation](https://platform.openai.com/docs/api-reference/completions) for more info.
### Chats

Using the OpenAI Chat API, you can build your own applications with `gpt-3.5-turbo` to do things like:
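Since the Completions calls removed above point users toward the Chat API, a minimal chat request is worth sketching. This is only an illustration: the exact `ChatQuery` and `Chat` initializers and the `chats(query:)` method name are assumptions based on the surrounding README, not confirmed by this diff.

```swift
// Sketch of a chat request replacing the removed completions example.
// ChatQuery/Chat shapes are assumed, not verified against this library version.
let query = ChatQuery(
    model: .gpt3_5Turbo,
    messages: [Chat(role: .user, content: "What is 42?")]
)

openAI.chats(query: query) { result in
    // result is assumed to be Result<ChatResult, Error>
    switch result {
    case .success(let chatResult):
        print(chatResult.choices.first?.message.content ?? "")
    case .failure(let error):
        print(error)
    }
}

// or, with structured concurrency:
let chatResult = try await openAI.chats(query: query)
```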
@@ -933,7 +822,6 @@ Read more about Cosine Similarity [here](https://en.wikipedia.org/wiki/Cosine_si

The library contains built-in [Combine](https://developer.apple.com/documentation/combine) extensions.
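As a hedged sketch of how such a Combine extension is typically consumed — the publisher-returning `chats(query:)` overload and its `AnyPublisher<ChatResult, Error>` type are assumptions, not confirmed by this diff:

```swift
import Combine

// Hypothetical usage of the library's Combine extensions.
var cancellables = Set<AnyCancellable>()

openAI
    .chats(query: query)  // assumed to return AnyPublisher<ChatResult, Error>
    .sink { completion in
        if case .failure(let error) = completion {
            print(error)  // terminal failure of the stream
        }
    } receiveValue: { chatResult in
        print(chatResult.choices)  // each emitted value carries the choices
    }
    .store(in: &cancellables)
```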
## Sources/OpenAI/Public/Models/Models/Models.swift (-13 lines)

@@ -80,19 +80,6 @@ public extension Model

Context kept:

```swift
/// Snapshot of `gpt-3.5-turbo-16k` from June 13th 2023. Unlike `gpt-3.5-turbo-16k`, this model will not receive updates, and will be deprecated 3 months after a new version is released.
```

Removed:

```swift
/// Can do any language task with better quality, longer output, and consistent instruction-following than the curie, babbage, or ada models. Also supports inserting completions within text.
static let textDavinci_003 = "text-davinci-003"
/// Similar capabilities to text-davinci-003 but trained with supervised fine-tuning instead of reinforcement learning.
static let textDavinci_002 = "text-davinci-002"
/// Very capable, faster and lower cost than Davinci.
static let textCurie = "text-curie-001"
/// Capable of straightforward tasks, very fast, and lower cost.
static let textBabbage = "text-babbage-001"
/// Capable of very simple tasks, usually the fastest model in the GPT-3 series, and lowest cost.
```
## Sources/OpenAI/Public/Protocols/OpenAIProtocol.swift (-35 lines)

@@ -9,41 +9,6 @@ import Foundation

Context kept:

```swift
public protocol OpenAIProtocol {
```

Removed:

````swift
/**
 This function sends a completions query to the OpenAI API and retrieves generated completions in response. The Completions API enables you to build applications using OpenAI's language models, like the powerful GPT-3.

 Example:
 ```
 let query = CompletionsQuery(model: .textDavinci_003, prompt: "What is 42?")
 openAI.completions(query: query) { result in
     // Handle result here
 }
 ```

 - Parameters:
   - query: A `CompletionsQuery` object containing the input parameters for the API request. This includes the prompt, model, temperature, max tokens, and other settings.
   - completion: A closure which receives the result when the API request finishes. The closure's parameter, `Result<CompletionsResult, Error>`, will contain either the `CompletionsResult` object with the generated completions, or an error if the request failed.
 */
````

````swift
/**
 This function sends a completions query to the OpenAI API and retrieves generated completions in response. The Completions API enables you to build applications using OpenAI's language models, like the powerful GPT-3. The result is returned by chunks.

 Example:
 ```
 let query = CompletionsQuery(model: .textDavinci_003, prompt: "What is 42?")
 openAI.completions(query: query) { result in
     // Handle result here
 }
 ```

 - Parameters:
   - query: A `CompletionsQuery` object containing the input parameters for the API request. This includes the prompt, model, temperature, max tokens, and other settings.
   - onResult: A closure which receives the result when the API request finishes. The closure's parameter, `Result<CompletionsResult, Error>`, will contain either the `CompletionsResult` object with the generated completions, or an error if the request failed.
   - completion: A closure that is called when all chunks are delivered or an unrecoverable error occurred.
 */
````

Context kept:

This function sends an images query to the OpenAI API and retrieves generated images in response. The Images Generation API enables you to create various images or graphics using OpenAI's powerful deep learning models.
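A minimal sketch of the images call that doc comment describes — the `ImagesQuery` initializer and `images(query:)` method shapes are assumptions based on the comment, not confirmed by this diff:

```swift
// Hypothetical image-generation request; parameter names are assumed.
let query = ImagesQuery(prompt: "White cat with heterochromia sitting on the kitchen table", n: 1, size: "1024x1024")

openAI.images(query: query) { result in
    // result is assumed to be Result<ImagesResult, Error>;
    // on success, each entry carries a URL for a generated image.
}
```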