
Commit ac5a5ac

Merge pull request MacPaw#160 from kalafus/MacPaw.completions_legacy_removed
remove legacy completions endpoint
2 parents: cb9f985 + e39955c

11 files changed (+1, -375 lines)
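Since this commit removes the `completions`/`completionsStream` API surface entirely, callers must migrate to the chat endpoint. A minimal migration sketch, assuming the `ChatQuery`/`chats` API that remains in the library; the exact `ChatQuery` initializer shape may differ between library versions:

```swift
// Before (removed by this commit):
// let query = CompletionsQuery(model: .textDavinci_003, prompt: "What is 42?")
// let result = try await openAI.completions(query: query)

// After: express the prompt as a single user message against /v1/chat/completions.
// The ChatQuery initializer below reflects the README-era shape and is an assumption.
let query = ChatQuery(
    model: .gpt3_5Turbo,
    messages: [.init(role: .user, content: "What is 42?")]
)
let result = try await openAI.chats(query: query)
```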

README.md (-112 lines)

@@ -15,8 +15,6 @@ This repository contains Swift community-maintained implementation over [OpenAI]
 - [Installation](#installation)
 - [Usage](#usage)
 - [Initialization](#initialization)
-- [Completions](#completions)
-- [Completions Streaming](#completions-streaming)
 - [Chats](#chats)
 - [Chats Streaming](#chats-streaming)
 - [Images](#images)
@@ -84,115 +82,6 @@ let openAI = OpenAI(configuration: configuration)
 
 Once you possess the token, and the instance is initialized, you are ready to make requests.
 
-### Completions
-
-Given a prompt, the model will return one or more predicted completions, and can also return the probabilities of alternative tokens at each position.
-
-**Request**
-
-```swift
-struct CompletionsQuery: Codable {
-    /// ID of the model to use.
-    public let model: Model
-    /// The prompt(s) to generate completions for, encoded as a string, array of strings, array of tokens, or array of token arrays.
-    public let prompt: String
-    /// What sampling temperature to use. Higher values means the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer.
-    public let temperature: Double?
-    /// The maximum number of tokens to generate in the completion.
-    public let maxTokens: Int?
-    /// An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
-    public let topP: Double?
-    /// Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
-    public let frequencyPenalty: Double?
-    /// Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
-    public let presencePenalty: Double?
-    /// Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.
-    public let stop: [String]?
-    /// A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.
-    public let user: String?
-}
-```
-
-**Response**
-
-```swift
-struct CompletionsResult: Codable, Equatable {
-    public struct Choice: Codable, Equatable {
-        public let text: String
-        public let index: Int
-    }
-
-    public let id: String
-    public let object: String
-    public let created: TimeInterval
-    public let model: Model
-    public let choices: [Choice]
-    public let usage: Usage
-}
-```
-**Example**
-
-```swift
-let query = CompletionsQuery(model: .textDavinci_003, prompt: "What is 42?", temperature: 0, maxTokens: 100, topP: 1, frequencyPenalty: 0, presencePenalty: 0, stop: ["\\n"])
-openAI.completions(query: query) { result in
-    //Handle result here
-}
-//or
-let result = try await openAI.completions(query: query)
-```
-
-```
-(lldb) po result
-▿ CompletionsResult
-  - id : "cmpl-6P9be2p2fQlwB7zTOl0NxCOetGmX3"
-  - object : "text_completion"
-  - created : 1671453146.0
-  - model : OpenAI.Model.textDavinci_003
-  ▿ choices : 1 element
-    ▿ 0 : Choice
-      - text : "\n\n42 is the answer to the ultimate question of life, the universe, and everything, according to the book The Hitchhiker\'s Guide to the Galaxy."
-      - index : 0
-```
-
-#### Completions Streaming
-
-Completions streaming is available by using the `completionsStream` function. Tokens will be sent one-by-one.
-
-**Closures**
-```swift
-openAI.completionsStream(query: query) { partialResult in
-    switch partialResult {
-    case .success(let result):
-        print(result.choices)
-    case .failure(let error):
-        //Handle chunk error here
-    }
-} completion: { error in
-    //Handle streaming error here
-}
-```
-
-**Combine**
-
-```swift
-openAI
-    .completionsStream(query: query)
-    .sink { completion in
-        //Handle completion result here
-    } receiveValue: { result in
-        //Handle chunk here
-    }.store(in: &cancellables)
-```
-
-**Structured concurrency**
-```swift
-for try await result in openAI.completionsStream(query: query) {
-    //Handle result here
-}
-```
-
-Review [Completions Documentation](https://platform.openai.com/docs/api-reference/completions) for more info.
-
 ### Chats
 
 Using the OpenAI Chat API, you can build your own applications with `gpt-3.5-turbo` to do things like:
@@ -933,7 +822,6 @@ Read more about Cosine Similarity [here](https://en.wikipedia.org/wiki/Cosine_si
 The library contains built-in [Combine](https://developer.apple.com/documentation/combine) extensions.
 
 ```swift
-func completions(query: CompletionsQuery) -> AnyPublisher<CompletionsResult, Error>
 func images(query: ImagesQuery) -> AnyPublisher<ImagesResult, Error>
 func embeddings(query: EmbeddingsQuery) -> AnyPublisher<EmbeddingsResult, Error>
 func chats(query: ChatQuery) -> AnyPublisher<ChatResult, Error>

Sources/OpenAI/OpenAI.swift (-9 lines)

@@ -65,14 +65,6 @@ final public class OpenAI: OpenAIProtocol {
         self.init(configuration: configuration, session: session as URLSessionProtocol)
     }
 
-    public func completions(query: CompletionsQuery, completion: @escaping (Result<CompletionsResult, Error>) -> Void) {
-        performRequest(request: JSONRequest<CompletionsResult>(body: query, url: buildURL(path: .completions)), completion: completion)
-    }
-
-    public func completionsStream(query: CompletionsQuery, onResult: @escaping (Result<CompletionsResult, Error>) -> Void, completion: ((Error?) -> Void)?) {
-        performStreamingRequest(request: JSONRequest<CompletionsResult>(body: query.makeStreamable(), url: buildURL(path: .completions)), onResult: onResult, completion: completion)
-    }
-
     public func images(query: ImagesQuery, completion: @escaping (Result<ImagesResult, Error>) -> Void) {
         performRequest(request: JSONRequest<ImagesResult>(body: query, url: buildURL(path: .images)), completion: completion)
     }
@@ -225,7 +217,6 @@ extension OpenAI {
 typealias APIPath = String
 extension APIPath {
 
-    static let completions = "/v1/completions"
     static let embeddings = "/v1/embeddings"
     static let chats = "/v1/chat/completions"
     static let models = "/v1/models"

Sources/OpenAI/Public/Models/CompletionsQuery.swift (-56 lines)

This file was deleted.

Sources/OpenAI/Public/Models/CompletionsResult.swift (-42 lines)

This file was deleted.

Sources/OpenAI/Public/Models/Models/Models.swift (-13 lines)

@@ -80,19 +80,6 @@ public extension Model {
 
     /// Snapshot of `gpt-3.5-turbo-16k` from June 13th 2023. Unlike `gpt-3.5-turbo-16k`, this model will not receive updates, and will be deprecated 3 months after a new version is released.
     static let gpt3_5Turbo_16k_0613 = "gpt-3.5-turbo-16k-0613"
-
-    // Completions
-
-    /// Can do any language task with better quality, longer output, and consistent instruction-following than the curie, babbage, or ada models. Also supports inserting completions within text.
-    static let textDavinci_003 = "text-davinci-003"
-    /// Similar capabilities to text-davinci-003 but trained with supervised fine-tuning instead of reinforcement learning.
-    static let textDavinci_002 = "text-davinci-002"
-    /// Very capable, faster and lower cost than Davinci.
-    static let textCurie = "text-curie-001"
-    /// Capable of straightforward tasks, very fast, and lower cost.
-    static let textBabbage = "text-babbage-001"
-    /// Capable of very simple tasks, usually the fastest model in the GPT-3 series, and lowest cost.
-    static let textAda = "text-ada-001"
 
     // Speech
 

Sources/OpenAI/Public/Protocols/OpenAIProtocol+Async.swift (-26 lines)

@@ -12,32 +12,6 @@ import Foundation
 @available(tvOS 13.0, *)
 @available(watchOS 6.0, *)
 public extension OpenAIProtocol {
-    func completions(
-        query: CompletionsQuery
-    ) async throws -> CompletionsResult {
-        try await withCheckedThrowingContinuation { continuation in
-            completions(query: query) { result in
-                switch result {
-                case let .success(success):
-                    return continuation.resume(returning: success)
-                case let .failure(failure):
-                    return continuation.resume(throwing: failure)
-                }
-            }
-        }
-    }
-
-    func completionsStream(
-        query: CompletionsQuery
-    ) -> AsyncThrowingStream<CompletionsResult, Error> {
-        return AsyncThrowingStream { continuation in
-            return completionsStream(query: query) { result in
-                continuation.yield(with: result)
-            } completion: { error in
-                continuation.finish(throwing: error)
-            }
-        }
-    }
 
     func images(
         query: ImagesQuery
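The deleted async wrapper above is an instance of the standard callback-to-async bridging pattern. A self-contained sketch of the same technique under hypothetical `fetchAnswer` names (not library code); note that `continuation.resume(with:)` accepts a `Result` directly, which collapses the explicit `switch` the removed code used:

```swift
import Foundation

// A callback-based API, standing in for the library's completion-handler methods.
func fetchAnswer(completion: @escaping (Result<Int, Error>) -> Void) {
    completion(.success(42))
}

// Bridge it to async/await: the continuation is resumed exactly once,
// either returning the success value or throwing the failure.
func fetchAnswer() async throws -> Int {
    try await withCheckedThrowingContinuation { continuation in
        fetchAnswer { result in
            continuation.resume(with: result)
        }
    }
}
```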

Sources/OpenAI/Public/Protocols/OpenAIProtocol+Combine.swift (-21 lines)

@@ -15,27 +15,6 @@ import Combine
 @available(watchOS 6.0, *)
 public extension OpenAIProtocol {
 
-    func completions(query: CompletionsQuery) -> AnyPublisher<CompletionsResult, Error> {
-        Future<CompletionsResult, Error> {
-            completions(query: query, completion: $0)
-        }
-        .eraseToAnyPublisher()
-    }
-
-    func completionsStream(query: CompletionsQuery) -> AnyPublisher<Result<CompletionsResult, Error>, Error> {
-        let progress = PassthroughSubject<Result<CompletionsResult, Error>, Error>()
-        completionsStream(query: query) { result in
-            progress.send(result)
-        } completion: { error in
-            if let error {
-                progress.send(completion: .failure(error))
-            } else {
-                progress.send(completion: .finished)
-            }
-        }
-        return progress.eraseToAnyPublisher()
-    }
-
     func images(query: ImagesQuery) -> AnyPublisher<ImagesResult, Error> {
         Future<ImagesResult, Error> {
             images(query: query, completion: $0)
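The removed Combine bridge follows a common streaming pattern: a `PassthroughSubject` relays each chunk as it arrives, then the publisher is terminated with either `.finished` or `.failure`. A standalone sketch of that pattern with hypothetical names (not the library's API):

```swift
import Combine
import Foundation

// Relay a sequence of "chunks" through a subject, mirroring how the deleted
// completionsStream bridged its callback pairs into a Combine publisher.
func makeChunkPublisher(chunks: [String]) -> AnyPublisher<String, Error> {
    let subject = PassthroughSubject<String, Error>()
    // Deliver asynchronously so subscribers attached after this call still
    // receive every value; a real bridge would send from its network callback.
    DispatchQueue.main.async {
        chunks.forEach { subject.send($0) }
        subject.send(completion: .finished)
    }
    return subject.eraseToAnyPublisher()
}
```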

Sources/OpenAI/Public/Protocols/OpenAIProtocol.swift (-35 lines)

@@ -9,41 +9,6 @@ import Foundation
 
 public protocol OpenAIProtocol {
 
-    /**
-    This function sends a completions query to the OpenAI API and retrieves generated completions in response. The Completions API enables you to build applications using OpenAI's language models, like the powerful GPT-3.
-
-    Example:
-    ```
-    let query = CompletionsQuery(model: .textDavinci_003, prompt: "What is 42?")
-    openAI.completions(query: query) { result in
-        //Handle result here
-    }
-    ```
-
-    - Parameters:
-      - query: A `CompletionsQuery` object containing the input parameters for the API request. This includes the prompt, model, temperature, max tokens, and other settings.
-      - completion: A closure which receives the result when the API request finishes. The closure's parameter, `Result<CompletionsResult, Error>`, will contain either the `CompletionsResult` object with the generated completions, or an error if the request failed.
-    **/
-    func completions(query: CompletionsQuery, completion: @escaping (Result<CompletionsResult, Error>) -> Void)
-
-    /**
-    This function sends a completions query to the OpenAI API and retrieves generated completions in response. The Completions API enables you to build applications using OpenAI's language models, like the powerful GPT-3. The result is returned by chunks.
-
-    Example:
-    ```
-    let query = CompletionsQuery(model: .textDavinci_003, prompt: "What is 42?")
-    openAI.completions(query: query) { result in
-        //Handle result here
-    }
-    ```
-
-    - Parameters:
-      - query: A `CompletionsQuery` object containing the input parameters for the API request. This includes the prompt, model, temperature, max tokens, and other settings.
-      - onResult: A closure which receives the result when the API request finishes. The closure's parameter, `Result<CompletionsResult, Error>`, will contain either the `CompletionsResult` object with the generated completions, or an error if the request failed.
-      - completion: A closure that is called when all chunks are delivered or an unrecoverable error occurred
-    **/
-    func completionsStream(query: CompletionsQuery, onResult: @escaping (Result<CompletionsResult, Error>) -> Void, completion: ((Error?) -> Void)?)
-
     /**
     This function sends an images query to the OpenAI API and retrieves generated images in response. The Images Generation API enables you to create various images or graphics using OpenAI's powerful deep learning models.