 /// Ask the API to complete the prompt(s) using the specified request. This is non-streaming, so it will wait until the API returns the full result.
 /// </summary>
 /// <param name="request">The request to send to the API. This does not fall back to default values specified in <see cref="DefaultCompletionRequestArgs"/>.</param>
-/// <returns>Asynchronously returns the completion result. Look in its <see cref="CompletionResult.Choices"/> property for the completions.</returns>
+/// <returns>Asynchronously returns the completion result. Look in its <see cref="CompletionResult.Completions"/> property for the completions.</returns>
@@ -83,7 +83,7 @@ public async Task<CompletionResult> CreateCompletionAsync(CompletionRequest requ
 /// </summary>
 /// <param name="request">The request to send to the API. This does not fall back to default values specified in <see cref="DefaultCompletionRequestArgs"/>.</param>
 /// <param name="numOutputs">Overrides <see cref="CompletionRequest.NumChoicesPerPrompt"/> as a convenience.</param>
-/// <returns>Asynchronously returns the completion result. Look in its <see cref="CompletionResult.Choices"/> property for the completions, which should have a length equal to <paramref name="numOutputs"/>.</returns>
+/// <returns>Asynchronously returns the completion result. Look in its <see cref="CompletionResult.Completions"/> property for the completions, which should have a length equal to <paramref name="numOutputs"/>.</returns>
@@ -94,17 +94,17 @@ public Task<CompletionResult> CreateCompletionsAsync(CompletionRequest request,
 /// Ask the API to complete the prompt(s) using the specified parameters. This is non-streaming, so it will wait until the API returns the full result. Any non-specified parameters will fall back to default values specified in <see cref="DefaultCompletionRequestArgs"/> if present.
 /// </summary>
 /// <param name="prompt">The prompt to generate from</param>
-/// <param name="model">The model to use. You can use <see cref="ModelsEndpoint.GetModelsAsync"/> to see all of your available models, or use a standard model like <see cref="Model.DavinciText"/>.</param>
+/// <param name="model">The model to use. You can use <see cref="ModelsEndpoint.GetModelsAsync()"/> to see all of your available models, or use a standard model like <see cref="Model.DavinciText"/>.</param>
 /// <param name="max_tokens">How many tokens to complete to. Can return fewer if a stop sequence is hit.</param>
 /// <param name="temperature">What sampling temperature to use. Higher values means the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer. It is generally recommend to use this or <paramref name="top_p"/> but not both.</param>
 /// <param name="top_p">An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. It is generally recommend to use this or <paramref name="temperature"/> but not both.</param>
 /// <param name="numOutputs">How many different choices to request for each prompt.</param>
 /// <param name="presencePenalty">The scale of the penalty applied if a token is already present at all. Should generally be between 0 and 1, although negative numbers are allowed to encourage token reuse.</param>
 /// <param name="frequencyPenalty">The scale of the penalty for how often a token is used. Should generally be between 0 and 1, although negative numbers are allowed to encourage token reuse.</param>
-/// <param name="logProbs">Include the log probabilities on the logprobs most likely tokens, which can be found in <see cref="CompletionResult.Choices"/> -> <see cref="Choice.Logprobs"/>. So for example, if logprobs is 10, the API will return a list of the 10 most likely tokens. If logprobs is supplied, the API will always return the logprob of the sampled token, so there may be up to logprobs+1 elements in the response.</param>
+/// <param name="logProbs">Include the log probabilities on the logprobs most likely tokens, which can be found in <see cref="CompletionResult.Completions"/> -> <see cref="Choice.Logprobs"/>. So for example, if logprobs is 10, the API will return a list of the 10 most likely tokens. If logprobs is supplied, the API will always return the logprob of the sampled token, so there may be up to logprobs+1 elements in the response.</param>
 /// <param name="echo">Echo back the prompt in addition to the completion.</param>
 /// <param name="stopSequences">One or more sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.</param>
-/// <returns>Asynchronously returns the completion result. Look in its <see cref="CompletionResult.Choices"/> property for the completions.</returns>
+/// <returns>Asynchronously returns the completion result. Look in its <see cref="CompletionResult.Completions"/> property for the completions.</returns>
@@ -232,11 +232,11 @@ public async Task StreamCompletionAsync(CompletionRequest request, Action<Comple
 }
 
 /// <summary>
-/// Ask the API to complete the prompt(s) using the specified request, and stream the results to the <paramref name="resultHandler"/> as they come in.
+/// Ask the API to complete the prompt(s) using the specified request, and stream the results as they come in.
 /// If you are not using C# 8 supporting async enumerables or if you are using the .NET Framework, you may need to use <see cref="StreamCompletionAsync(CompletionRequest, Action{CompletionResult})"/> instead.
 /// </summary>
 /// <param name="request">The request to send to the API. This does not fall back to default values specified in <see cref="DefaultCompletionRequestArgs"/>.</param>
-/// <returns>An async enumerable with each of the results as they come in. See <seealso cref="https://docs.microsoft.com/en-us/dotnet/csharp/whats-new/csharp-8#asynchronous-streams"/> for more details on how to consume an async enumerable.</returns>
+/// <returns>An async enumerable with each of the results as they come in. See <see href="https://docs.microsoft.com/en-us/dotnet/csharp/whats-new/csharp-8#asynchronous-streams"/> for more details on how to consume an async enumerable.</returns>
@@ -293,14 +293,14 @@ public async IAsyncEnumerable<CompletionResult> StreamCompletionEnumerableAsync(
 /// If you are not using C# 8 supporting async enumerables or if you are using the .NET Framework, you may need to use <see cref="StreamCompletionAsync(CompletionRequest, Action{CompletionResult})"/> instead.
 /// </summary>
 /// <param name="prompt">The prompt to generate from</param>
-/// <param name="model">The model to use. You can use <see cref="ModelsEndpoint.GetModelsAsync"/> to see all of your available models, or use a standard model like <see cref="Model.DavinciText"/>.</param>
+/// <param name="model">The model to use. You can use <see cref="ModelsEndpoint.GetModelsAsync()"/> to see all of your available models, or use a standard model like <see cref="Model.DavinciText"/>.</param>
 /// <param name="max_tokens">How many tokens to complete to. Can return fewer if a stop sequence is hit.</param>
 /// <param name="temperature">What sampling temperature to use. Higher values means the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer. It is generally recommend to use this or <paramref name="top_p"/> but not both.</param>
 /// <param name="top_p">An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. It is generally recommend to use this or <paramref name="temperature"/> but not both.</param>
 /// <param name="numOutputs">How many different choices to request for each prompt.</param>
 /// <param name="presencePenalty">The scale of the penalty applied if a token is already present at all. Should generally be between 0 and 1, although negative numbers are allowed to encourage token reuse.</param>
 /// <param name="frequencyPenalty">The scale of the penalty for how often a token is used. Should generally be between 0 and 1, although negative numbers are allowed to encourage token reuse.</param>
-/// <param name="logProbs">Include the log probabilities on the logprobs most likely tokens, which can be found in <see cref="CompletionResult.Choices"/> -> <see cref="Choice.Logprobs"/>. So for example, if logprobs is 10, the API will return a list of the 10 most likely tokens. If logprobs is supplied, the API will always return the logprob of the sampled token, so there may be up to logprobs+1 elements in the response.</param>
+/// <param name="logProbs">Include the log probabilities on the logprobs most likely tokens, which can be found in <see cref="CompletionResult.Completions"/> -> <see cref="Choice.Logprobs"/>. So for example, if logprobs is 10, the API will return a list of the 10 most likely tokens. If logprobs is supplied, the API will always return the logprob of the sampled token, so there may be up to logprobs+1 elements in the response.</param>
 /// <param name="echo">Echo back the prompt in addition to the completion.</param>
 /// <param name="stopSequences">One or more sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.</param>
 /// <returns>An async enumerable with each of the results as they come in. See <see href="https://docs.microsoft.com/en-us/dotnet/csharp/whats-new/csharp-8#asynchronous-streams">the C# docs</see> for more details on how to consume an async enumerable.</returns>

OpenAI_API/Completions/CompletionRequest.cs (+4 -4)
@@ -15,7 +15,7 @@ namespace OpenAI_API
 public class CompletionRequest
 {
 /// <summary>
-/// ID of the model to use. You can use <see cref="ModelsEndpoint.GetModelsAsync"/> to see all of your available models, or use a standard model like <see cref="Model.DavinciText"/>.
+/// ID of the model to use. You can use <see cref="ModelsEndpoint.GetModelsAsync()"/> to see all of your available models, or use a standard model like <see cref="Model.DavinciText"/>.
 /// </summary>
 [JsonProperty("model")]
 public string Model { get; set; }
@@ -103,7 +103,7 @@ public string Prompt
 public bool Stream { get; internal set; } = false;
 
 /// <summary>
-/// Include the log probabilities on the logprobs most likely tokens, which can be found in <see cref="CompletionResult.Choices"/> -> <see cref="Choice.Logprobs"/>. So for example, if logprobs is 5, the API will return a list of the 5 most likely tokens. If logprobs is supplied, the API will always return the logprob of the sampled token, so there may be up to logprobs+1 elements in the response. The maximum value for logprobs is 5.
+/// Include the log probabilities on the logprobs most likely tokens, which can be found in <see cref="CompletionResult.Completions"/> -> <see cref="Choice.Logprobs"/>. So for example, if logprobs is 5, the API will return a list of the 5 most likely tokens. If logprobs is supplied, the API will always return the logprob of the sampled token, so there may be up to logprobs+1 elements in the response. The maximum value for logprobs is 5.
 /// </summary>
 [JsonProperty("logprobs")]
 public int? Logprobs { get; set; }
@@ -209,15 +209,15 @@ public CompletionRequest(params string[] prompts)
 /// Creates a new <see cref="CompletionRequest"/> with the specified parameters
 /// </summary>
 /// <param name="prompt">The prompt to generate from</param>
-/// <param name="model">The model to use. You can use <see cref="ModelsEndpoint.GetModelsAsync"/> to see all of your available models, or use a standard model like <see cref="Model.DavinciText"/>.</param>
+/// <param name="model">The model to use. You can use <see cref="ModelsEndpoint.GetModelsAsync()"/> to see all of your available models, or use a standard model like <see cref="Model.DavinciText"/>.</param>
 /// <param name="max_tokens">How many tokens to complete to. Can return fewer if a stop sequence is hit.</param>
 /// <param name="temperature">What sampling temperature to use. Higher values means the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer. It is generally recommend to use this or <paramref name="top_p"/> but not both.</param>
 /// <param name="suffix">The suffix that comes after a completion of inserted text</param>
 /// <param name="top_p">An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. It is generally recommend to use this or <paramref name="temperature"/> but not both.</param>
 /// <param name="numOutputs">How many different choices to request for each prompt.</param>
 /// <param name="presencePenalty">The scale of the penalty applied if a token is already present at all. Should generally be between 0 and 1, although negative numbers are allowed to encourage token reuse.</param>
 /// <param name="frequencyPenalty">The scale of the penalty for how often a token is used. Should generally be between 0 and 1, although negative numbers are allowed to encourage token reuse.</param>
-/// <param name="logProbs">Include the log probabilities on the logprobs most likely tokens, which can be found in <see cref="CompletionResult.Choices"/> -> <see cref="Choice.Logprobs"/>. So for example, if logprobs is 10, the API will return a list of the 10 most likely tokens. If logprobs is supplied, the API will always return the logprob of the sampled token, so there may be up to logprobs+1 elements in the response.</param>
+/// <param name="logProbs">Include the log probabilities on the logprobs most likely tokens, which can be found in <see cref="CompletionResult.Completions"/> -> <see cref="Choice.Logprobs"/>. So for example, if logprobs is 10, the API will return a list of the 10 most likely tokens. If logprobs is supplied, the API will always return the logprob of the sampled token, so there may be up to logprobs+1 elements in the response.</param>
 /// <param name="echo">Echo back the prompt in addition to the completion.</param>
 /// <param name="stopSequences">One or more sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.</param>