Commit 996339c

Julien Salinas committed
Add docs for text generation.
1 parent 8cfdeaf

README.md

Lines changed: 25 additions & 3 deletions
### Classification Endpoint

Call the `classification()` method and pass the following arguments:

1. The text you want to classify, as a string
1. The candidate labels for your text, as a list of strings
1. (Optional) `multi_class`: Whether the classification should be multi-class or not, as a boolean. Defaults to true.

```python
client.classification("<Your block of text>", ["label 1", "label 2", "..."])
```

The above command returns a JSON object.
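
For instance, a fuller call that overrides the optional argument could look like the sketch below. The `client` setup, the keyword form of `multi_class`, and the example input are illustrative assumptions, not part of this commit.

```python
# A minimal sketch, assuming a `client` object like the one in the snippet
# above, and assuming `multi_class` is accepted as a keyword argument.
labels = ["job", "nature", "space"]

# Request single-class classification explicitly (multi_class defaults to true).
result = client.classification(
    "John Doe is a Go Developer at Google.",
    labels,
    multi_class=False,
)

print(result)  # a JSON object, e.g. labels with their scores (shape assumed)
```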
### Text Generation Endpoint

Call the `generation()` method and pass the following arguments:

1. The block of text that starts the generated text, as a string. 1200 tokens maximum.
1. (Optional) `min_length`: The minimum number of tokens that the generated text should contain, as an integer. The size of the generated text should not exceed 256 tokens on a CPU plan and 1024 tokens on a GPU plan. If `length_no_input` is false, the size of the generated text is the difference between `min_length` and the length of your input text. If `length_no_input` is true, the size of the generated text is simply `min_length`. Defaults to 10.
1. (Optional) `max_length`: The maximum number of tokens that the generated text should contain, as an integer. The size of the generated text should not exceed 256 tokens on a CPU plan and 1024 tokens on a GPU plan. If `length_no_input` is false, the size of the generated text is the difference between `max_length` and the length of your input text. If `length_no_input` is true, the size of the generated text is simply `max_length`. Defaults to 50.
1. (Optional) `length_no_input`: Whether `min_length` and `max_length` should exclude the length of the input text, as a boolean. If false, `min_length` and `max_length` include the length of the input text. If true, `min_length` and `max_length` don't include the length of the input text. Defaults to false.
1. (Optional) `end_sequence`: A specific token that should end the generated sequence, as a string. For example, it could be `.` or `\n` or `###` or anything else under 10 characters.
1. (Optional) `remove_input`: Whether you want to remove the input text from the result, as a boolean. Defaults to false.
1. (Optional) `top_k`: The number of highest-probability vocabulary tokens to keep for top-k filtering, as an integer. Maximum 1000 tokens. Defaults to 0.
1. (Optional) `top_p`: If set to a float < 1, only the most probable tokens whose probabilities add up to `top_p` or higher are kept for generation. Should be between 0 and 1. Defaults to 0.7.
1. (Optional) `temperature`: The value used to modulate the next-token probabilities, as a float. Should be between 0 and 1. Defaults to 1.
1. (Optional) `repetition_penalty`: The parameter for repetition penalty, as a float. 1.0 means no penalty. Defaults to 1.0.
1. (Optional) `length_penalty`: Exponential penalty applied to the length, as a float. 1.0 means no penalty. Set to a value < 1.0 to encourage the model to generate shorter sequences, or to a value > 1.0 to encourage longer sequences. Defaults to 1.0.

```python
client.generation("<Your input text>")
```

The above command returns a JSON object.
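
A sketch of a call that exercises several of the optional arguments follows. The parameter names come from the list above; the `client` setup and the concrete values are illustrative assumptions, not a definitive usage of this endpoint.

```python
# A minimal sketch, assuming a `client` object like the one in the snippet
# above. Parameter names follow the documented list; values are illustrative.
result = client.generation(
    "GPT-J is a powerful NLP model that",
    min_length=20,           # at least 20 generated tokens...
    max_length=100,          # ...and at most 100
    length_no_input=True,    # min/max apply to the generated text only
    end_sequence="\n",       # stop the sequence at the first newline
    remove_input=True,       # return only the newly generated text
    top_k=50,                # keep the 50 most likely tokens at each step
    top_p=0.9,               # nucleus sampling threshold
    temperature=0.8,         # < 1 sharpens the next-token distribution
    repetition_penalty=1.1,  # mildly discourage repeated tokens
    length_penalty=1.0,      # no length penalty
)

print(result)  # a JSON object, per the docs above
```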
