When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `do_gradientai.APIConnectionError` is raised.
When the API returns a non-success status code (that is, 4xx or 5xx response), a subclass of `do_gradientai.APIStatusError` is raised, containing `status_code` and `response` properties.
All errors inherit from `do_gradientai.APIError`.
```python
import do_gradientai
from do_gradientai import GradientAI

client = GradientAI()

try:
    client.chat.completions.create(
        messages=[
            ...
        ],
        model="llama3.3-70b-instruct",
    )
except do_gradientai.APIConnectionError as e:
    print("The server could not be reached")
    print(e.__cause__)  # an underlying Exception, likely raised within httpx.
except do_gradientai.RateLimitError as e:
    print("A 429 status code was received; we should back off a bit.")
except do_gradientai.APIStatusError as e:
    print("Another non-200-range status code was received")
    print(e.status_code)
    print(e.response)
```
Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict, 429 Rate Limit, and >=500 Internal errors are all retried by default.

You can use the `max_retries` option to configure or disable retry settings:

```python
from do_gradientai import GradientAI

# Configure the default for all requests:
client = GradientAI(
    max_retries=0,
)
```
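The short exponential backoff used between retries can be sketched in plain Python. This is an illustration only, not the library's actual retry code; the function name and the base/cap constants are made up for the example:

```python
import random

def backoff_delays(max_retries: int, base: float = 0.5, cap: float = 8.0):
    """Yield one jittered sleep time per retry attempt, growing exponentially."""
    for attempt in range(max_retries):
        delay = min(cap, base * (2 ** attempt))  # 0.5s, 1s, 2s, ... capped at 8s
        yield random.uniform(0, delay)           # "full jitter" spreads retries out

for wait in backoff_delays(3):
    print(f"sleeping up to {wait:.2f}s before retrying")
```

Jittering each delay means that many clients which all hit a rate limit at the same moment will not retry in lockstep.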
By default requests time out after 1 minute. You can configure this with a `timeout` option, which accepts a float or an [`httpx.Timeout`](https://www.python-httpx.org/advanced/timeouts/#fine-tuning-the-configuration) object:

```python
from do_gradientai import GradientAI

# Configure the default for all requests:
client = GradientAI(
    timeout=20.0,  # 20 seconds (the default is 1 minute)
)
```
The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g.,
```python
completion = response.parse()  # get the object that `chat.completions.create()` would have returned
print(completion.choices)
```

These methods return an [`APIResponse`](https://github.com/digitalocean/gradientai-python/tree/main/src/do_gradientai/_response.py) object.

The async client returns an [`AsyncAPIResponse`](https://github.com/digitalocean/gradientai-python/tree/main/src/do_gradientai/_response.py) with the same structure, the only difference being `await`able methods for reading the response content.

#### `.with_streaming_response`
By default the library closes underlying HTTP connections whenever the client is [garbage collected](https://docs.python.org/3/reference/datamodel.html#object.__del__). You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting.

```py
from do_gradientai import GradientAI

with GradientAI() as client:
    # make requests here
    ...
```
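The context-manager form works because exiting the `with` block triggers `__exit__`, which closes the client. A minimal toy illustration of that pattern (a stand-in class, not the library's code):

```python
class ToyClient:
    """Stands in for GradientAI to show the close-on-exit pattern."""

    def __init__(self) -> None:
        self.closed = False

    def close(self) -> None:
        self.closed = True

    def __enter__(self) -> "ToyClient":
        return self

    def __exit__(self, exc_type, exc, tb) -> bool:
        self.close()
        return False  # propagate any exception raised inside the block

with ToyClient() as client:
    print(client.closed)  # False while inside the block

print(client.closed)  # True: exiting the block closed the client
```

Returning `False` from `__exit__` matters: it lets exceptions raised inside the block propagate after cleanup instead of being silently swallowed.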
If you've upgraded to the latest version but aren't seeing any new features you were expecting then your python environment is likely still using an older version.

You can determine the version that is being used at runtime with:
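The canonical snippet is cut off in this excerpt. As a general-purpose alternative, the installed version of any distribution can be read with the standard library; the distribution name below is an assumption for illustration:

```python
from importlib.metadata import PackageNotFoundError, version

# "gradientai" is an assumed distribution name for illustration;
# use the name you actually pip-installed.
try:
    print(version("gradientai"))
except PackageNotFoundError:
    print("package is not installed")
```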