This is the Python SDK for IBM Foundation Models Studio. It brings IBM Generative AI into Python programs and extends it with useful operations and types.
This is an early-access library and requires an invitation to use the technical preview of watsonx.ai. You can join the waitlist at https://www.ibm.com/products/watsonx-ai.
- Table of Contents
- Installation
- Gen AI Endpoint
- Examples
- Tips and Troubleshooting
- Extensions
- Support
- Contribution Guide
- Authors
pip install ibm-generative-ai
- [SSL Issue] If you run into "SSL_CERTIFICATE_VERIFY_FAILED", follow the code snippet provided in the support page.
By default, IBM Generative AI uses the following API endpoint: https://workbench-api.res.ibm.com/v1/. However, if you wish to target a different Gen AI API, you can do so by passing it as the api_endpoint argument when you instantiate the Credentials object.
Your .env file:
GENAI_KEY=YOUR_GENAI_API_KEY
GENAI_API=https://workbench-api.res.ibm.com/v1/
import os
from dotenv import load_dotenv
from genai.model import Credentials
# make sure you have a .env file under genai root with
# GENAI_KEY=<your-genai-key>
# GENAI_API=<your-genai-api-endpoint>
load_dotenv()
my_api_key = os.getenv("GENAI_KEY", None)
my_api_endpoint = os.getenv("GENAI_API", None)
# creds object
creds = Credentials(api_key=my_api_key, api_endpoint=my_api_endpoint)
# Now start using GenAI!
There are a number of examples you can try in the examples/user directory.
Log in to workbench.res.ibm.com and get your GenAI API key. Then, create a .env file and assign the GENAI_KEY value as:
GENAI_KEY=YOUR_GENAI_API_KEY
import os
from dotenv import load_dotenv
from genai.model import Credentials, Model
from genai.schemas import GenerateParams, ModelType
# make sure you have a .env file under genai root with
# GENAI_KEY=<your-genai-key>
load_dotenv()
api_key = os.getenv("GENAI_KEY", None)
print("\n------------- Example (Greetings) -------------\n")
# Instantiate the GENAI Proxy Object
params = GenerateParams(
    decoding_method="sample",
    max_new_tokens=10,
    min_new_tokens=1,
    stream=False,
    temperature=0.7,
    top_k=50,
    top_p=1,
)
# creds object
creds = Credentials(api_key)
# model object
model = Model(ModelType.FLAN_UL2, params=params, credentials=creds)
greeting = "Hello! How are you?"
lots_of_greetings = [greeting] * 1000
num_of_greetings = len(lots_of_greetings)
num_said_greetings = 0
# yields batch of results that are produced asynchronously and in parallel
for result in model.generate_async(lots_of_greetings):
    if result is not None:
        num_said_greetings += 1
        print(f"[Progress {num_said_greetings / num_of_greetings * 100:.1f}%]")
        print(f"\t {result.input_text} --> {result.generated_text}")
If you are planning to send a large number of prompts and are using logging, you might want to redirect genai logs to a file instead of stdout. Check the Tips and Troubleshooting section for further help.
import os
from dotenv import load_dotenv
from genai.model import Credentials, Model
from genai.schemas import GenerateParams, ModelType
# make sure you have a .env file under genai root with
# GENAI_KEY=<your-genai-key>
load_dotenv()
api_key = os.getenv("GENAI_KEY", None)
print("\n------------- Example (Greetings) -------------\n")
# Instantiate the GENAI Proxy Object
params = GenerateParams(
    decoding_method="sample",
    max_new_tokens=10,
    min_new_tokens=1,
    stream=False,
    temperature=0.7,
    top_k=50,
    top_p=1,
)
# creds object
creds = Credentials(api_key)
# model object
model = Model(ModelType.FLAN_UL2, params=params, credentials=creds)
greeting1 = "Hello! How are you?"
greeting2 = "I am fine and you?"
# Call generate function
responses = model.generate_as_completed([greeting1, greeting2] * 4)
for response in responses:
    print(f"Generated text: {response.generated_text}")
If you're building an application or example and would like to see the GENAI logs, you can enable them in the following way:
import logging
import os
# Most GENAI logs are at Debug level.
logging.basicConfig(level=os.environ.get("LOGLEVEL", "DEBUG"))
If you only want genai logs, or want them at a specific level, you can configure this with the following syntax:
logging.getLogger("genai").setLevel(logging.DEBUG)
Example log message from GENAI:
DEBUG:genai.model:Model Created: Model: google/flan-t5-xxl, endpoint: https://workbench-api.res.ibm.com/v1/
Example of directing genai logs to a file:
# create file handler which logs even debug messages
fh = logging.FileHandler('genai.log')
fh.setLevel(logging.DEBUG)
logging.getLogger("genai").addHandler(fh)
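If you also want each line in the file to carry a timestamp and level, you can attach a formatter to the handler. A minimal sketch using only the standard library; the format string is just one reasonable choice:
import logging
fh = logging.FileHandler('genai.log')
fh.setLevel(logging.DEBUG)
# timestamp, logger name, level, then the message itself
fh.setFormatter(logging.Formatter("%(asctime)s %(name)s %(levelname)s: %(message)s"))
logging.getLogger("genai").addHandler(fh)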
To learn more about logging in Python, see the official Python logging tutorial.
Since generating responses for a large number of prompts can be time-consuming and there could be unforeseen circumstances such as internet connectivity issues, here are some strategies to work with:
- Start with a small number of prompts to prototype the code. You can enable logging as described above for debugging during prototyping.
- Include exception handling in sensitive sections such as callbacks.
- Checkpoint/save prompts and received responses periodically.
- Check the examples in the examples/user directory and modify them for your needs.
def my_callback(result):
    try:
        ...
    except Exception:
        ...

outputs = []
count = 0
for result in model.generate_async(prompts, callback=my_callback):
    if result is not None:
        print(result.input_text, " --> ", result.generated_text)
        # check if prompts[count] and result.input_text are the same
        outputs.append((result.input_text, result.generated_text))
        # periodically save outputs to disk or some location
        ...
    else:
        # save failed prompts for retrying
        ...
    count += 1
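As a concrete illustration of the checkpointing strategy above, the sketch below appends each successful (prompt, response) pair to a JSONL file as results arrive. The file name and the per-result flush are arbitrary choices, not part of the SDK:
import json

# append mode so an interrupted run can be resumed without losing earlier results
with open("responses.jsonl", "a") as f:
    for result in model.generate_async(prompts, callback=my_callback):
        if result is not None:
            # one JSON object per line keeps a partially written file readable
            f.write(json.dumps({"prompt": result.input_text, "response": result.generated_text}) + "\n")
            f.flush()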
GenAI currently supports a LangChain extension, and more extensions are in the pipeline. Please reach out to us if you want support for a particular framework as an extension or want to design an extension yourself.
Install the langchain extension as follows:
pip install "ibm-generative-ai[langchain]"
Currently the LangChain extension allows IBM Generative AI models to be wrapped as LangChain LLMs and supports translation between genai PromptPatterns and LangChain PromptTemplates. Below are sample snippets:
import os
from dotenv import load_dotenv
import genai.extensions.langchain  # noqa: F401 (importing registers the extension)
from genai.extensions.langchain import LangChainInterface
from genai.schemas import GenerateParams, ModelType
from genai import Credentials, Model, PromptPattern
load_dotenv()
api_key = os.getenv("GENAI_KEY", None)
creds = Credentials(api_key)
params = GenerateParams(decoding_method="greedy")
# As LangChain Model
langchain_model = LangChainInterface(model=ModelType.FLAN_UL2, params=params, credentials=creds)
print(langchain_model("Answer this question: What is life?"))
# As GenAI Model
genai_model = Model(model=ModelType.FLAN_UL2, params=params, credentials=creds)
print(genai_model.generate(["Answer this question: What is life?"])[0].generated_text)
# GenAI prompt pattern to langchain PromptTemplate and vice versa
seed_pattern = PromptPattern.from_str("Answer this question: {{question}}")
template = seed_pattern.langchain.as_template()
pattern = PromptPattern.langchain.from_template(template)
print(langchain_model(template.format(question="What is life?")))
print(genai_model.generate([pattern.sub("question", "What is life?")])[0].generated_text)
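Since LangChainInterface behaves like any other LangChain LLM, it can also be plugged into LangChain utilities such as LLMChain. A minimal sketch; note that LLMChain and PromptTemplate come from the langchain package itself, not from this SDK:
from langchain import LLMChain, PromptTemplate

# wire the wrapped model into a simple single-prompt chain
prompt = PromptTemplate(input_variables=["question"], template="Answer this question: {question}")
chain = LLMChain(llm=langchain_model, prompt=prompt)
print(chain.run(question="What is life?"))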
Model types can be imported from the ModelType class. If you want to use a model that is not included in this class, you can pass its model id as a string instead, as exemplified below.
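For example, using the model id shown in the log message above (any id accepted by the service should work the same way):
# pass the model id directly instead of a ModelType member
model = Model("google/flan-t5-xxl", params=params, credentials=creds)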
Need help? Check out how to get support.
Please read our contributing guide for details on our code of conduct and details on submitting pull requests.
- Onkar Bhardwaj, [email protected]
- Veronique Demers, [email protected]
- James Sutton, [email protected]
- Mirian Silva, [email protected]
- Mairead O'Neill, [email protected]
- Ja Young Lee, [email protected]
- Ana Fucs, [email protected]
- Lee Martie, [email protected]