Conversation

@Nakul-Rajpal
Contributor

Wrapped the Python library `llm` (which supports OpenAI, Anthropic's Claude, Google's Gemini, etc.) as a garak generator.

Fixes Issue #463

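For reviewers unfamiliar with the library, here is a minimal sketch of how `llm` is typically driven from Python. The model id "gpt-4o-mini" is just an illustrative alias, and provider API keys are assumed to be configured beforehand via `llm keys set openai`.

import llm

# Resolve a model by its llm id or alias; any model provided by an llm plugin
# works the same way, and API keys are managed by llm itself.
model = llm.get_model("gpt-4o-mini")

# Single prompt call; generation options are passed as keyword arguments.
response = model.prompt("Say hello in one word.", temperature=0.7)

# .text() materializes the full response string.
print(response.text())
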
@Nakul-Rajpal
Contributor Author

@leondz Please check

@leondz
Collaborator

leondz commented Sep 26, 2025

thanks, will take a look!

@leondz
Collaborator

leondz commented Sep 26, 2025

@Nakul-Rajpal This isn't passing tests - can you amend?

@Nakul-Rajpal
Contributor Author

@leondz It should be good now. The tests were failing because I hadn't added the llm library to the requirements, so they ran without the module.

@Nakul-Rajpal
Contributor Author

Nakul-Rajpal commented Sep 29, 2025

@leondz Should be ready to go now. Really sorry about the errors; before I request another issue I'll familiarize myself with the repo further.

@leondz
Collaborator

leondz commented Sep 29, 2025

Yeah, it's in the review queue, thank you

Collaborator

@jmartin-tech jmartin-tech left a comment

This looks like a great start. I noted a few edge cases and a mismatch in how the system_prompt is handled.

Please take a look; happy to offer further detail or answer questions about how things flow.

Comment on lines 74 to 75
if self.system:
    prompt_kwargs["system"] = self.system
Collaborator

Current system prompt support in garak is tied to the conversation passed as the prompt. The DEFAULT_PARAMS entry here should likely be removed in favor of extracting the system prompt from the prompt via prompt.last_message("system"). As written, passing a conversation that includes a system message would not apply it.
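
A minimal sketch of the suggested direction, assuming the Conversation API referenced in this thread (last_message("system") and Message.text); this is not the PR's actual code:

# Hypothetical sketch inside _call_model(): derive the system prompt from the
# conversation instead of a separate DEFAULT_PARAMS["system"] entry. Assumes
# last_message("system") returns the most recent system Message, or raises /
# returns None when the conversation has no system turn.
try:
    system_msg = prompt.last_message("system")
except ValueError:
    system_msg = None
if system_msg is not None:
    prompt_kwargs["system"] = system_msg.text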

"max_tokens": None,
"top_p": None,
"stop": [],
"system": None,
Collaborator

Remove; the system prompt is set via the run configuration and passed to generators as part of the prompt conversation.

Suggested change
"system": None,

Comment on lines 76 to 83
if self.max_tokens is not None:
    prompt_kwargs["max_tokens"] = self.max_tokens
if self.temperature is not None:
    prompt_kwargs["temperature"] = self.temperature
if self.top_p is not None:
    prompt_kwargs["top_p"] = self.top_p
if self.stop:
    prompt_kwargs["stop"] = self.stop
Collaborator

None is falsy, and all keys defined in DEFAULT_PARAMS will exist on self.

Suggested change
if self.max_tokens is not None:
    prompt_kwargs["max_tokens"] = self.max_tokens
if self.temperature is not None:
    prompt_kwargs["temperature"] = self.temperature
if self.top_p is not None:
    prompt_kwargs["top_p"] = self.top_p
if self.stop:
    prompt_kwargs["stop"] = self.stop
if self.max_tokens:
    prompt_kwargs["max_tokens"] = self.max_tokens
if self.temperature:
    prompt_kwargs["temperature"] = self.temperature
if self.top_p:
    prompt_kwargs["top_p"] = self.top_p
if self.stop:
    prompt_kwargs["stop"] = self.stop


This calls model.prompt() once per generation and materializes the text().
"""
text_prompt = prompt.last_message().text
Collaborator

This does not extract the full conversation. There is an existing helper in the base class, Generator._conversation_to_list(), that formats the garak Conversation object as a list of dictionaries matching the HuggingFace and OpenAI conversation format. Looking at how the llm library handles what it considers a conversation, I don't know whether there is a way to load a prefilled history in a pattern similar to how the chat completion APIs of other generators work.

For best adoption, this generator should at least validate that the conversation has at most one user and one system message, so we know whether the prompt passed will be fully processed during inference.
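
A hedged sketch of that check, assuming _conversation_to_list() yields OpenAI-style {"role", "content"} dictionaries as described above and that logging is imported; skipping and returning None rather than raising keeps the run going when a prompt cannot be forwarded:

# Hypothetical validation sketch: verify the conversation shape before calling
# llm, since a single prompt() call only carries one user message and an
# optional system message.
messages = self._conversation_to_list(prompt)
roles = [m["role"] for m in messages]
if roles.count("user") != 1 or roles.count("system") > 1 or "assistant" in roles:
    logging.debug("llm generator cannot forward conversation with roles %s", roles)
    return [None] * generations_this_call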

Comment on lines 39 to 40
"temperature": None,
"max_tokens": None,
Collaborator

temperature and max_tokens are already in Generator.DEFAULT_PARAMS; is there a reason to include them here?

Suggested change
"temperature": None,
"max_tokens": None,

Collaborator

@leondz leondz left a comment

Looks pretty good. Requests: add a pattern supporting parallelisation, some renaming, and vars to ensure test consistency.

"system": None,
}

generator_family_name = "LLM"
Collaborator

this might be better as lower case - that's how the tool is described


try:
    # Resolve the llm model; fall back to llm's default if no name given
    self.model = llm.get_model(self.name) if self.name else llm.get_model()
Collaborator

Can we rename self.model to self.target to be consistent with overall garak nomenclature? llm supports both systems and models so the change fits this use-case fine too

Collaborator

It's worth implementing the _load_client() / _clear_client() pattern here to support parallelisation - see openai.OpenAICompatible for an example
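
A rough sketch of how that pattern could look here; the hook names follow the OpenAICompatible convention, but the bodies are assumptions rather than the PR's code (self.target reflects the rename suggested above):

def _load_client(self):
    # Rebuild the llm model handle in the current process; called lazily so
    # each parallel worker can reconstruct its own client after forking.
    import llm
    self.target = llm.get_model(self.name) if self.name else llm.get_model()

def _clear_client(self):
    # Drop the client handle so the generator can be serialized for parallel use.
    self.target = None

def _call_model(self, prompt, generations_this_call=1):
    if self.target is None:
        self._load_client()
    # ... existing prompt handling continues here ...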

Comment on lines +1 to +2
# SPDX-FileCopyrightText: Portions Copyright (c) 2025 NVIDIA CORPORATION &
# AFFILIATES. All rights reserved.
Collaborator

Suggested change
# SPDX-FileCopyrightText: Portions Copyright (c) 2025 NVIDIA CORPORATION &
# AFFILIATES. All rights reserved.
# SPDX-FileCopyrightText: Portions Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.

Collaborator

we should land #1199 before landing this, and then move this PR to the deferred loading pattern

def test_generate_returns_message(cfg, fake_llm):
    gen = LLMGenerator(name="alias", config_root=cfg)

    conv = Conversation([Turn("user", Message(text="ping"))])
Collaborator

Suggested change
conv = Conversation([Turn("user", Message(text="ping"))])
test_txt = "ping"
conv = Conversation([Turn("user", Message(text=test_txt))])

assert out[0].text == "OK_FAKE"

prompt_text, kwargs = fake_llm.calls[0]
assert prompt_text == "ping"
Collaborator

Suggested change
assert prompt_text == "ping"
assert prompt_text == test_txt

Comment on lines 82 to 86
gen.temperature = 0.2
gen.max_tokens = 64
gen.top_p = 0.9
gen.stop = ["\n\n"]
gen.system = "you are testy"
Collaborator

use vars for these values (and the checks later)

Comment on lines 92 to 96
assert kwargs["temperature"] == 0.2
assert kwargs["max_tokens"] == 64
assert kwargs["top_p"] == 0.9
assert kwargs["stop"] == ["\n\n"]
assert kwargs["system"] == "you are testy"
Collaborator

vars here. could do a list assignment / check like x, y = 1, 2 for brevity
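
For instance, something along these lines inside the existing test (gen, conv, and fake_llm are the fixtures already used in this file; the exact generate call is assumed, and names are illustrative):

# Hypothetical sketch: define the expected values once, tuple-assign them onto
# the generator, and reuse the same names in the assertions.
temperature, max_tokens, top_p = 0.2, 64, 0.9
stop, system = ["\n\n"], "you are testy"
gen.temperature, gen.max_tokens, gen.top_p, gen.stop, gen.system = (
    temperature, max_tokens, top_p, stop, system,
)

gen.generate(conv)
_, kwargs = fake_llm.calls[0]
assert (kwargs["temperature"], kwargs["max_tokens"], kwargs["top_p"]) == (
    temperature, max_tokens, top_p,
)
assert (kwargs["stop"], kwargs["system"]) == (stop, system)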

class BoomModel:
    def prompt(self, *a, **k):
        raise RuntimeError("boom")

monkeypatch.setattr(llm, "get_model", lambda *a, **k: BoomModel())
Collaborator

nice

@leondz leondz added the generators Interfaces with LLMs label Oct 12, 2025
Collaborator

@jmartin-tech jmartin-tech left a comment

Some tweaks to ensure consistent behaviour.

Code suggestions are untested.

Comment on lines +52 to +56
self._load_client()

super().__init__(self.name, config_root=config_root)

self._clear_client()
Collaborator

It may be best to call super().__init__() before calling _load_client() to ensure the end object state is not impacted. Also no need to call _clear_client() during __init__().

Suggested change
self._load_client()
super().__init__(self.name, config_root=config_root)
self._clear_client()
super().__init__(self.name, config_root=config_root)
self._load_client()

Calls model.prompt() with the prompt text and relays the response. Per-provider
options and API keys are all handled by `llm` (e.g., `llm keys set openai`).

Set --model_name to the `llm` model id or alias (e.g., "gpt-4o-mini",
Collaborator

Suggested change
Set --model_name to the `llm` model id or alias (e.g., "gpt-4o-mini",
Set --target_name to the `llm` model id or alias (e.g., "gpt-4o-mini",

Comment on lines +93 to +98
if assistant_turns:
    raise ValueError("llm generator does not accept assistant turns")
if len(system_turns) > 1:
    raise ValueError("llm generator supports at most one system turn")
if len(user_turns) != 1:
    raise ValueError("llm generator requires exactly one user turn")
Collaborator

Unsupported prompts in _call_model() are currently expected to return None to avoid early termination of the test run. While I understand the thought process of using raise here, logging the reason for skipping the prompt would align better. This will also need an `import logging` at the top of the module.

Suggested change
if assistant_turns:
    raise ValueError("llm generator does not accept assistant turns")
if len(system_turns) > 1:
    raise ValueError("llm generator supports at most one system turn")
if len(user_turns) != 1:
    raise ValueError("llm generator requires exactly one user turn")
if assistant_turns:
    logging.debug("llm generator does not accept assistant turns")
    return [None] * generations_this_call
if len(system_turns) > 1:
    logging.debug("llm generator supports at most one system turn")
    return [None] * generations_this_call
if len(user_turns) != 1:
    logging.debug("llm generator requires exactly one user turn")
    return [None] * generations_this_call

Comment on lines +103 to +109
prompt_kwargs = {
    key: getattr(self, key)
    for key in ("max_tokens", "temperature", "top_p")
    if getattr(self, key) is not None
}
if self.stop:
    prompt_kwargs["stop"] = self.stop
Collaborator

Could this inspect the accepted arguments to self.target.prompt() vs a hard coded list here? Something similar exists in the OpenAICompatible class, where we collect all options set on the generator that the target API accepts.
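
One hedged way to do that with the standard library is sketched below; note that if llm routes options through **kwargs or a separate Options class, a plain signature check would not see them, so this shows the idea rather than mirroring OpenAICompatible exactly:

import inspect

# Hypothetical sketch: forward only the generator options that the target's
# prompt() signature explicitly declares, instead of a hard-coded tuple.
accepted = inspect.signature(self.target.prompt).parameters
prompt_kwargs = {
    key: getattr(self, key)
    for key in ("max_tokens", "temperature", "top_p", "stop")
    if key in accepted and getattr(self, key) not in (None, [], "")
}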
