Skip to content

Implement A2UI parser to extract text parts and A2UI JSON parts separately#780

Merged
nan-yu merged 4 commits intogoogle:mainfrom
nan-yu:a2ui-parser
Mar 6, 2026
Merged

Implement A2UI parser to extract text parts and A2UI JSON parts separately#780
nan-yu merged 4 commits intogoogle:mainfrom
nan-yu:a2ui-parser

Conversation

@nan-yu
Copy link
Collaborator

@nan-yu nan-yu commented Mar 5, 2026

Description

It hides the A2UI delimiter as implementation details by injecting the LLM response format (text part and JSON part separated by the delimiter) into the system prompt and extracting them with a parser.

Pre-launch Checklist

If you need help, consider asking for advice on the discussion board.

The previous commit defines a default workflow rule, which defines the
LLM response with both a text part and an A2UI JSON part, separated by
a delimiter. The default workflow rule is injected into the system
prompt, which changes the rizzcharts response format.

This commit updates the send_a2ui_to_client_toolset.py to handle the new
response correctly by leveraging the parser.

It also updates the payload fixer to not depend on the A2UICatalog and
moves it outside of the schema package. Now the payload fixer only
parses the raw JSON string and tries to fix it by removing the trailing
comma. The parser can use the payload fixes to extract the JSON object.
Copy link
Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Code Review

This pull request refactors the A2UI JSON parsing and validation logic by introducing a new a2ui.core.parser module. The previous A2uiPayloadFixer class has been removed, and its functionality, including parsing and fixing trailing commas, is now handled by new parse_response and parse_and_fix functions within the new parser module. The A2UI_DELIMITER constant and DEFAULT_WORKFLOW_RULES are introduced to standardize LLM response formatting, and the system prompt generation now incorporates these default rules.

Significant changes include updating the SendA2uiToClientToolset to use a new A2uiEventConverter and A2uiPartConverter for converting GenAI parts to A2A parts, which now supports both tool-based and text-based A2UI with catalog-aware validation. The agent_development.md documentation and various sample agent implementations (component_gallery, contact_lookup, contact_multiple_surfaces, restaurant_finder, rizzcharts) have been updated to reflect these new parsing and validation mechanisms, including using parse_response and catalog.validator.validate directly.

Review comments highlight critical issues in the sample agents: two instances of a bug in retry logic where a continue statement causes unnecessary retries instead of exiting with a valid empty JSON list, and multiple instances of missing A2UI payload validation before sending to the client, posing a security risk due to potential prompt injection. Additionally, the retry logic in stream methods was flagged for concatenating user queries directly into prompts, creating a prompt injection vulnerability by allowing attackers to override agent instructions.

)
is_valid = True

continue
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

high

The continue statement here introduces a bug in the retry logic. When parsed_json_data is an empty list, it's considered a valid response, and is_valid is set to True. However, continue forces the loop to the next iteration, causing an unnecessary retry instead of exiting and returning the valid response. Removing continue will allow the logic to flow to the if is_valid: check and complete the task as intended.

"Assuming valid (e.g., 'no results'). ---"
)
is_valid = True
continue
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

high

The continue statement here introduces a bug in the retry logic. When parsed_json_data is an empty list, it's considered a valid response, and is_valid is set to True. However, continue forces the loop to the next iteration, causing an unnecessary retry instead of exiting and returning the valid response. Removing continue will allow the logic to flow to the if is_valid: check and complete the task as intended.

if json_data:
if isinstance(json_data, list):
for msg in json_data:
final_parts.append(create_a2ui_part(msg))
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

security-medium medium

This agent executor sends A2UI JSON payloads to the client without validation, which is a critical security vulnerability. Malicious payloads could be rendered if the LLM is compromised via prompt injection. Always validate A2UI payloads against the intended catalog/schema using catalog.validator.validate() before sending them to the client. Additionally, the isinstance(json_data, list) check is redundant as parse_response ensures json_data is always a list, and the else block is unreachable. This code block can be simplified.

           if json_data:
            for msg in json_data:
              final_parts.append(create_a2ui_part(msg))

f"Found {len(json_data)} messages. Creating individual DataParts."
)
for message in json_data:
final_parts.append(create_a2ui_part(message))
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

security-medium medium

This agent executor sends A2UI JSON payloads from LLM responses to the client without validation against the A2UI catalog/schema. This is a security risk as validation can be bypassed if the LLM is manipulated via prompt injection. Always validate A2UI payloads against the intended catalog/schema in the executor before sending them to the client. Additionally, the isinstance(json_data, list) check and its else block are redundant because parse_response ensures json_data is always a list. The if len(json_data) == 0: check is also unreachable. The logic can be simplified.

Suggested change
final_parts.append(create_a2ui_part(message))
if json_data:
logger.info(
f"Found {len(json_data)} messages. Creating individual DataParts."
)
for message in json_data:
final_parts.append(create_a2ui_part(message))
else:
logger.info("Received empty list. Skipping DataPart.")

" MUST be a JSON list of A2UI messages. Ensure the response is split by"
" '---a2ui_JSON---' and the JSON part is well-formed. Please retry the"
f" '{A2UI_DELIMITER}' and the JSON part is well-formed. Please retry the"
f" original request: '{query}'"
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

security-medium medium

The retry logic in the stream method concatenates the original user query directly into a new prompt sent to the LLM. An attacker can craft a query that, when included in the retry prompt, overrides the agent's instructions (e.g., by including "Ignore previous instructions and..."). This is especially risky as the retry prompt uses authoritative language ("You MUST").

Remediation: Sanitize or quote the user query when including it in prompts. Ideally, use a structured format or separate the user input from the system instructions using clear delimiters that the LLM is trained to respect.

" valid response that strictly follows the A2UI JSON SCHEMA. The response"
" MUST be a JSON list of A2UI messages. Ensure the response is split by"
" '---a2ui_JSON---' and the JSON part is well-formed. Please retry the"
f" '{A2UI_DELIMITER}' and the JSON part is well-formed. Please retry the"
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

security-medium medium

The retry logic in the stream method concatenates the original user query directly into a new prompt sent to the LLM. An attacker can craft a query that, when included in the retry prompt, overrides the agent's instructions. This is a common pattern for prompt injection.

Remediation: Sanitize or quote the user query when including it in prompts.

@nan-yu nan-yu merged commit aa23230 into google:main Mar 6, 2026
7 checks passed
@github-project-automation github-project-automation bot moved this from Todo to Done in A2UI Mar 6, 2026
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

Status: Done

Development

Successfully merging this pull request may close these issues.

Extract a parser in the agent SDK to parse LLM response

2 participants