Introduce parse_response_to_parts helper #785
Conversation
Code Review
This pull request introduces a new parse_response_to_parts helper function that centralizes the conversion of LLM responses, which may contain A2UI JSON, into structured A2A Part objects. The change updates the agent development documentation to recommend this helper and refactors several sample agents and their executors to use it, removing duplicated parsing logic from the executors. However, the review comments identify three issues:
- The new helper's validation logic incorrectly skips validation for empty JSON objects.
- There is a risk of Personally Identifiable Information (PII) exposure if sensitive data is logged when parsing raises an exception.
- One sample agent has a security vulnerability: user-supplied input is spliced into a JSON string via unsafe string concatenation before parsing, allowing malicious A2UI payloads to be injected.
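To make the review concrete, here is a minimal sketch of what a helper like parse_response_to_parts might look like. This is an illustration, not the PR's actual code: the delimiter name, the dict-based stand-ins for A2A Part objects, and the validation placeholder are all assumptions.

```python
import json
import logging

logger = logging.getLogger(__name__)

# Hypothetical marker; the real samples define their own delimiter constant.
A2UI_DELIMITER = "---A2UI---"


def parse_response_to_parts(response: str) -> list[dict]:
    """Split an LLM response into text and A2UI JSON parts.

    Sketch only: the real helper returns A2A Part objects and validates
    the JSON against the A2UI schema; plain dicts stand in for Parts here.
    """
    parts: list[dict] = []
    text, _, json_blob = response.partition(A2UI_DELIMITER)
    if text.strip():
        parts.append({"kind": "text", "text": text.strip()})
    if json_blob.strip():
        try:
            json_data = json.loads(json_blob)
            # The real helper would validate json_data against the A2UI
            # schema here -- including, per the review, not skipping
            # validation when the object is empty.
            parts.append({"kind": "a2ui", "data": json_data})
        except Exception:
            logger.warning("Failed to parse or validate A2UI response")
    return parts
```

With a single helper like this, both agent.py and agent_executor.py can share one parse-and-validate pass instead of each re-implementing it.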
            parts.append(create_a2ui_part(json_data))
    except Exception as e:
        logger.warning(f"Failed to parse or validate A2UI response: {e}")
The parse_response_to_parts function logs the exception object e when parsing or validation fails. If the LLM response contains Personally Identifiable Information (PII) and fails validation, the jsonschema.ValidationError (or another exception) may embed the sensitive data in its string representation, which is then written to the logs. This can expose sensitive information in log files.
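One common mitigation, sketched below, is to log only the exception class name rather than the exception's message, since jsonschema and json errors often quote the offending input. The function name and structure here are illustrative, not the PR's actual code.

```python
import json
import logging

logger = logging.getLogger(__name__)


def safe_parse(json_blob: str):
    """Parse A2UI JSON without echoing the payload into logs."""
    try:
        return json.loads(json_blob)
    except Exception as e:
        # Log only the exception class, not str(e): the message may quote
        # the offending (possibly PII-bearing) response text verbatim.
        logger.warning(
            "Failed to parse or validate A2UI response (%s)",
            type(e).__name__,
        )
        return None
```

If the payload is needed for debugging, it should go to a debug-level log behind explicit redaction rather than into warnings by default.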
        f"Message sent to {contact_name}\n{A2UI_DELIMITER}\n{json_content}"
    )

    final_parts = parse_response_to_parts(final_response_content)
The final_response_content variable is constructed by concatenating user-supplied data (contact_name) into a JSON string via a string replacement (on line 221). This constructed string is then parsed by parse_response_to_parts on line 243. An attacker can provide a malicious contact_name (e.g., containing JSON delimiters and new objects) to inject arbitrary A2UI payloads into the response sent to the client. This could be used to perform various client-side attacks depending on the capabilities of the A2UI components.
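The usual fix for this class of bug is to build the JSON payload from a Python object with json.dumps, which escapes quotes, braces, and other metacharacters in the user-supplied value, instead of splicing raw strings into a JSON template. The sketch below assumes hypothetical field names and delimiter; only the technique (serialize, don't concatenate) is the point.

```python
import json

A2UI_DELIMITER = "---A2UI---"  # hypothetical marker, as in the samples


def build_confirmation(contact_name: str) -> str:
    """Build the response without splicing raw user input into JSON.

    json.dumps escapes quotes and braces inside contact_name, so a
    malicious value cannot break out of its string field and inject
    additional A2UI objects into the payload.
    """
    payload = {"component": "confirmation", "contact": contact_name}
    return (
        f"Message sent to {contact_name}\n"
        f"{A2UI_DELIMITER}\n"
        f"{json.dumps(payload)}"
    )
```

Even with escaping, a value containing the delimiter itself would still need separate handling, which is another argument for keeping all parsing inside one well-tested helper.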
Tested: the orchestrator sample and all sub-agents worked end to end.
Description
In the samples, agent.py parses the LLM response, extracts the text part and the A2UI JSON part, and validates the JSON. It then sends the original LLM response back to agent_executor.py, which parses it again.
This commit consolidates the logic so the response is parsed and validated only once, via a parse_response_to_parts helper.
Pre-launch Checklist
If you need help, consider asking for advice on the discussion board.