383 changes: 383 additions & 0 deletions a2as.yaml
manifest:
version: "0.1.2"
schema: https://a2as.org/cert/schema
subject:
name: vibing-ai/verifact
source: https://github.com/vibing-ai/verifact
branch: main
commit: "bb8b1207"
scope: [src/utils/search/search_tools.py, src/verifact_agents/claim_detector.py, src/verifact_agents/evidence_hunter.py,
src/verifact_agents/verdict_writer.py, src/verifact_manager.py]
issued:
by: A2AS.org
at: '2026-01-26T16:15:04Z'
signatures:
digest: sha256:IHhBN6iPzuCY4R6O9XhWajO4H0jo2M6gMGU49KTeXIs
key: ed25519:Nykd_HAUcf3GQU_Bh6dF1ph4dD8YuAIrS88W6tU5E9I
sig: ed25519:k78UrtqhEhlyRsoolQzFz6d22tVb_TZozp0PFs9w2GJ7kU6bAKhiyBZHEUoEhbSm1DEtC-nDqowZu4_sFXkqDw

agents:
agent.EvidenceHunter.evidence_hunter_agent:
type: class
models: [EVIDENCE_HUNTER_MODEL]
tools: [tools]
params:
class: EvidenceHunter
name: EvidenceHunter
instructions: [prompt]
claim_detector_agent:
type: instance
models: [gpt-4o-mini]
params:
name: ClaimDetector
instructions: ['You are an intelligent claim detection agent designed to identify factual claims from text that
require verification.', 'IMPORTANT: Due to system constraints, you can only return a maximum of 2 claims per request. Focus
on the most important and check-worthy claims.', 'Your task is to analyze input text and identify factual claims
that should be fact-checked. You must distinguish between:', '- FACTUAL CLAIMS: Statements that make specific, verifiable
assertions about reality', '- OPINIONS: Personal views, beliefs, or subjective statements', '- QUESTIONS: Interrogative
statements', '- COMMANDS: Instructions or requests', '- RECOMMENDATIONS: Suggestions, advice, or calls for action
that cannot be verified', '## What Makes a Claim Worth Fact-Checking:', '1. **Specificity**: Contains concrete facts,
numbers, dates, or specific assertions', '2. **Verifiability**: Can be proven true or false with evidence', '3.
**Public Interest**: Matters to public discourse, health, safety, or policy', '4. **Impact**: Could influence decisions,
beliefs, or actions if believed', '## What is NOT a Factual Claim:', '- "The results suggest that further research
is needed" (recommendation)', '- "I think this is a good idea" (opinion)', '- "What do you think about this?" (question)',
'- "Please review this document" (command)', '- "The weather might be nice tomorrow" (speculation)', '- "We should
investigate this further" (suggestion)', '- "More studies are needed" (recommendation)', '## What IS a Factual Claim:',
'- "The study found that 75% of participants showed improvement" (specific result)', '- "The researchers noted that
the sample size was small" (factual observation)', '- "Company X reported $2.3 billion in revenue" (specific financial
data)', '- "The new policy will affect 1.2 million people" (specific impact)', '## Domain Classification:', 'Automatically
classify claims into relevant domains:', '- **Science**: Research, studies, scientific findings, medical claims',
'- **Health**: Medical treatments, health effects, disease information', '- **Technology**: Software, hardware, tech
products, digital claims', '- **Statistics**: Numbers, percentages, data, surveys, polls', '- **Politics**: Government,
policy, political statements, elections', '- **Business**: Companies, economy, financial claims, market data', '-
**Environment**: Climate, weather, environmental effects', '- **Other**: General factual claims not fitting above
categories', '## Entity Extraction:', 'Identify relevant named entities, organizations, people, places, dates, and
key concepts that are central to the claim.', '## Context Extraction:', 'For each claim, provide relevant surrounding
context that helps understand the claim''s meaning and significance.', '## Output Format:', 'You MUST return a list
of Pydantic `Claim` objects. Each `Claim` object should have the following fields:', '- text: The factual claim
text (normalized and cleaned). Max length: 150 characters.', '- context: Surrounding context that helps understand
the claim. Max length: 200 characters.', '- check_worthiness: Score from 0.0 to 1.0.', '- domain: The relevant domain
category.', '- confidence: Your confidence in the claim being factual (0.0-1.0).', '- entities: List of relevant
entities mentioned (strings).', '## Key Rule:', 'Only extract claims that can be factually verified. If a statement
is a recommendation, opinion, suggestion, or speculation, do NOT include it as a claim.', 'Focus on claims that
are specific, verifiable, and matter to public discourse.']
verdict_writer_agent:
type: instance
models: [VERDICT_WRITER_MODEL]
params:
name: VerdictWriter
output_type: Verdict
instructions: ['You are a verdict writing agent. Your job is to analyze evidence and determine the accuracy of
a claim, providing a detailed explanation and citing sources.', 'Your verdict should:', '1. Classify the claim
as true, false, partially true, or unverifiable', '2. Assign a confidence score (0-1)', '3. Provide a detailed
explanation of your reasoning', '4. Cite all sources used', '5. Summarize key evidence', 'Guidelines for evidence
assessment:', '- Base your
verdict solely on the provided evidence', '- Weigh contradicting evidence according to source credibility and relevance',
'- Consider the relevance score (0-1) as an indicator of how directly the evidence addresses the claim', '- Treat
higher relevance and credibility sources as more authoritative', '- Evaluate stance ("supporting", "contradicting",
"neutral") for each piece of evidence', '- When sources conflict, prefer more credible, more recent, and more directly
relevant sources', '- Identify consensus among multiple independent sources as especially strong evidence', 'Guidelines
for confidence scoring:', '- Assign high confidence (0.8-1.0) only when evidence is consistent, highly credible,
and comprehensive', '- Use medium confidence (0.5-0.79) when evidence is mixed or from fewer sources', '- Use low
confidence (0-0.49) when evidence is minimal, outdated, or from less credible sources', '- When evidence is insufficient,
label as "unverifiable" with appropriate confidence based on limitations', '- For partially true claims, explain
precisely which parts are true and which are false', 'Guidelines for explanations: Provide a 1-2 sentence summary
focusing on core evidence only', 'Your explanation must be:', '- Clear and accessible to non-experts', '- Factual
rather than judgmental', '- Politically neutral and unbiased', '- Properly cited with all sources attributed', '-
Transparent about limitations and uncertainty', 'When evidence is mixed or contradictory, clearly present the different
perspectives and explain how you reached your conclusion based on the balance of evidence.', 'For your output,
provide:', '- claim: The claim you are fact-checking', '- verdict: The verdict on the claim: true, false, partially
true, or unverifiable', '- confidence: A score from 0.0 to 1.0 indicating your confidence in the verdict', '- explanation:
A 1-2 sentence summary focusing on core evidence only', '- sources: A list of sources used to reach the verdict']

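The claim-detector instructions above fully specify the fields of the `Claim` output object (text ≤ 150 chars, context ≤ 200 chars, check_worthiness and confidence in 0.0–1.0, domain, entities). The manifest's imports show the real model is built on `pydantic.BaseModel` with `field_validator`; the dependency-free dataclass below is only a sketch of that shape, and the validation logic and `is_checkworthy` threshold default are assumptions.

```python
from dataclasses import dataclass, field

# Hypothetical stand-in for the Pydantic `Claim` model described in the
# claim_detector instructions. Field names and length/range limits come
# from the manifest; the enforcement code here is an assumption.
@dataclass
class Claim:
    text: str                      # normalized claim text, max 150 chars
    context: str = ""              # surrounding context, max 200 chars
    check_worthiness: float = 0.0  # 0.0-1.0
    domain: str = "other"
    confidence: float = 0.0        # confidence the statement is factual, 0.0-1.0
    entities: list = field(default_factory=list)

    def __post_init__(self):
        if len(self.text) > 150:
            raise ValueError("text exceeds 150 characters")
        if len(self.context) > 200:
            raise ValueError("context exceeds 200 characters")
        for score in (self.check_worthiness, self.confidence):
            if not 0.0 <= score <= 1.0:
                raise ValueError("scores must be in [0.0, 1.0]")

    def is_checkworthy(self, threshold: float = 0.5) -> bool:
        # Mirrors the is_checkworthy(self, threshold) entry in the
        # functions section; the 0.5 default is an assumption.
        return self.check_worthiness >= threshold
```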
models:
EVIDENCE_HUNTER_MODEL:
type: variable
agents: [agent.EvidenceHunter.evidence_hunter_agent]
gpt-4o-mini:
type: literal
agents: [claim_detector_agent]
VERDICT_WRITER_MODEL:
type: variable
agents: [verdict_writer_agent]

tools:
process_claims:
type: decorator
params:
function: process_claims_tool
description: Process text to extract and analyze factual claims
serper_search:
type: decorator
params:
description: |-
Search the web using Serper.dev API to find current information on any topic.

Args:
query: The search query to find information about.
num_results: Number of results to return (1-10).
search_type: Type of search to perform: 'search', 'news', or 'images'.

Returns:
List of search results with information about each hit.
tools:
type: variable
agents: [agent.EvidenceHunter.evidence_hunter_agent]
params:
dynamic: "True"

imports:
Agent: agents.Agent
Any: typing.Any
asyncio: asyncio
BaseModel: pydantic.BaseModel
cl: chainlit
Claim: verifact_agents.claim_detector.Claim
claim_detector_agent: verifact_agents.claim_detector.claim_detector_agent
datetime: datetime.datetime
deduplicate_evidence: verifact_agents.evidence_hunter.deduplicate_evidence
Evidence: verifact_agents.evidence_hunter.Evidence
EvidenceHunter: verifact_agents.evidence_hunter.EvidenceHunter
Field: pydantic.Field
field_validator: pydantic.field_validator
function_tool: agents.function_tool
gen_trace_id: agents.gen_trace_id
get_search_tools: utils.search.search_tools.get_search_tools
html: html
httpx: httpx
Literal: typing.Literal
load_dotenv: dotenv.load_dotenv
logging: logging
os: os
Path: pathlib.Path
re: re
Runner: agents.Runner
setup_logging: utils.logging.logging_config.setup_logging
trace: agents.trace
Verdict: verifact_agents.verdict_writer.Verdict
verdict_writer_agent: verifact_agents.verdict_writer.verdict_writer_agent
VerifactManager: src.verifact_manager.VerifactManager
WebSearchTool: agents.WebSearchTool

functions:
__init__:
type: sync
module: src.verifact_agents.evidence_hunter
args: [self, trust_sources_path, search_tools]
_deduplicate_claims:
type: sync
module: src.verifact_agents.claim_detector
args: [self, claims]
params:
returns: list[Claim]
_detect_claims:
type: async
module: src.verifact_manager
args: [self, text]
params:
returns: list[Claim]
_gather_evidence:
type: async
module: src.verifact_manager
args: [self, claims]
params:
returns: list[tuple]
_gather_evidence_for_claim:
type: async
module: src.verifact_manager
args: [self, claim]
params:
returns: list[Evidence]
_generate_all_verdicts:
type: async
module: src.verifact_manager
args: [self, claims_with_evidence]
params:
returns: list[Verdict]
_generate_verdict_for_claim:
type: async
module: src.verifact_manager
args: [self, claim, evidence]
params:
returns: Verdict
_get_diversity_tool_requirements:
type: sync
module: src.verifact_agents.evidence_hunter
args: [self]
_get_serper_tool_requirements:
type: sync
module: src.verifact_agents.evidence_hunter
args: [self]
_normalize_whitespace:
type: sync
module: src.verifact_agents.claim_detector
args: [self, text]
params:
returns: str
_parse_serper_results:
type: sync
module: src.utils.search.search_tools
args: [data, search_type, num_results]
params:
returns: list[dict]
_preprocess_text:
type: sync
module: src.verifact_agents.claim_detector
args: [self, text]
params:
returns: str
_sanitize_text:
type: sync
module: src.verifact_agents.claim_detector
args: [text]
params:
returns: str
_validate_text_input:
type: sync
module: src.verifact_agents.claim_detector
args: [text, min_length, max_length]
params:
returns: str
deduplicate_evidence:
type: sync
module: src.verifact_agents.evidence_hunter
args: [evidence_list]
params:
returns: list[Evidence]
detect_claims:
type: async
module: src.verifact_agents.claim_detector
args: [self, text, min_checkworthiness]
params:
returns: list[Claim]
get_claim_requirements:
type: sync
module: src.verifact_agents.evidence_hunter
args: [self, trust_sources]
get_entity_names:
type: sync
module: src.verifact_agents.claim_detector
args: [self]
params:
returns: list[str]
get_evidence_requirements:
type: sync
module: src.verifact_agents.evidence_hunter
args: [self]
get_output_requirements:
type: sync
module: src.verifact_agents.evidence_hunter
args: [self]
get_prompt:
type: sync
module: src.verifact_agents.evidence_hunter
args: [self, trust_sources]
get_search_tools:
type: sync
module: src.utils.search.search_tools
args: [tool_names]
params:
returns: list[Any]
get_summary:
type: sync
module: src.verifact_agents.claim_detector
args: [self]
params:
returns: str
get_tool_requirements:
type: sync
module: src.verifact_agents.evidence_hunter
args: [self]
get_trust_sources:
type: sync
module: src.verifact_agents.evidence_hunter
args: [path]
get_websearch_tool:
type: sync
module: src.utils.search.search_tools
args: [user_location]
params:
returns: WebSearchTool
handle_message:
type: async
module: app
args: [message]
has_entities:
type: sync
module: src.verifact_agents.claim_detector
args: [self]
params:
returns: bool
is_checkworthy:
type: sync
module: src.verifact_agents.claim_detector
args: [self, threshold]
params:
returns: bool
is_high_confidence:
type: sync
module: src.verifact_agents.claim_detector
args: [self, threshold]
params:
returns: bool
on_chat_start:
type: async
module: app
process_claims:
type: async
module: src.verifact_agents.claim_detector
args: [text, min_checkworthiness]
params:
returns: list[Claim]
process_claims_tool:
type: tool
module: src.verifact_agents.claim_detector
args: [text, min_checkworthiness]
params:
returns: list[Claim]
progress_callback:
type: async
module: app
args: [msg, update]
query_formulation:
type: sync
module: src.verifact_agents.evidence_hunter
args: [self, claim]
run:
type: async
module: src.verifact_manager
args: [self, query, progress_callback, progress_msg]
params:
returns: None
serper_search:
type: tool
module: src.utils.search.search_tools
args: [query, num_results, search_type]
params:
returns: list[dict]
validate_claim_text:
type: sync
module: src.verifact_agents.claim_detector
args: [cls, v]
validate_context:
type: sync
module: src.verifact_agents.claim_detector
args: [cls, v]

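The functions table lists `deduplicate_evidence(evidence_list) -> list[Evidence]` in `src.verifact_agents.evidence_hunter`. A small sketch of what such a pass might do is below; the `Evidence` fields (taken loosely from the relevance/stance/source vocabulary in the verdict-writer guidelines) and the "keep the most relevant item per source URL" policy are both assumptions.

```python
from dataclasses import dataclass

# Hypothetical Evidence shape; field names echo the relevance/stance
# terminology in the verdict-writer guidelines but are not confirmed
# by the manifest.
@dataclass
class Evidence:
    content: str
    source: str       # source URL, used here as the dedup key
    relevance: float  # 0.0-1.0
    stance: str       # "supporting", "contradicting", or "neutral"

def deduplicate_evidence(evidence_list: list) -> list:
    # Assumed policy: collapse duplicates by source URL, keeping the
    # single most relevant item from each source.
    best = {}
    for ev in evidence_list:
        kept = best.get(ev.source)
        if kept is None or ev.relevance > kept.relevance:
            best[ev.source] = ev
    return list(best.values())
```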
variables:
SERPER_API_KEY:
type: env
params:
caller: [os.getenv]
path: [src.utils.search.search_tools]
USE_SERPER:
type: env
params:
caller: [os.getenv]
path: [src.utils.search.search_tools, src.verifact_agents.evidence_hunter]

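The variables section records two environment reads via `os.getenv`: `SERPER_API_KEY` and `USE_SERPER`, the latter consulted by both the search tooling and the evidence hunter. One plausible reading is that `USE_SERPER` gates which search backend `get_search_tools` wires up, falling back to the built-in `WebSearchTool` otherwise; the sketch below illustrates that gating, but the selection logic, the truthy-string handling, and the `choose_search_backend` name are all assumptions.

```python
import os

# Hedged sketch of how USE_SERPER / SERPER_API_KEY (per the variables
# section) might gate backend selection. The function name and the
# truthy-value set are illustrative assumptions.
def choose_search_backend(env=None) -> str:
    env = os.environ if env is None else env
    use_serper = env.get("USE_SERPER", "false").lower() in ("1", "true", "yes")
    if use_serper and env.get("SERPER_API_KEY"):
        return "serper_search"
    # Without an opt-in plus an API key, fall back to the built-in tool.
    return "web_search"
```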
files:
path:
type: variable
actions: [read]
params:
caller: [Path]
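The files section above records a single file read through `pathlib.Path` on a variable `path`, which lines up with the `get_trust_sources(path)` entry in the functions table. A minimal sketch of such a loader follows; the file format (one trusted source per line, `#` comments, blanks ignored) and the empty-list fallback for a missing file are assumptions.

```python
from pathlib import Path

# Hypothetical sketch of get_trust_sources (args: path), consistent with
# the files section's read-via-Path entry. The line-per-source format
# with '#' comments is an assumption about the trust-sources file.
def get_trust_sources(path) -> list:
    p = Path(path)
    if not p.exists():
        return []  # assumed fallback: no file means no trusted sources
    lines = p.read_text(encoding="utf-8").splitlines()
    return [ln.strip() for ln in lines
            if ln.strip() and not ln.lstrip().startswith("#")]
```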