[bug]: hallucinations despite grounding responses with external knowledge sources #37

@grittypuffy

Description

CivicSphere's prototype uses Azure OpenAI with Grounding with Bing Search for Azure AI Agents to ensure verifiable information.

However, responses can still be factually incorrect. For example, when prompted with "Who is the president of USA?", it responds with Joe Biden. This issue occurs with the latest GPT-5o-mini models and impacts accuracy.

Here's the response provided by ChatGPT with GPT-5o-mini for the same prompt:

As of 2025, the President of the United States is **Joe Biden**. He was inaugurated for his second term on January 20, 2025.

Fixing this will require experimenting with other LLM models and grounding the agent further by procuring information from external sources for commonly known facts.

Challenges

  • Keeping the external knowledge base up to date to ensure relevance
  • Challenges with data procurement and engineering
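One possible mitigation is to serve commonly known, time-sensitive facts from a curated external knowledge base before falling back to the model, so stale parametric knowledge never wins. Below is a minimal, hedged sketch of that idea; the names (`KNOWN_FACTS`, `answer_with_grounding`, `llm_answer`) are illustrative and not part of the CivicSphere codebase, and `llm_answer` stands in for the actual Azure OpenAI call:

```python
# Illustrative sketch: prefer a curated knowledge base over the model's
# parametric memory for volatile facts. All names here are hypothetical.

KNOWN_FACTS = {
    # Curated, regularly refreshed entries for time-sensitive questions.
    "who is the president of usa": "Donald Trump (inaugurated January 20, 2025)",
}

def llm_answer(prompt: str) -> str:
    """Stand-in for the Azure OpenAI call; may return stale answers."""
    return "As of 2025, the President of the United States is Joe Biden."

def answer_with_grounding(prompt: str) -> str:
    # Normalize the prompt into a lookup key.
    key = prompt.strip().lower().rstrip("?")
    # Serve the curated fact directly when available.
    if key in KNOWN_FACTS:
        return KNOWN_FACTS[key]
    # Otherwise fall back to the (possibly hallucinating) model.
    return llm_answer(prompt)
```

A production version would replace the dictionary with the external knowledge sources discussed above, but the control flow (knowledge base first, model as fallback) stays the same.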

Metadata

Assignees

No one assigned

    Labels

    bug (Something isn't working), external (External dependency related issue)

    Projects

    No projects

    Milestone

    No milestone

    Relationships

    None yet

    Development

    No branches or pull requests
