
Conversation

AzeezIsh (Contributor)

Enhances the context and quality of LLM requests by improving how KnowledgeDB responses are incorporated into prompts. The main changes focus on extracting and formatting more comprehensive information from KnowledgeDB examples, ensuring that all relevant non-empty fields are included in both general and code-specific LLM queries. The dependency on aali-sharedtypes has been updated accordingly.

Improvements to LLM prompt construction:

  • The BuildFinalQueryForGeneralLLMRequest function now appends all non-empty fields from each KnowledgeDB example, including names, types, parent classes, document names, summaries, dependencies, keywords, and tags, to provide maximum context in the generated prompt.
  • The BuildFinalQueryForCodeLLMRequest function constructs both summary and code sections using all available non-empty fields from each example, resulting in richer and more informative code generation prompts (a sketch of the non-empty-field pattern both functions rely on follows below).
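
For illustration, here is a minimal Go sketch of the non-empty-field pattern described above. The `Example` struct, its field names, and the `appendIfNotEmpty` helper are hypothetical stand-ins for the actual types in aali-sharedtypes and the real functions changed by this PR; they only show how a prompt section can skip empty fields.

```go
package main

import (
	"fmt"
	"strings"
)

// Example is a simplified, illustrative stand-in for a KnowledgeDB example.
// The real type lives in aali-sharedtypes; these field names are assumptions.
type Example struct {
	Name         string
	Type         string
	ParentClass  string
	DocumentName string
	Summary      string
	Dependencies []string
	Keywords     []string
	Tags         []string
}

// appendIfNotEmpty adds a labeled line only when the value is non-empty,
// mirroring the "include all relevant non-empty fields" behavior.
func appendIfNotEmpty(b *strings.Builder, label, value string) {
	if value != "" {
		fmt.Fprintf(b, "%s: %s\n", label, value)
	}
}

// buildGeneralContext sketches how one example's fields could be folded
// into the context section of a general LLM prompt.
func buildGeneralContext(ex Example) string {
	var b strings.Builder
	appendIfNotEmpty(&b, "Name", ex.Name)
	appendIfNotEmpty(&b, "Type", ex.Type)
	appendIfNotEmpty(&b, "Parent class", ex.ParentClass)
	appendIfNotEmpty(&b, "Document", ex.DocumentName)
	appendIfNotEmpty(&b, "Summary", ex.Summary)
	appendIfNotEmpty(&b, "Dependencies", strings.Join(ex.Dependencies, ", "))
	appendIfNotEmpty(&b, "Keywords", strings.Join(ex.Keywords, ", "))
	appendIfNotEmpty(&b, "Tags", strings.Join(ex.Tags, ", "))
	return b.String()
}

func main() {
	ex := Example{Name: "solve", Type: "method", Summary: "Runs the solver."}
	fmt.Print(buildGeneralContext(ex)) // empty fields produce no output lines
}
```

A code-specific prompt builder would presumably apply the same guard to a separate code section, so that neither the summary nor the code portion of the prompt ever contains empty labeled lines.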

@AzeezIsh AzeezIsh requested a review from a team as a code owner September 19, 2025 16:47
@AzeezIsh AzeezIsh linked an issue Sep 23, 2025 that may be closed by this pull request
@rcrisanti rcrisanti merged commit b0cdb0b into main Sep 24, 2025
8 checks passed
@rcrisanti rcrisanti deleted the knowledgedb_updates branch September 24, 2025 15:05
Successfully merging this pull request may close these issues:

  • Fix prompt formation depending on DbResponse contents