
When I use the Prompt Generator, I get an error: Failed to generate rule config. Error: [openai] Error: 1 validation error for LLMResult\nmodel\n Input should be a valid string #12177

Open
yanfzhang opened this issue Dec 28, 2024 · 2 comments
Labels
🐞 bug Something isn't working

Comments

@yanfzhang

Self Checks

  • This is only for bug reports; if you would like to ask a question, please head to Discussions.
  • I have searched for existing issues, including closed ones.
  • I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
  • [FOR CHINESE USERS] Please submit issues in English only; otherwise they will be closed. Thank you! :)
  • Please do not modify this template :) and fill in all the required fields.

Dify version

0.14.2

Cloud or Self Hosted

Self Hosted (Docker)

Steps to reproduce

The workflow works fine, but the Prompt Generator does not.

error response:
"Failed to generate rule config. Error: [openai] Error: 1 validation error for LLMResult\nmodel\n Input should be a valid string [type=string_type, input_value=None, input_type=NoneType]\n For further information visit https://errors.pydantic.dev/2.9/v/string_type"

[screenshot: the error message shown in the Prompt Generator UI]
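
For reference, the validation failure can be reproduced outside Dify with a minimal pydantic v2 model (a sketch only; Dify's real LLMResult has more fields, but the required `model: str` field is what matters here, inferred from the error message):

```python
from pydantic import BaseModel, ValidationError

# Minimal stand-in for Dify's LLMResult (assumed shape, inferred from
# the error message): `model` is declared as a required string field.
class LLMResult(BaseModel):
    model: str

try:
    # An upstream response that omits "model" ends up constructing the
    # result with model=None, which fails string validation.
    LLMResult(model=None)
except ValidationError as exc:
    print(exc)
    # 1 validation error for LLMResult
    # model
    #   Input should be a valid string [type=string_type, input_value=None, input_type=NoneType]
```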

✔️ Expected Behavior

Works fine in the Prompt Generator

❌ Actual Behavior

The Prompt Generator fails with the error above.

@dosubot dosubot bot added the 🐞 bug Something isn't working label Dec 28, 2024

dosubot bot commented Dec 28, 2024

@yanfzhang Hello there! I'm here to help you with any bugs, questions, or contributions you may have. Let's work together to resolve the issue you're experiencing.

The error "1 validation error for LLMResult: Input should be a valid string" in the Prompt Generator occurs because the parse method in the RuleConfigGeneratorOutputParser class expects the "prompt" and "opening_statement" fields in the input to be valid strings. If these fields are not strings, a ValueError is raised, which is then caught and re-raised as an OutputParserError.

To resolve this issue, ensure that the input text being parsed contains a valid JSON markdown with "prompt" and "opening_statement" fields as strings. Additionally, verify that the invoke_llm method in the LLMGenerator class returns a response where response.message.content is a string, as the code expects this to be the case when casting to str [1][2].
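
A sketch of the kind of check described above (illustrative only, not Dify's actual parser code; the field names follow the bot's description):

```python
import json

# Illustrative sketch, not Dify's RuleConfigGeneratorOutputParser: the
# LLM output must be JSON containing "prompt" and "opening_statement"
# as strings, otherwise a ValueError is raised.
def parse_rule_config(text: str) -> dict:
    config = json.loads(text)
    for field in ("prompt", "opening_statement"):
        if not isinstance(config.get(field), str):
            raise ValueError(f"Expected '{field}' to be a string")
    return config
```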


@yanfzhang (Author)

My mistake. I wrapped the OpenAI-compatible API behind my own endpoint, but my endpoint's response did not include the model parameter.
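
If anyone hits the same issue with a custom OpenAI-compatible endpoint: the chat completions response body must include the top-level model field. A minimal sketch of the expected shape (standard OpenAI chat completions fields; all values are placeholders):

```python
# Minimal OpenAI-compatible chat completions response body. Omitting the
# top-level "model" key is what produces model=None on the Dify side.
response_body = {
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "created": 1735344000,
    "model": "gpt-4o-mini",  # required: omitting this reproduces the error above
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Hello!"},
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 9, "completion_tokens": 3, "total_tokens": 12},
}
```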
