
Test Strategy : Automated testing #127

Open
planetf1 opened this issue Jan 22, 2025 · 1 comment

@planetf1
Collaborator

We need to consider how we test beehive:

  • locally
  • automatically as part of the PR/CI process
  • What framework/approach do we want to take with fairly standalone Python code? pytest, coordinated by Poetry?
  • How much do we focus on isolated testing (stubbing out the LLM, for example) vs. integration tests against a real LLM? (See the stubbed-LLM pytest sketch after this list.)
  • If we do need an LLM, which one, where does it run, and how is it configured?
  • How do we manage PR/CI verification? This is generally straightforward with GitHub Actions, and we could run a small LLM within a GitHub runner. How does bee do it? Can we use the same LLM backend? That needs secrets.
  • How do we check test results? With regular code this is easy: assert on the output. With LLMs involved, do we need another LLM to interpret the results? (See the judge sketch at the end of this comment.)
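To make the "stub out the LLM" option concrete, here is a minimal pytest sketch. It assumes the code under test accepts an injectable LLM client; `FakeLLM` and `build_forecast_message` are hypothetical names used for illustration, not the actual beehive API.

```python
# Minimal sketch of isolated testing with the LLM stubbed out.
# FakeLLM and build_forecast_message are hypothetical, not real beehive code.
import pytest


class FakeLLM:
    """Stand-in for a real LLM backend: returns a canned response."""

    def __init__(self, canned_response: str):
        self.canned_response = canned_response
        self.prompts = []  # record prompts so tests can assert on what was sent

    def generate(self, prompt: str) -> str:
        self.prompts.append(prompt)
        return self.canned_response


def build_forecast_message(llm, city: str) -> str:
    """Hypothetical unit under test: wraps an LLM call in plain Python logic."""
    raw = llm.generate(f"Summarise today's weather in {city}")
    return f"Forecast for {city}: {raw.strip()}"


def test_forecast_uses_llm_output():
    llm = FakeLLM(canned_response="sunny, 21°C ")
    message = build_forecast_message(llm, "Berlin")
    # With the LLM stubbed out, assertions are ordinary string checks.
    assert message == "Forecast for Berlin: sunny, 21°C"
    assert llm.prompts == ["Summarise today's weather in Berlin"]
```

With this style of dependency injection the prompt construction and post-processing logic get deterministic assertions, and only the integration tests need a real backend.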

This was brought up in issue #125

Originally posted by @planetf1 in #125 (comment)
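For the last bullet, a hedged sketch of the "another LLM interprets the results" idea, assuming an OpenAI-compatible chat endpoint (which could be a small local model in a runner). The endpoint URL, API key, and model name are placeholders read from environment variables or CI secrets, not real project configuration.

```python
# Sketch of an LLM-as-judge assertion helper; all endpoint/model names are
# placeholders supplied via environment variables or CI secrets.
import os

import pytest
import requests


def llm_judge(answer: str, rubric: str) -> bool:
    """Ask a judge model whether `answer` satisfies `rubric`; expect YES/NO."""
    response = requests.post(
        os.environ["JUDGE_ENDPOINT"],  # e.g. a /v1/chat/completions URL
        headers={"Authorization": f"Bearer {os.environ.get('JUDGE_API_KEY', '')}"},
        json={
            "model": os.environ.get("JUDGE_MODEL", "small-judge-model"),
            "messages": [
                {
                    "role": "user",
                    "content": (
                        "Answer only YES or NO. Does the following text satisfy "
                        f"this rubric?\nRubric: {rubric}\nText: {answer}"
                    ),
                }
            ],
            "temperature": 0,
        },
        timeout=60,
    )
    response.raise_for_status()
    verdict = response.json()["choices"][0]["message"]["content"]
    return verdict.strip().upper().startswith("YES")


@pytest.mark.skipif("JUDGE_ENDPOINT" not in os.environ,
                    reason="needs a judge LLM endpoint")
def test_agent_answer_is_grounded():
    # In a real test the answer would come from running the agent end to end.
    answer = "Today in Berlin it is sunny with a high of 21°C."
    assert llm_judge(answer, "Mentions a city and a temperature.")
```

In CI the judge could be the same backend the agent itself uses, which keeps the secrets down to one set of credentials.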

@psschwei
Collaborator

For me, the simple first step would be to make sure the weather example still works (i.e. run it locally after you make your code changes).
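One possible way to fold that manual check into pytest is a smoke test that shells out to the example. The `examples/weather.py` path and the `smoke` marker are assumptions about the repository layout, not its real structure.

```python
# Sketch of a local smoke test for the weather example; the script path and
# marker name are assumptions, and the marker would need registering in the
# pytest config to avoid an unknown-mark warning.
import subprocess
import sys

import pytest


@pytest.mark.smoke  # opt in locally with: pytest -m smoke
def test_weather_example_runs():
    result = subprocess.run(
        [sys.executable, "examples/weather.py"],  # hypothetical path
        capture_output=True,
        text=True,
        timeout=300,
    )
    # The example needs a working LLM backend, so only check that it completes
    # and prints something rather than asserting exact wording.
    assert result.returncode == 0, result.stderr
    assert result.stdout.strip()
```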

As far as the testing framework goes, I think we should sync with @vabarbosa and use the same thing that the bee-py framework uses.
