
About disagreement baseline #22

Open
Caixy1113 opened this issue Oct 8, 2024 · 0 comments

Comments

@Caixy1113
Hello, thank you for your outstanding work! I noticed that in the disagreement algorithm paper, the intrinsic rewards are separate from the environment rewards, and the agent is not trained with reinforcement learning; instead, the agent is optimized directly by backpropagating gradients through the differentiable reward. Where is this specifically implemented in rlexplore?
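For reference, here is a minimal sketch (not rlexplore's actual code, all names are hypothetical) of what I mean by optimizing the agent directly through the differentiable disagreement reward, i.e. the variance of an ensemble of forward dynamics models:

```python
# Sketch of the disagreement-style objective: the intrinsic reward is the
# variance of next-state predictions across an ensemble of forward models.
# Because this variance is differentiable w.r.t. the action, the policy can
# be updated by backpropagating through the reward instead of using RL.
import torch
import torch.nn as nn

class ForwardModel(nn.Module):
    """One ensemble member: predicts the next state from (state, action)."""
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def disagreement_reward(ensemble, state, action):
    """Intrinsic reward = variance of ensemble predictions, per sample."""
    preds = torch.stack([m(state, action) for m in ensemble], dim=0)
    return preds.var(dim=0).mean(dim=-1)

# Hypothetical policy update: maximize the differentiable intrinsic reward
# by gradient ascent through the action, rather than via an RL algorithm.
state_dim, action_dim = 8, 2
ensemble = [ForwardModel(state_dim, action_dim) for _ in range(5)]
for m in ensemble:
    m.requires_grad_(False)  # only the policy is updated in this step
policy = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, action_dim))
optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)

state = torch.randn(32, state_dim)       # dummy batch of states
action = policy(state)                   # action stays differentiable
loss = -disagreement_reward(ensemble, state, action).mean()
optimizer.zero_grad()
loss.backward()
optimizer.step()
```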
