Adds social context RL models #17
Enhanced the plot_recovery method to dynamically scale figure size and layout based on the number of parameters, ensuring square subplots and compact spacing. Updated plotting utilities to use constrained layout, improved marker sizing, annotation, and axis aspect handling for clearer and more consistent visualizations.
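The dynamic scaling described above can be sketched as follows. This is a minimal illustration, not the actual `plot_recovery` implementation; the helper name `recovery_grid`, the 3-column cap, and the per-panel size are assumptions for demonstration only.

```python
import math

def recovery_grid(n_params, ncols=3, panel=3.0):
    """Hypothetical sketch: derive a compact grid and figure size from the
    number of parameters, capping the grid at `ncols` columns so subplots
    stay square as more parameters are added."""
    ncols = min(ncols, n_params)
    nrows = math.ceil(n_params / ncols)
    # Square panels: total figure size scales with the grid dimensions.
    figsize = (panel * ncols, panel * nrows)
    return nrows, ncols, figsize
```

For example, a 4-parameter model would yield a 2x3 grid with a 9x6-inch figure under these assumptions.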
Updated the reinforcement learning example notebook to include clearer titles. Reorganized introductory content for improved clarity.
Renamed "rewards" to "outcomes"; fixed the reward probabilities for self/other and made them more human readable; added counterbalancing and fixed option pairs for the simulate function; made sure rewards are not overwritten; made sure the number of blocks (nblock) does not have to be a multiple of 6, 12, etc.
Adapted the function call for the 4a model from "rewards" to "outcomes".
This reverts commit 13efef8.
Updated the default value of the njobs parameter from -1 to -2 in both the EMModel and EMConfig classes.
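Assuming the joblib-style convention for negative job counts (which the commit message does not state explicitly), the change means fitting now leaves one CPU free by default instead of saturating the machine. A small sketch of that convention, with a hypothetical `effective_workers` helper:

```python
import os

def effective_workers(njobs, n_cpus=None):
    """Sketch of the joblib-style negative-njobs convention assumed here:
    njobs=-1 uses all CPUs, njobs=-2 all but one, and so on
    (i.e. n_cpus + 1 + njobs workers for negative values)."""
    n_cpus = n_cpus or os.cpu_count()
    return njobs if njobs > 0 else n_cpus + 1 + njobs
```

On an 8-core machine, njobs=-2 would therefore run 7 workers, keeping the system responsive during long EM fits.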
Replaces the deprecated set_constrained_layout_pads() with get_layout_engine().set() for adjusting subplot paddings, ensuring compatibility with newer matplotlib versions.
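The migration above can be illustrated with a minimal snippet. The exact padding values are placeholders; the API calls (`layout="constrained"` and `ConstrainedLayoutEngine.set`) are the documented matplotlib replacements for the removed `Figure.set_constrained_layout_pads`.

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for this sketch
import matplotlib.pyplot as plt

fig, axes = plt.subplots(2, 3, layout="constrained")

# Old (deprecated in matplotlib 3.6, later removed):
#     fig.set_constrained_layout_pads(w_pad=0.05, h_pad=0.05)
# New: configure the constrained layout engine directly.
fig.get_layout_engine().set(w_pad=0.05, h_pad=0.05, wspace=0.02, hspace=0.02)
```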
Refactored the bayes model API by renaming the 'simulate' and 'fit' functions to 'bayes_sim' and 'bayes_fit' respectively.
Renamed functions in pyem/models/glm.py for improved clarity and consistency: 'simulate' to 'glm_sim', 'fit' to 'glm_fit', 'simulate_decay' to 'glm_decay_sim', and 'fit_decay' to 'glm_decay_fit'.
Replaces deprecated or renamed simulation and fitting function imports in test files to match updated function names (e.g., rw1a1b_sim, rw2a1b_sim, bayes_sim, glm_sim, glm_decay_sim, etc.)
Pull Request Overview
This PR adds two new social context reinforcement learning models (1q3a1b and 1q4a1b) to rl.py, provides example implementations in the rl.ipynb notebook, and modifies the plot_recovery() function in api.py to use a 3-column grid layout with improved spacing.
Key changes:
- Added rw3a1b_sim/fit and rw4a1b_sim/fit functions for social RL models
- Renamed existing simulation functions from *_simulate to *_sim for consistency
- Updated plot_recovery() to use 3-column grid layout with constrained_layout
Reviewed Changes
Copilot reviewed 12 out of 13 changed files in this pull request and generated 6 comments.
| File | Description |
|---|---|
| pyem/models/rl.py | Added two new social RL models (3α-1β and 4α-1β) and renamed simulation functions for consistency |
| pyem/api.py | Modified plot_recovery() to use 3-column grid layout with improved spacing |
| pyem/utils/plotting.py | Updated plot_scatter() with smaller default figure size and improved aspect ratio handling |
| pyem/core/em.py | Changed default njobs from -1 to -2 |
| pyem/models/bayes.py | Renamed simulate/fit functions to bayes_sim/bayes_fit and removed unused import |
| tests/*.py | Updated function names to match new *_sim naming convention |
| examples/glm.ipynb | Updated figure size metadata to reflect new default |
Added two new social context reinforcement learning models (1q3a1b and 1q4a1b) to rl.py
Added example implementations for the new models in the rl.ipynb notebook
Modified the plot_recovery() function in api.py