Codecov Report
✅ All modified and coverable lines are covered by tests.

Additional details and impacted files:

@@            Coverage Diff            @@
##             main     #153      +/-  ##
=========================================
- Coverage    7.93%    5.18%    -2.75%
=========================================
  Files         124      104       -20
  Lines       17502    16035     -1467
  Branches      382      169      -213
=========================================
- Hits         1388      832      -556
+ Misses      16103    15203      -900
+ Partials       11        0       -11
shayansadeghieh (Contributor, Author) commented on Aug 29, 2025, on deleted lines 189 to 195:
def add_hook_in_to_mlp(mlp):  # type: ignore
    mlp.hook_in = HookPoint()
    original_forward = mlp.forward
    mlp.forward = lambda x: original_forward(mlp.hook_in(x))


for block in model.blocks:
    add_hook_in_to_mlp(block.mlp)
@hijohnnylin I don't think we need this anymore with tlens >= 3; please correct me if I'm wrong. It is causing infinite loops with model_bridge.
Try running this in a notebook and you'll see that the new version of tlens automatically gives the MLP block a hook_in attribute, so we shouldn't need to add it ourselves:
!pip install git+https://github.com/shayansadeghieh/[email protected]

from transformer_lens.model_bridge import TransformerBridge
from transformer_lens import HookedTransformer

print("\n=== NEW: TransformerBridge ===")
new_model = TransformerBridge.boot_transformers("gpt2-small")
new_model.enable_compatibility_mode(disable_warnings=True)

# Test if hook_in exists
mlp_block = new_model.blocks[0].mlp
print(f"MLP has hook_in: {hasattr(mlp_block, 'hook_in')}")
print(f"MLP has hook_out: {hasattr(mlp_block, 'hook_out')}")

# Try to access it
try:
    hook = mlp_block.hook_in
    print(f"hook_in type: {type(hook)}")
    print(f"hook_in name: {hook.name}")
except AttributeError as e:
    print(f"ERROR: {e}")

print("=== OLD: HookedTransformer ===")
old_model = HookedTransformer.from_pretrained("gpt2-small")

# Test if hook_in exists
mlp_block = old_model.blocks[0].mlp
print(f"MLP has hook_in: {hasattr(mlp_block, 'hook_in')}")
print(f"MLP has hook_out: {hasattr(mlp_block, 'hook_out')}")

# Try to access it
try:
    hook = mlp_block.hook_in
    print(f"hook_in type: {type(hook)}")
except AttributeError as e:
    print(f"ERROR: {e}")
Problem
- add_hook_in_to_mlp monkey-patches each MLP's forward to insert a hook_in HookPoint; with model_bridge this causes infinite loops.
- With tlens >= 3, MLP blocks already expose hook_in natively, so the patch is redundant.
Fix
- Remove add_hook_in_to_mlp and the loop that applied it to every block, relying on the natively exposed hook point instead (a hedged usage sketch follows).
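A sketch of the intended replacement path, assuming the bridge's compatibility mode supports HookedTransformer-style to_tokens/run_with_hooks and that the natively exposed hook is addressable as "blocks.0.mlp.hook_in"; both are assumptions, not confirmed by this PR:

```python
from transformer_lens.model_bridge import TransformerBridge

model = TransformerBridge.boot_transformers("gpt2-small")
model.enable_compatibility_mode(disable_warnings=True)

def inspect_mlp_input(value, hook):
    # value is the activation tensor entering blocks[0].mlp
    print(hook.name, tuple(value.shape))
    return value

tokens = model.to_tokens("Hello world")
_ = model.run_with_hooks(
    tokens,
    fwd_hooks=[("blocks.0.mlp.hook_in", inspect_mlp_input)],
)
```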
Testing
- Verified in a notebook (snippet above) that TransformerBridge MLP blocks report hook_in and hook_out without the patch (a hedged test sketch follows).
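A sketch of a regression test that could back this up; it is not part of the PR and assumes pytest is available and that mlp.hook_in is exposed natively:

```python
import pytest
from transformer_lens.model_bridge import TransformerBridge

@pytest.fixture(scope="module")
def bridge_model():
    model = TransformerBridge.boot_transformers("gpt2-small")
    model.enable_compatibility_mode(disable_warnings=True)
    return model

def test_mlp_blocks_expose_hook_in(bridge_model):
    # With tlens >= 3 the hook points should exist without monkey-patching.
    for block in bridge_model.blocks:
        assert hasattr(block.mlp, "hook_in")
        assert hasattr(block.mlp, "hook_out")
```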