Hello,
When running the ITSEdit watermarking algorithm on my model, I consistently hit this error during generation:
{"task_id": "HumanEval/0", "completion": "ERROR: The size of tensor a (176) must match the size of tensor b (177) at non-singleton dimension 3"}
{"task_id": "HumanEval/1", "completion": "ERROR: The size of tensor a (166) must match the size of tensor b (167) at non-singleton dimension 3"}
...
I've verified that:
- My `vocab_size` matches the embedding size of my LLM (via `model.get_input_embeddings().weight.shape[0]`; see the sketch below),
- I removed all custom `attention_mask` handling and rely on `use_cache=True`.
But the off-by-one mismatch persists (n vs. n+1), and I can't pinpoint where the extra position is creeping in.
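For reference, this is roughly how I ran the vocabulary-size check; the checkpoint name below is only a placeholder for my local Gemma-3 model, and the ITSEdit watermark config itself is set up separately:

```python
# Rough sketch of the vocab-size sanity check (checkpoint name is a placeholder).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "google/gemma-3-4b-it"  # placeholder for my local Gemma-3 checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Number of rows in the input embedding matrix = effective vocabulary size.
embedding_rows = model.get_input_embeddings().weight.shape[0]
print("len(tokenizer)        :", len(tokenizer))
print("embedding matrix rows :", embedding_rows)

# The vocab_size I pass to the ITSEdit config is taken from embedding_rows,
# so the two values above agree in my setup.
```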
My environment
- Python: 3.11.11
- PyTorch: 2.2.0+cu121
- Transformers: 4.51.3
- Model: Gemma-3