
Error with some pretrained models: EOFError: Ran out of input when loading model #33

@PabloSczn

Hello,

I'm encountering an issue when loading pretrained models from the OpenXAI repository. Specifically, when I attempt to generate explanations, I receive the following error with the logistic regression model files for various datasets:

Data: german, Model: lr
Traceback (most recent call last):
  File "...\generate_explanations.py", line 42, in <module>
    model = LoadModel(data_name, model_name, pretrained=pretrained)
  File "...\model.py", line 42, in LoadModel
    state_dict = torch.load(model_path+model_filename, map_location=torch.device('cpu'))
  File "...\torch\serialization.py", line 1114, in load
    return _legacy_load(
  File "...\torch\serialization.py", line 1338, in _legacy_load
    magic_number = pickle_module.load(f, **pickle_load_args)
EOFError: Ran out of input

It appears that the .pt model file being loaded is either incomplete or corrupted.
I also tried downloading the model file directly from the Dataverse link provided in the repository, but after downloading the .pt file and attempting to generate explanations again, it failed with the same error.
