Table question answering pipeline failing to save #32128

Closed
2 of 4 tasks
daniellok-db opened this issue Jul 22, 2024 · 3 comments · Fixed by #32149
daniellok-db commented Jul 22, 2024

System Info

  • transformers version: 4.43.0.dev0 (installed from source)
  • Platform: macOS-14.4.1-arm64-arm-64bit
  • Python version: 3.9.13
  • Huggingface_hub version: 0.23.5
  • Safetensors version: 0.4.2
  • Accelerate version: 0.25.0
  • Accelerate config: not found
  • PyTorch version (GPU?): 2.1.1 (False)
  • Tensorflow version (GPU?): 2.16.1 (False)
  • Flax version (CPU?/GPU?/TPU?): not installed (NA)
  • Jax version: not installed
  • JaxLib version: not installed
  • Using distributed or parallel set-up in script?:

Who can help?

@muellerzr @amyeroberts

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • My own task or dataset (give details below)

Reproduction

Run the following script, which simply saves a pipeline:

```python
import transformers

pipe = transformers.pipeline(
    task="table-question-answering", model="google/tapas-tiny-finetuned-wtq"
)

pipe.save_pretrained("test")
```

This raises the following exception:

```
ValueError: You are trying to save a non contiguous tensor: `tapas.encoder.layer.0.attention.self.query.weight` which is not allowed. It either means you are trying to save tensors which are reference of each other in which case it's recommended to save only the full tensors, and reslice at load time, or simply call `.contiguous()` on your tensor to pack it before saving.
```
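For context (my own illustration, not part of the traceback): safetensors refuses to serialize tensors whose storage isn't packed in memory, and `.contiguous()` copies a view into a fresh, packed buffer. A minimal torch-only sketch of the condition it complains about:

```python
import torch

# A transposed view shares storage with its base tensor, so its memory
# layout is not packed -- the situation safetensors rejects when saving.
w = torch.randn(4, 8)
v = w.t()                      # a non-contiguous view
print(v.is_contiguous())       # False

packed = v.contiguous()        # copies into a freshly packed buffer
print(packed.is_contiguous())  # True
print(torch.equal(packed, v))  # True: same values, different layout
```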

I'm not sure exactly what's going on, but after reverting this commit, saving succeeds.

Is there any way to disable superfast init as a workaround?

Expected behavior

Saving a pretrained pipeline should not fail with an exception.

@daniellok-db

I eventually got around the issue by loading the model separately from the pipeline with low_cpu_mem_usage=True, but I'm still curious what the underlying problem is.
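Roughly, the workaround above looks like this (a sketch, assuming the same checkpoint as the repro):

```python
import transformers

# Workaround sketch: load the model separately with low_cpu_mem_usage=True,
# then hand it (plus a tokenizer) to the pipeline before saving.
model = transformers.AutoModelForTableQuestionAnswering.from_pretrained(
    "google/tapas-tiny-finetuned-wtq", low_cpu_mem_usage=True
)
tokenizer = transformers.AutoTokenizer.from_pretrained(
    "google/tapas-tiny-finetuned-wtq"
)
pipe = transformers.pipeline(
    task="table-question-answering", model=model, tokenizer=tokenizer
)
pipe.save_pretrained("test")
```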


amyeroberts commented Jul 22, 2024

@daniellok-db Could you try adding `_supports_param_buffer_assignment = False` to the pretrained model class? If that works, would you like to open a PR to add it? That way you get the GitHub contribution.
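For anyone following along, the suggestion amounts to a class-level flag on the model's pretrained base class (illustrated here with a stand-in class, not the real TapasPreTrainedModel):

```python
import torch.nn as nn

# Stand-in sketch, not the actual transformers class: the suggested fix is
# a single class attribute on the model's PreTrainedModel subclass.
class DemoPreTrainedModel(nn.Module):
    # Opt out of parameter buffer assignment during loading, so weights go
    # through the copy path and stay contiguous for safetensors to save.
    _supports_param_buffer_assignment = False

print(DemoPreTrainedModel._supports_param_buffer_assignment)  # False
```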

cc @muellerzr Could you look into why this wasn't caught for this model? We'll want to make sure all the models are load/save compatible before the next release


daniellok-db commented Jul 23, 2024

@amyeroberts Thanks! Made the PR and manually verified that saving works.
