Update dependency transformers to v4.41.2 #13
Merged
This PR contains the following updates:
transformers: `==4.37.2` -> `==4.41.2`
Release Notes
huggingface/transformers (transformers)
v4.41.2
Compare Source
Release v4.41.2
Mostly fixing some issues related to `trust_remote_code=True` and `from_pretrained`.
The `local_files_only` option was having a hard time when a `.safetensors` file did not exist. This is not expected; instead of trying to convert, we should just fall back to loading the `.bin` files.
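For illustration, a minimal sketch of the affected code path, assuming the checkpoint is already present in the local cache (the checkpoint name is just an example):

```python
from transformers import AutoModelForCausalLM

# With local_files_only=True, a missing .safetensors file no longer triggers a
# conversion attempt; loading simply falls back to the cached .bin weights.
model = AutoModelForCausalLM.from_pretrained(
    "gpt2",                 # example checkpoint, assumed to be cached locally
    local_files_only=True,
)
```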
v4.41.1: Fix PaliGemma finetuning, and some small bugs
Compare Source
Release v4.41.1
Fix PaliGemma finetuning:
The causal mask and label creation were causing label leaks when training. Kudos to @probicheaux for finding and reporting!
Other fixes:
Reverted huggingface/transformers@4ab7a28
v4.41.0: Phi3, JetMoE, PaliGemma, VideoLlava, Falcon2, FalconVLM & GGUF support
Compare Source
New models
Phi3
The Phi-3 model was proposed in Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone by Microsoft.
TLDR; Phi-3 introduces new RoPE scaling methods, which seem to scale fairly well! A 3b and a
Phi-3-mini is available in two context-length variants (4K and 128K tokens). It is the first model in its class to support a context window of up to 128K tokens, with little impact on quality.
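As a quick illustration, a minimal loading sketch; the checkpoint name is an assumption based on the publicly released Phi-3-mini weights, not something stated in these notes:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint name; with native Phi-3 support in this release,
# trust_remote_code is no longer required to load it.
model_id = "microsoft/Phi-3-mini-4k-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```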
JetMoE
JetMoe-8B is an 8B Mixture-of-Experts (MoE) language model developed by Yikang Shen and MyShell. The JetMoe project aims to provide LLaMA2-level performance from an efficient language model on a limited budget. To achieve this goal, JetMoe uses a sparsely activated architecture inspired by ModuleFormer. Each JetMoe block consists of two MoE layers: Mixture of Attention Heads and Mixture of MLP Experts. Given the input tokens, it activates a subset of its experts to process them. This sparse activation scheme enables JetMoe to achieve much better training throughput than similarly sized dense models. The training throughput of JetMoe-8B is around 100B tokens per day on a cluster of 96 H100 GPUs with a straightforward 3-way pipeline parallelism strategy.
PaliGemma
PaliGemma is a lightweight open vision-language model (VLM) inspired by PaLI-3, and based on open components like the SigLIP vision model and the Gemma language model. PaliGemma takes both images and text as inputs and can answer questions about images with detail and context, meaning that PaliGemma can perform deeper analysis of images and provide useful insights, such as captioning for images and short videos, object detection, and reading text embedded within images.
More than 120 checkpoints are released; see the collection here!
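A minimal usage sketch, assuming access to one of the released checkpoints; the repository name below is an example, and the checkpoints are gated on the Hub:

```python
from PIL import Image
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

model_id = "google/paligemma-3b-mix-224"   # example checkpoint from the collection

processor = AutoProcessor.from_pretrained(model_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id)

# A blank image keeps the sketch self-contained; any PIL image works.
image = Image.new("RGB", (224, 224), color="white")
inputs = processor(text="caption en", images=image, return_tensors="pt")

generated = model.generate(**inputs, max_new_tokens=20)
print(processor.decode(generated[0], skip_special_tokens=True))
```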
VideoLlava
Video-LLaVA exhibits remarkable interactive capabilities between images and videos, despite the absence of image-video pairs in the dataset.
💡 Simple baseline, learning united visual representation by alignment before projection
By binding unified visual representations to the language feature space, we enable an LLM to perform visual reasoning on both images and videos simultaneously.
🔥 High performance, complementary learning with video and image
Extensive experiments demonstrate the complementarity of modalities, showcasing significant superiority when compared to models specifically designed for either images or videos.
Falcon 2 and FalconVLM
Two new models from TII-UAE! They published a blog post with more details. Falcon2 introduces parallel MLP, and Falcon VLM uses the Llava framework.
GGUF `from_pretrained` support
You can now load most of the GGUF quants directly with transformers' `from_pretrained` to convert them to a classic PyTorch model. The API is simple; see the sketch below.
We plan closer integrations with the llama.cpp / GGML ecosystem in the future, see https://github.com/huggingface/transformers/issues/27712 for more details.
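A minimal sketch of that API, assuming the `gguf` Python package is installed; the repository and file names are illustrative examples, not taken from the notes above:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Example GGUF repository and quantized file name (assumptions for illustration).
repo_id = "TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF"
gguf_file = "tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf"

# Both the tokenizer and the model are converted back into regular PyTorch objects.
tokenizer = AutoTokenizer.from_pretrained(repo_id, gguf_file=gguf_file)
model = AutoModelForCausalLM.from_pretrained(repo_id, gguf_file=gguf_file)
```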
Quantization
New quant methods
In this release we support new quantization methods: HQQ and EETQ, contributed by the community. Read more about how to quantize any transformers model using HQQ and EETQ in the dedicated documentation section.
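As a rough illustration of the HQQ path (a minimal sketch, assuming the external `hqq` package is installed; the checkpoint name and `nbits` value are arbitrary examples, and EETQ works analogously through its own config class):

```python
from transformers import AutoModelForCausalLM, HqqConfig

# 4-bit HQQ quantization config; requires the external `hqq` package.
quant_config = HqqConfig(nbits=4)

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-125m",                # example checkpoint, not from the notes
    quantization_config=quant_config,
    device_map="cuda",
)
```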
`dequantize` API for bitsandbytes models
In case you want to dequantize models that have been loaded with bitsandbytes, this is now possible through the `dequantize` API (e.g. to merge adapter weights).
- `dequantize` API for bitsandbytes quantized models by @younesbelkada in https://github.com/huggingface/transformers/pull/30806
API-wise, you can achieve that with the following:
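A minimal sketch, assuming bitsandbytes is installed and using an example checkpoint name:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Load a model quantized on the fly with bitsandbytes (4-bit here).
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-125m",                                # example checkpoint
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
)

# Convert the quantized weights back to full precision, e.g. before merging
# adapter weights into the base model.
model.dequantize()
```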
Generation updates
- `min_p` sampling by @gante in https://github.com/huggingface/transformers/pull/30639
- `Gemma` work with `torch.compile` by @ydshieh in https://github.com/huggingface/transformers/pull/30775
SDPA support
- [BERT] Add support for sdpa by @hackyon in https://github.com/huggingface/transformers/pull/28802
Improved Object Detection
Addition of fine-tuning script for object detection models
Interpolation of embeddings for vision models
Add interpolation of embeddings. This enables predictions from pretrained models on input images of sizes different than those the model was originally trained on. Simply pass `interpolate_pos_embedding=True` when calling the model.
Added for: BLIP, BLIP 2, InstructBLIP, SigLIP, ViViT.
🚨 Might be breaking
- `evaluation_strategy` to `eval_strategy` 🚨🚨🚨 by @muellerzr in https://github.com/huggingface/transformers/pull/30190
Cleanups
Not breaking but important for Llama tokenizers
- [LlamaTokenizerFast] Refactor default llama by @ArthurZucker in https://github.com/huggingface/transformers/pull/28881
Fixes
- `prev_ci_results` by @ydshieh in https://github.com/huggingface/transformers/pull/30313
- `pad token id` in pipeline forward arguments by @zucchini-nlp in https://github.com/huggingface/transformers/pull/30285
- `jnp` import in `utils/generic.py` by @ydshieh in https://github.com/huggingface/transformers/pull/30322
- `AssertionError` in clip conversion script by @ydshieh in https://github.com/huggingface/transformers/pull/30321
- `pad_token_id` again by @zucchini-nlp in https://github.com/huggingface/transformers/pull/30338
- `Llama` family, fix `use_cache=False` generation by @ArthurZucker in https://github.com/huggingface/transformers/pull/30380
- `-rs` to show skip reasons by @ArthurZucker in https://github.com/huggingface/transformers/pull/30318
- `require_torch_sdpa` for test that needs sdpa support by @faaany in https://github.com/huggingface/transformers/pull/30408
- [LlamaTokenizerFast] Refactor default llama by @ArthurZucker in https://github.com/huggingface/transformers/pull/28881
- [Llava] + CIs fix red cis and llava integration tests by @ArthurZucker in https://github.com/huggingface/transformers/pull/30440
- `paths` filter to avoid the chance of being triggered by @ydshieh in https://github.com/huggingface/transformers/pull/30453
- `utils/check_if_new_model_added.py` by @ydshieh in https://github.com/huggingface/transformers/pull/30456
- [research_project] Most of the security issues come from this requirement.txt by @ArthurZucker in https://github.com/huggingface/transformers/pull/29977
- `WandbCallback` with third parties by @tomaarsen in https://github.com/huggingface/transformers/pull/30477
- `SourceFileLoader.load_module()` in dynamic module loading by @XuehaiPan in https://github.com/huggingface/transformers/pull/30370
- `HfQuantizer` quant method update by @younesbelkada in https://github.com/huggingface/transformers/pull/30484
- `bitsandbytes` error formatting ("Some modules are dispatched on ...") by @kyo-takano in https://github.com/huggingface/transformers/pull/30494
- `dtype_byte_size` to handle torch.float8_e4m3fn/float8_e5m2 types by @mgoin in https://github.com/huggingface/transformers/pull/30488
- [DETR] Remove timm hardcoded logic in modeling files by @amyeroberts in https://github.com/huggingface/transformers/pull/29038
- `_load_best_model` by @muellerzr in https://github.com/huggingface/transformers/pull/30553
- `use_cache` in kwargs for GPTNeoX by @zucchini-nlp in https://github.com/huggingface/transformers/pull/30538
- `use_square_size` after loading by @ydshieh in https://github.com/huggingface/transformers/pull/30567
- `output_router_logits` in SwitchTransformers by @lausannel in https://github.com/huggingface/transformers/pull/30573
- `contiguous()` in clip checkpoint conversion script by @ydshieh in https://github.com/huggingface/transformers/pull/30613
- `generate`-related rendering issues by @gante in https://github.com/huggingface/transformers/pull/30600
- `StoppingCriteria` autodocs by @gante in https://github.com/huggingface/transformers/pull/30617
- `SinkCache` on Llama models by @gante in https://github.com/huggingface/transformers/pull/30581
- `None` as attention when layer is skipped by @jonghwanhyeon in https://github.com/huggingface/transformers/pull/30597
- `TextGenerationPipeline._sanitize_parameters` from overriding previously provided parameters by @yting27 in https://github.com/huggingface/transformers/pull/30362
- [CI update] Try to use dockers and no cache by @ArthurZucker in https://github.com/huggingface/transformers/pull/29202
- `resume_download` deprecation by @Wauplin in https://github.com/huggingface/transformers/pull/30620
- `cache_position` initialisation for generation with `use_cache=False` by @nurlanov-zh in https://github.com/huggingface/transformers/pull/30485
- `forward` in `Idefics2ForConditionalGeneration` with correct `ignore_index` value by @zafstojano in https://github.com/huggingface/transformers/pull/30678
- `workflow_id` in `utils/get_previous_daily_ci.py` by @ydshieh in https://github.com/huggingface/transformers/pull/30695
- `prev_ci_results` to `ci_results` by @ydshieh in https://github.com/huggingface/transformers/pull/30697
- `model.active_adapters()` instead of deprecated `model.active_adapter` whenever possible by @younesbelkada in https://github.com/huggingface/transformers/pull/30738
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
This PR has been generated by Mend Renovate. View repository job log here.