Issues: huggingface/transformers
Issues list

#35508 Reload Transformers imports [Feature request], opened Jan 4, 2025 by KareemMusleh
#35505 qwen2 rope device matching bug [bug], opened Jan 4, 2025 by developer0hye
#35480 Links in release note are broken [bug], opened Jan 2, 2025 by oraluben
#35471 Unknown quantization type, got fp8 [bug], opened Dec 31, 2024 by ruidazeng
#35467 Support SDPA & Flash Attention 2 for LayoutLMv3 [Feature request], opened Dec 31, 2024 by stancld
#35460 8bits GPTQ quantization output [bug], opened Dec 30, 2024 by joshuaongg21
#35449 Support Constant Learning Rate with Cooldown [Feature request], opened Dec 29, 2024 by LoserCheems
#35447 Tokenizer does not split text according to newly added input tokens [bug, Core: Tokenization], opened Dec 29, 2024 by jiongjiongli
#35446 tokenizer should be replaced to processing_class in Seq2SeqTrainer? [bug, Core: Tokenization, trainer], opened Dec 29, 2024 by zzaebok