Issues: TinyLLaVA/TinyLLaVA_Factory
Issues list
#11: Checkpoints using base method and LLaVA dataset
Opened Feb 29, 2024 by G-JWLee; updated Mar 1, 2024
#19: What is the difference between v1 and v1.1?
Opened Mar 11, 2024 by Eric-is-good; updated Mar 11, 2024
#24: Using merge_lora_weights.py raises AttributeError: 'TinyLlavaPhiConfig' object has no attribute 'attention_bias'
Opened Mar 15, 2024 by mengjiexu; updated Mar 15, 2024
#33: Was phi-2 fine-tuned on text instructions beforehand, as in llava-phi, or was the base model connected directly to the vision encoder for pretraining?
Opened Mar 20, 2024 by Yang-bug-star; updated Mar 20, 2024
#34: Did you train the whole LLM in the pretraining stage of the share recipe?
Opened Mar 20, 2024 by Yang-bug-star; updated Mar 20, 2024
#37: Is it possible to pretrain tinyllama-3b on 2 V100s?
Opened Mar 22, 2024 by Yang-bug-star; updated Mar 22, 2024
#50: Able to merge 1.5B model, but unable to run eval
Opened Apr 17, 2024 by tanveer-sayyed; updated Apr 19, 2024
#52: Any plans on training a new model based on Phi-3?
Opened Apr 26, 2024 by TheBobbyliu; updated Apr 28, 2024
#42: Can you share the finetune.sh and pretrain.sh used to train TinyLLaVA-1.5B?
Opened Apr 2, 2024 by fyting; updated May 23, 2024
#65: Is share_textvqa the same as textvqa in the llava dataset?
Opened May 24, 2024 by cooleel; updated May 25, 2024
#13: The loss is NaN when pretraining tinyllama using the share recipe
Opened Mar 4, 2024 by xushilin1; updated May 29, 2024
#69: What does "share" mean in the last line of the model performance table?
Opened May 29, 2024 by lijiannuist; updated May 29, 2024