Hi, thank you for your great work. I'm a little confused about the checkpoints posted in the repository. In the paper's "Pre-training Details" section, the pre-training dataset is 14M images, including COCO, Flickr, etc. Does that correspond to the checkpoint at https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base_14M.pth?
Also, were both model_base_14M and model_base (https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base.pth) trained with CapFilt?
Thank you for your help.