
Releasing the model on torch.hub? #11

Open · gmberton opened this issue Oct 8, 2023 · 11 comments

Labels: documentation (Improvements or additions to documentation), torch.hub (Model release on torch hub)

Comments

@gmberton (Contributor) commented Oct 8, 2023

Are there any plans to release the trained AnyLoc model on torch.hub? It is quite simple to do and lets people load your model with two lines of code, which helps more people use it and spreads your work!
For example, I did it for CosPlace: the trained model can be downloaded automatically from anywhere, without cloning the repo, like this:

import torch
model = torch.hub.load("gmberton/cosplace", "get_trained_model", backbone="ResNet50", fc_output_dim=2048)

I'd be happy to help if needed :-)

@TheProjectsGuy (Collaborator) commented Oct 10, 2023

Hey @gmberton,
Thanks for taking an interest in our work and the suggestion.

We're restructuring the repository (as we make some updates), and we'll surely keep this in mind. Releasing the cluster centers for easy use would certainly be helpful.
I believe torch.hub downloads the repository into ~/.cache/torch/hub/ and uses hubconf.py to expose the callable entry points. Your code and the torch.hub docs are very useful starting points. Creating a second repository with minimal code just for AnyLoc is probably the better option (just like how the demos are isolated).
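A minimal sketch of such a hubconf.py entry point (using a stand-in torchvision backbone and a placeholder weight URL, not the actual AnyLoc code):

# hubconf.py -- minimal sketch of a torch.hub entry point
dependencies = ["torch", "torchvision"]

import torch
import torchvision

def get_trained_model(device: str = "cpu"):
    # Callable as: torch.hub.load("<org>/<repo>", "get_trained_model", device=...)
    model = torchvision.models.resnet18()  # stand-in for the real backbone
    # A real release would fetch the published weights here, e.g.:
    # state = torch.hub.load_state_dict_from_url("https://example.com/weights.pth",
    #                                            map_location=device)
    # model.load_state_dict(state)
    return model.to(device).eval()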

Note for #7: torch.hub release (probably a separate repository).

@gmberton (Contributor, Author)

Hi @TheProjectsGuy, do you have an estimate of when this will be ready?
It would be amazing if it could be done some time before the CVPR deadline, as we'd like to run some experiments with AnyLoc for some papers from our group (and including the whole AnyLoc codebase in our own codebases is quite inconvenient).

@TheProjectsGuy (Collaborator)

Hi @gmberton,
Sorry for the delay; we were caught up in revising our work.
We plan to put out an initial release for torch.hub by this coming weekend (29th Oct). I'll be working on this in the coming days.

@TheProjectsGuy (Collaborator) commented Oct 28, 2023

Hey @gmberton,

We've made the first beta release for torch.hub in a separate repository (to keep the torch cache/downloads light): AnyLoc/DINO

Note: It's currently in beta (not fully developed yet), but we'll make sure the API stays backward compatible. Also, only AnyLoc-VLAD-DINOv2 is available as of now; we'll add the others soon.

Tutorial

import torch
model = torch.hub.load("AnyLoc/DINO", "get_vlad_model", 
        backbone="DINOv2", device="cuda")
# Images
img = torch.rand(1, 3, 224, 224)
# Result: VLAD descriptors of shape [1, 49152]
res = model(img)

It also supports batching

# Images
img = torch.rand(16, 3, 224, 224)
# Result: VLAD descriptors of shape [16, 49152]
res = model(img)

You can get more help from

print(torch.hub.list("AnyLoc/DINO"))
r = torch.hub.help("AnyLoc/DINO", "get_vlad_model")
print(r)

Also, please open issues about AnyLoc/DINO in this (current) repository since this is our main repository. We'll update the README with instructions after making more changes to the torch.hub release.

Let me know if you face any issues.

Edit: We will update here when the torch.hub release is stable; in the meanwhile, you might want to pass force_reload=True when loading or doing anything torch.hub related.
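For example (the same call as above, just forcing a fresh fetch of the repo):

import torch
# force_reload=True re-downloads the hub repo instead of reusing the local cache
model = torch.hub.load("AnyLoc/DINO", "get_vlad_model",
        backbone="DINOv2", device="cuda", force_reload=True)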

@TheProjectsGuy added the torch.hub (Model release on torch hub) label on Oct 28, 2023
@TheProjectsGuy self-assigned this on Oct 28, 2023
@gmberton (Contributor, Author)

This is super useful, thank you @TheProjectsGuy!
Just one question: is it normal that inference with DINOv2 is very slow? I tried it on different machines and it runs about 100x slower than ResNet50-based models.

@TheProjectsGuy (Collaborator)

Are you getting slower speeds on AnyLoc-VLAD-DINOv2 while using batching on torch.hub, or are you talking about the DINOv2 (CLS)?

@gmberton (Contributor, Author) commented Nov 1, 2023

Sorry for the lack of clarity in my previous message: I'm getting slower speeds on AnyLoc-VLAD-DINOv2.
I wrote a little script to test the speed:

import time
import torch
import torchvision

def timer(model, images, loop=100):
    images = images.cuda()
    _ = model(images)  # warm-up run before timing
    start_time = time.time()
    with torch.inference_mode():
        for i in range(loop):
            _ = model(images)
    end_time = time.time()
    print(f"It took {end_time - start_time} seconds")

anyloc = torch.hub.load("AnyLoc/DINO", "get_vlad_model", backbone="DINOv2", device="cuda")
resnet50 = torchvision.models.resnet50().eval().cuda()

images = torch.rand(4, 3, 224, 224)

timer(resnet50, images)
timer(anyloc, images)

Running this, the ResNet50 takes roughly 0.2 seconds, while AnyLoc takes about 10 seconds.
I'm just wondering whether this is expected or whether there is some issue on my side.
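(One caveat on my measurement: CUDA kernels run asynchronously, so wall-clock timing like the above can be imprecise. A variant of the same timer with explicit synchronization, in case it changes the numbers:)

import time
import torch

def timer_sync(model, images, loop=100):
    images = images.cuda()
    _ = model(images)          # warm-up run
    torch.cuda.synchronize()   # wait for the warm-up to finish
    start_time = time.time()
    with torch.inference_mode():
        for _ in range(loop):
            _ = model(images)
    torch.cuda.synchronize()   # wait for all queued GPU work before stopping the clock
    end_time = time.time()
    print(f"It took {end_time - start_time} seconds")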

@gmberton (Contributor, Author)

Hi, is there any news on this?
By the way, I've added AnyLoc to this VPR benchmarking repo; if you find any problem with AnyLoc in that repo, feel free to update it with a PR :-)

@gmberton (Contributor, Author)

Hi, you might find it useful to know that I tried the DINOv2-based SALAD and its speed is fine (only 4x slower than a ResNet50), so there might be some problem within AnyLoc's code. Anyway, the results are good; it's only a speed issue.

@TheProjectsGuy (Collaborator)

Hey @gmberton,
Thanks for pointing out the speed issue. We're looking into it, alongside some repository restructuring. I'll update this issue once we deliver a patch to the torch.hub pipeline.

@hmf21 commented Apr 13, 2024

> Hi, you might find it useful to know that I tried the DINOv2-based SALAD and its speed is fine (only 4x slower than a ResNet50), so there might be some problem within AnyLoc's code. Anyway, the results are good; it's only a speed issue.

Hey @gmberton,
It might be because SALAD is trained on the ViT-B model, while AnyLoc in this repo is based on the ViT-G model, which has about 10x the parameters.
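As a rough check of the size gap (this just queries the official DINOv2 backbones from facebookresearch/dinov2, not the AnyLoc wrappers):

import torch

# Official DINOv2 backbones released on torch.hub
vitb = torch.hub.load("facebookresearch/dinov2", "dinov2_vitb14")
vitg = torch.hub.load("facebookresearch/dinov2", "dinov2_vitg14")
print(f"ViT-B/14: {sum(p.numel() for p in vitb.parameters()) / 1e6:.0f}M parameters")
print(f"ViT-G/14: {sum(p.numel() for p in vitg.parameters()) / 1e6:.0f}M parameters")
# ViT-B/14 is roughly 86M parameters; ViT-g/14 is roughly 1.1B, hence the large runtime gap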

@TheProjectsGuy added the documentation (Improvements or additions to documentation) label on Jul 8, 2024