Hello,

Thanks for the work. I'm not good at programming, so please let me know if this question even makes sense.

I have tried image captioning before, and normally I can get captions for any random image at test time. How do I do the same for video captioning with your trained model? An example would help, since your dataset contains no videos, only the pre-extracted features and captions.
Hi @Special256, thanks for your interest in V2C. Unfortunately, the current implementation cannot be extended to other datasets out of the box, because it requires the video features to be pre-extracted.
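For what it's worth, here is a rough sketch of what that pre-extraction step could look like for a new video: sample frames uniformly, run each through a pretrained image CNN, and stack the pooled activations into a feature matrix. The backbone (ResNet-152), frame count, and output layout below are assumptions for illustration only and may not match what V2C actually expects; the repo's own preprocessing scripts are the authoritative reference.

```python
# Sketch: per-frame feature extraction for a raw video.
# NOTE: the backbone, num_frames, and feature layout are assumptions;
# check the repo's preprocessing code for the exact format V2C uses.
import cv2
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T

def extract_features(video_path, num_frames=40, device="cpu"):
    # Load a pretrained ResNet-152 and drop its classification head,
    # so each frame yields a 2048-d pooled feature vector.
    backbone = models.resnet152(weights=models.ResNet152_Weights.IMAGENET1K_V1)
    backbone.fc = torch.nn.Identity()
    backbone.eval().to(device)

    preprocess = T.Compose([
        T.ToPILImage(),
        T.Resize(256),
        T.CenterCrop(224),
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406],
                    std=[0.229, 0.224, 0.225]),
    ])

    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    # Sample num_frames frame indices uniformly across the video.
    indices = np.linspace(0, total - 1, num_frames).astype(int)

    feats = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if not ok:
            continue
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        x = preprocess(frame).unsqueeze(0).to(device)
        with torch.no_grad():
            feats.append(backbone(x).squeeze(0).cpu().numpy())
    cap.release()
    return np.stack(feats)  # shape: (num_frames, 2048)

# features = extract_features("my_clip.mp4")
# np.save("my_clip_features.npy", features)
```

You would then need to feed the saved feature matrix to the captioning model in the same way the released dataset features are loaded, which depends on the model's data-loading code.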