Instructions to use MCG-NJU/videomae-base with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use MCG-NJU/videomae-base with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("video-classification", model="MCG-NJU/videomae-base")

# Load model directly
from transformers import AutoImageProcessor, AutoModelForPreTraining

processor = AutoImageProcessor.from_pretrained("MCG-NJU/videomae-base")
model = AutoModelForPreTraining.from_pretrained("MCG-NJU/videomae-base")
```

- Notebooks
- Google Colab
- Kaggle
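Beyond the snippets above, here is a minimal sketch (not taken from the model card) of running the pretraining model's forward pass end to end. It builds a randomly initialized VideoMAE base architecture from its default config, so no checkpoint download is needed; for real use you would load the weights with `from_pretrained("MCG-NJU/videomae-base")` as shown above.

```python
import torch
from transformers import VideoMAEConfig, VideoMAEForPreTraining

config = VideoMAEConfig()                      # defaults match the base-size model
model = VideoMAEForPreTraining(config).eval()  # random weights: architecture only, no download

# 16 frames of 224x224 RGB video: (batch, frames, channels, height, width)
pixel_values = torch.randn(
    1, config.num_frames, 3, config.image_size, config.image_size
)

# One boolean entry per patch: (frames / tubelet_size) * (224 / 16)^2 = 8 * 196 = 1568
num_patches = (config.num_frames // config.tubelet_size) * (
    config.image_size // config.patch_size
) ** 2
bool_masked_pos = torch.zeros(1, num_patches, dtype=torch.bool)
bool_masked_pos[:, : int(0.9 * num_patches)] = True  # mask 90% of patches, as in the paper

with torch.no_grad():
    outputs = model(pixel_values=pixel_values, bool_masked_pos=bool_masked_pos)

print(outputs.loss)  # scalar reconstruction loss over the masked patches
```

With random weights the loss value is meaningless; the point is only to show the expected input shapes and the `bool_masked_pos` argument that the pretraining head requires.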
VideoMAEV2 (#2), opened by tljstewart
Is this VideoMAEv2? Is it supported on Hugging Face?
Hi,
No, this is VideoMAE v1. Does v2 have the same architecture?
Hi Nielsr, thanks for the reply
I believe it's the same, except that VideoMAEv2 adds decoder masking on top of VideoMAE's encoder masking. I'm still looking into it; is there any more information you might need, or any ideas on what I should look for? I'm going to try printing a model summary and see what I get.
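As background for the masking difference discussed here: VideoMAE v1 samples a "tube" mask, one spatial mask repeated across every temporal index, and applies it only to the encoder input, while the VideoMAE v2 paper additionally masks the decoder. (Note also that `model.summary()` is a Keras method; for the PyTorch checkpoint you would use `print(model)` instead.) A toy NumPy sketch of tube masking, with an illustrative helper name that is not part of the library:

```python
import numpy as np

def tube_mask(num_temporal_tokens, spatial_patches, mask_ratio, rng):
    """Tube masking as in VideoMAE v1: sample one spatial mask and
    repeat it at every temporal index, so masked patches form
    'tubes' through time."""
    num_masked = int(mask_ratio * spatial_patches)
    spatial = np.zeros(spatial_patches, dtype=bool)
    spatial[rng.choice(spatial_patches, size=num_masked, replace=False)] = True
    return np.tile(spatial, (num_temporal_tokens, 1))  # (T, spatial_patches)

rng = np.random.default_rng(0)
mask = tube_mask(8, 196, 0.9, rng)  # 8 temporal slots of 14x14 = 196 patches
print(mask.shape, mask.mean())      # every row masks the same spatial positions
```

Since the same spatial positions are masked in every frame group, the model cannot cheat by copying a patch from a neighboring frame, which is what makes the high 90% masking ratio workable for video.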