---
datasets:
- squad_v2
language:
- en
library_name: transformers
pipeline_tag: question-answering
---
# MobileBERT fine-tuned on the SQuAD V2 dataset

This model is based on the MobileBERT architecture, which makes it suitable for mobile devices and other resource-constrained environments.

## Usage

Using the Transformers library, first load the model and tokenizer:
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_name = "aware-ai/mobilebert-squadv2"
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
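For reference, the loaded model and tokenizer can also be used directly, without the pipeline helper. Below is a minimal sketch of manual span extraction; the question and context strings are placeholders of my own, not from the model card:

```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_name = "aware-ai/mobilebert-squadv2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

# Placeholder inputs for illustration only.
question = "What architecture is this model based on?"
context = "This model is based on the MobileBERT architecture."

inputs = tokenizer(question, context, return_tensors="pt", truncation=True)
with torch.no_grad():
    outputs = model(**inputs)

# The model predicts logits for the start and end token of the answer span;
# taking the argmax of each gives the most likely span boundaries.
start = int(torch.argmax(outputs.start_logits))
end = int(torch.argmax(outputs.end_logits))
answer = tokenizer.decode(
    inputs["input_ids"][0][start : end + 1], skip_special_tokens=True
)
print(answer)
```

This is essentially what the question-answering pipeline does internally, minus its extra handling of long contexts and invalid spans.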
Then build a question answering pipeline:
```python
qa_engine = pipeline('question-answering', model=model, tokenizer=tokenizer)

QA_input = {
    'question': 'your question?',
    'context': 'your context ...'
}
res = qa_engine(QA_input)
```
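The pipeline returns a dict containing the answer text, a confidence score, and the character offsets of the answer within the context. A short sketch of inspecting the result, using placeholder question and context strings of my own; since SQuAD V2 includes unanswerable questions, the pipeline's `handle_impossible_answer` flag is also shown:

```python
from transformers import pipeline

qa_engine = pipeline("question-answering", model="aware-ai/mobilebert-squadv2")

res = qa_engine(
    question="Where is the Eiffel Tower located?",   # placeholder example
    context="The Eiffel Tower is located in Paris.",  # placeholder example
    # Allow the pipeline to return an empty answer when no span in the
    # context answers the question (the SQuAD V2 setting).
    handle_impossible_answer=True,
)
# res is a dict: the predicted answer text, a confidence score in [0, 1],
# and start/end character offsets into the context.
print(res["answer"], res["score"], res["start"], res["end"])
```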