Instructions for using VMware/roberta-large-mrqa with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use VMware/roberta-large-mrqa with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("question-answering", model="VMware/roberta-large-mrqa")

# Load model directly
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("VMware/roberta-large-mrqa")
model = AutoModelForQuestionAnswering.from_pretrained("VMware/roberta-large-mrqa")
```
- Notebooks
- Google Colab
- Kaggle
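Once the question-answering pipeline from the Transformers snippet above is loaded, it can extract an answer span from a context passage. A minimal usage sketch (the question and context strings are illustrative, not from the model card):

```python
from transformers import pipeline

# Question-answering pipeline backed by the MRQA-tuned RoBERTa-large checkpoint
pipe = pipeline("question-answering", model="VMware/roberta-large-mrqa")

# Illustrative inputs; any question/context pair is passed the same way
result = pipe(
    question="What does MRQA stand for?",
    context="MRQA stands for Machine Reading for Question Answering, a shared task "
            "that collects several extractive QA datasets in a common format.",
)

# The pipeline returns the extracted span, its character offsets, and a score
print(result["answer"], result["score"])
```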
The repository's tokenizer configuration (381 Bytes, revision d1bb615):

```json
{
"add_prefix_space": false,
"bos_token": "<s>",
"cls_token": "<s>",
"eos_token": "</s>",
"errors": "replace",
"mask_token": "<mask>",
"model_max_length": 512,
"name_or_path": "roberta-large",
"pad_token": "<pad>",
"sep_token": "</s>",
"special_tokens_map_file": null,
"tokenizer_class": "RobertaTokenizer",
"trim_offsets": true,
"unk_token": "<unk>"
}
```
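These settings are applied automatically when the tokenizer is loaded from the repository. A small sketch of how they surface at runtime (the input strings below are illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("VMware/roberta-large-mrqa")

# Values from the configuration above are reflected on the loaded tokenizer
assert tokenizer.model_max_length == 512
assert tokenizer.cls_token == "<s>" and tokenizer.sep_token == "</s>"

# Question/context pairs longer than 512 tokens must be truncated (or split into
# overlapping chunks) before being fed to the model
enc = tokenizer(
    "What does MRQA stand for?",
    "MRQA stands for Machine Reading for Question Answering.",
    truncation=True,
    max_length=tokenizer.model_max_length,
)
print(tokenizer.convert_ids_to_tokens(enc["input_ids"]))
```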