How to use from the Transformers library
# Use a pipeline as a high-level helper
# Warning: Pipeline type "summarization" is no longer supported in transformers v5.
# You must load the model directly (see below) or downgrade to v4.x with:
#   pip install "transformers<5.0.0"
from transformers import pipeline

pipe = pipeline("summarization", model="AlgorithmicResearchGroup/led_base_16384_arxiv_summarization")
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("AlgorithmicResearchGroup/led_base_16384_arxiv_summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("AlgorithmicResearchGroup/led_base_16384_arxiv_summarization")

Introduction

A led-base-16384 model for summarizing arXiv papers: inputs are the full documents and abstracts of papers, and outputs are the generated summaries.

The model is based on AllenAI's Longformer Encoder-Decoder (LED). As described in Longformer: The Long-Document Transformer by Iz Beltagy, Matthew E. Peters, and Arman Cohan, led-base-16384 was initialized from bart-base, since both models share the exact same architecture. To process 16K tokens, bart-base's position embedding matrix was simply copied 16 times.
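The position-embedding trick above can be sketched with plain NumPy: bart-base has 1024 learned positions, and tiling that matrix 16 times covers the 16 * 1024 = 16384 positions LED needs. This is an illustration of the idea (with illustrative shapes), not the actual conversion script:

```python
import numpy as np

# bart-base: 1024 positions, hidden size 768 (illustrative values)
bart_pos_emb = np.random.randn(1024, 768)

# Copy the position-embedding matrix 16 times along the position axis,
# yielding the 16384 positions used by led-base-16384.
led_pos_emb = np.tile(bart_pos_emb, (16, 1))

print(led_pos_emb.shape)  # (16384, 768)
```

Every block of 1024 rows in the tiled matrix is an exact copy of the original embeddings, so positions beyond 1024 start from the same learned values.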

Rouge 2

Type       Score
precision  0.1839148953011932
recall     0.14904707945189774
fmeasure   0.1580026685776864
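ROUGE-2 measures bigram overlap between a generated summary and a reference. The scores above were presumably computed with a standard package (such as rouge_score); the dependency-free helper below is a minimal sketch of what the metric does:

```python
from collections import Counter

def rouge2(reference: str, candidate: str) -> dict:
    """Bigram-overlap precision, recall and F-measure (illustrative helper)."""
    def bigrams(text):
        tokens = text.lower().split()
        return Counter(zip(tokens, tokens[1:]))

    ref, cand = bigrams(reference), bigrams(candidate)
    # Counter intersection keeps the minimum count of each shared bigram.
    overlap = sum((ref & cand).values())
    precision = overlap / max(sum(cand.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    fmeasure = (2 * precision * recall / (precision + recall)) if overlap else 0.0
    return {"precision": precision, "recall": recall, "fmeasure": fmeasure}

scores = rouge2("the model summarizes long papers", "the model summarizes papers")
print(scores)
```

Note that the reported table averages per-example scores over a test set, so its fmeasure is not simply derivable from the averaged precision and recall.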
Model size: 0.2B params (Safetensors, tensor type F32)
