[1910.13461] BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
Computer Science > Computation and Language
arXiv:1910.13461 (cs)
[Submitted on 29 Oct 2019]
Title: BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
Authors: Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, Luke Zettlemoyer
Abstract: We present BART, a denoising autoencoder for pretraining sequence-to-sequence
models. BART is trained by (1) corrupting text with an arbitrary noising
function, and (2) learning a model to reconstruct the original text. It uses a
standard Transformer-based neural machine translation architecture which,
despite its simplicity, can be seen as generalizing BERT (due to the
bidirectional encoder), GPT (with the left-to-right decoder), and many other
more recent pretraining schemes. We evaluate a number of noising approaches,
finding the best performance by both randomly shuffling the order of the
original sentences and using a novel in-filling scheme, where spans of text are
replaced with a single mask token. BART is particularly effective when
fine-tuned for text generation but also works well for comprehension tasks. It
matches the performance of RoBERTa with comparable training resources on GLUE
and SQuAD, and achieves new state-of-the-art results on a range of abstractive
dialogue, question answering, and summarization tasks, with gains of up to 6
ROUGE. BART also provides a 1.1 BLEU increase over a back-translation system
for machine translation, with only target language pretraining. We also report
ablation experiments that replicate other pretraining schemes within the BART
framework, to better measure which factors most influence end-task performance.
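The two noising operations highlighted in the abstract, sentence shuffling and text infilling (each sampled span replaced by a single mask token), can be sketched roughly as below. This is an illustrative word-level sketch, not the authors' implementation: the 30% mask budget and Poisson span lengths follow settings reported in the paper, while the <mask> symbol, the 0.15 span-start probability, and the helper names are placeholders chosen for the example.

import random
import re
import numpy as np

MASK = "<mask>"  # placeholder mask symbol

def shuffle_sentences(text, rng=random):
    # Sentence permutation: split the document into sentences and shuffle their order.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    rng.shuffle(sentences)
    return " ".join(sentences)

def text_infilling(tokens, mask_ratio=0.3, poisson_lam=3.0, seed=0):
    # Text infilling: sampled spans of tokens are each replaced by ONE mask token;
    # a 0-length span amounts to inserting a mask. Word-level sketch only -- the
    # actual training operates on subword tokens.
    rng = np.random.default_rng(seed)
    budget = int(round(mask_ratio * len(tokens)))  # roughly 30% of tokens masked
    out, i, masked = [], 0, 0
    while i < len(tokens):
        if masked < budget and rng.random() < 0.15:  # arbitrary span-start probability
            span = int(rng.poisson(poisson_lam))
            out.append(MASK)            # the whole span collapses to a single mask
            i += span                   # skip the masked tokens (span may be 0)
            masked += max(span, 1)      # count insertions against the budget too
        else:
            out.append(tokens[i])
            i += 1
    return out

# The sequence-to-sequence model is then trained to reconstruct the original
# text from the corrupted version produced below.
original = ("BART is a denoising autoencoder. It corrupts text with a noising "
            "function. A sequence-to-sequence model reconstructs the original.")
corrupted = " ".join(text_infilling(shuffle_sentences(original).split()))
print(corrupted)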
Subjects:
Computation and Language (cs.CL); Machine Learning (cs.LG); Machine Learning (stat.ML)
Cite as: arXiv:1910.13461 [cs.CL]
(or arXiv:1910.13461v1 [cs.CL] for this version)
Submission history
From: Marjan Ghazvininejad
[v1] Tue, 29 Oct 2019 18:01:00 UTC (143 KB)