
Improving Language Understanding by Generative Pre-Training #16

@koptimizer

Description


📋 Please provide the paper's information.

  • Improving Language Understanding by Generative Pre-Training
  • Alec Radford et al.
  • OpenAI
  • 2018

📃 Abstract (original text)

Natural language understanding comprises a wide range of diverse tasks such as textual entailment, question answering, semantic similarity assessment, and document classification. Although large unlabeled text corpora are abundant, labeled data for learning these specific tasks is scarce, making it challenging for discriminatively trained models to perform adequately. We demonstrate that large gains on these tasks can be realized by generative pre-training of a language model on a diverse corpus of unlabeled text, followed by discriminative fine-tuning on each specific task. In contrast to previous approaches, we make use of task-aware input transformations during fine-tuning to achieve effective transfer while requiring minimal changes to the model architecture. We demonstrate the effectiveness of our approach on a wide range of benchmarks for natural language understanding. Our general task-agnostic model outperforms discriminatively trained models that use architectures specifically crafted for each task, significantly improving upon the state of the art in 9 out of the 12 tasks studied. For instance, we achieve absolute improvements of 8.9% on commonsense reasoning (Stories Cloze Test), 5.7% on question answering (RACE), and 1.5% on textual entailment (MultiNLI).
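To make the two-stage recipe in the abstract concrete (generative pre-training with a causal language-modeling loss, then discriminative fine-tuning that adds a task head on the same Transformer decoder), here is a minimal PyTorch sketch. This is not the paper's released code: the class name `TinyGPT`, the toy model sizes, and the helper methods are illustrative assumptions, and the auxiliary LM weight mirrors the paper's λ = 0.5 only loosely.

```python
# Minimal sketch of the GPT two-stage recipe (illustrative, not the paper's code):
# (1) pre-train a decoder-only Transformer with a next-token prediction loss,
# (2) fine-tune by adding a linear classification head on the final hidden state,
#     optionally keeping the LM loss as an auxiliary objective.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGPT(nn.Module):
    def __init__(self, vocab_size=10000, d_model=128, n_layers=2,
                 n_heads=4, max_len=64, n_classes=2):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        # An encoder stack run with a causal mask behaves as a decoder-only LM.
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)   # used during pre-training
        self.clf_head = nn.Linear(d_model, n_classes)   # added for fine-tuning

    def forward(self, tokens):
        T = tokens.size(1)
        pos = torch.arange(T, device=tokens.device)
        h = self.tok_emb(tokens) + self.pos_emb(pos)
        # Additive causal mask: -inf above the diagonal blocks attention to the future.
        causal = torch.triu(torch.full((T, T), float("-inf"),
                                       device=tokens.device), diagonal=1)
        return self.blocks(h, mask=causal)

    def lm_loss(self, tokens):
        # Pre-training objective L1: predict each token from its left context.
        h = self.forward(tokens[:, :-1])
        logits = self.lm_head(h)
        return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                               tokens[:, 1:].reshape(-1))

    def clf_loss(self, tokens, labels, lm_weight=0.5):
        # Fine-tuning objective: classify from the last hidden state, keeping the
        # LM loss as an auxiliary term (roughly L2 + lambda * L1 in the paper).
        h = self.forward(tokens)
        logits = self.clf_head(h[:, -1])
        return F.cross_entropy(logits, labels) + lm_weight * self.lm_loss(tokens)

# Toy usage: pre-train on unlabeled token streams, then fine-tune on labeled data.
model = TinyGPT()
tokens = torch.randint(0, 10000, (8, 32))   # fake unlabeled batch
labels = torch.randint(0, 2, (8,))          # fake task labels
pretrain_loss = model.lm_loss(tokens)
finetune_loss = model.clf_loss(tokens, labels)
```

Keeping the language-modeling loss as an auxiliary objective during fine-tuning is a design choice the paper reports as helping generalization and convergence; the task-aware input transformations (e.g., concatenating premise and hypothesis with delimiter tokens) are what let the same architecture handle entailment, similarity, and multiple-choice tasks.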

🔎 Please introduce the paper.

  • This is the paper behind GPT, which, alongside BERT, has stood as one of the two dominant pillars of NLP.

🔑 Please list the key keywords.

  • GPT, Pre-Training, Transformer Decoder, Fine-tuning

📎 URL

    Labels

    DL (Deep Learning), ML (Machine Learning), NLP (Natural Language Processing)
