@songying · 2018-10-11T19:20:19.000000Z · 875 words · 1327 reads

Universal Language Model Fine-tuning for Text Classification

pre-trained-language-model


Abstract

We propose Universal Language Model Fine-tuning (ULMFiT) and introduce several techniques for fine-tuning language models.
Docs and code: http://nlp.fast.ai/category/classification.html

Dataset: the IMDB sentiment classification dataset

Introduction

Our contributions:
1. We propose Universal Language Model Fine-tuning (ULMFiT).
2. We propose discriminative fine-tuning, slanted triangular learning rates, and gradual unfreezing, novel techniques to retain previous knowledge and avoid catastrophic forgetting during fine-tuning.
3. We significantly outperform the state of the art on six representative text classification datasets, with an error reduction of 18-24% on the majority of datasets.
4. We show that our method enables extremely sample-efficient transfer learning and perform an extensive ablation analysis.
5. We make the pretrained models and our code available to enable wider adoption.
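The slanted triangular learning rate from contribution 2 warms up linearly for a short fraction of training and then decays linearly. A minimal sketch of that schedule, following the shape described in the paper; the function name and the default hyperparameters (`cut_frac=0.1`, `ratio=32`, `lr_max=0.01`) are illustrative choices:

```python
import math

def stlr(t, T, cut_frac=0.1, ratio=32, lr_max=0.01):
    """Slanted triangular learning rate at iteration t (0-based) of T total.

    Rises linearly from lr_max/ratio to lr_max over the first
    cut_frac fraction of iterations, then decays linearly back.
    """
    cut = math.floor(T * cut_frac)  # iteration at which the peak is reached
    if t < cut:
        p = t / cut                                  # warm-up phase
    else:
        p = 1 - (t - cut) / (cut * (1 / cut_frac - 1))  # decay phase
    return lr_max * (1 + p * (ratio - 1)) / ratio
```

With `T=1000` and the defaults, the rate starts at `lr_max/32`, peaks at `lr_max` at iteration 100, and decays back toward `lr_max/32` by the end.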

Related Work

  1. Transfer learning in CV
  2. Hypercolumns
  3. Fine-tuning
  4. Multi-task learning

Universal Language Model Fine-tuning
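The other two fine-tuning techniques named in the contributions can be sketched in a framework-agnostic way. Discriminative fine-tuning gives each lower layer a smaller learning rate than the layer above it (the paper suggests a factor of 2.6); gradual unfreezing starts by training only the top layer and unfreezes one more layer per epoch. The helper names below are illustrative, not from the paper's code:

```python
def discriminative_lrs(lr_top, n_layers, factor=2.6):
    """Per-layer learning rates, bottom layer first: each layer below
    the top is trained at 1/factor the rate of the layer above it."""
    return [lr_top / factor ** (n_layers - 1 - l) for l in range(n_layers)]

def trainable_layers(n_layers, epoch):
    """Gradual unfreezing: only the top layer trains in epoch 0;
    one additional layer is unfrozen in each subsequent epoch.
    Returns a bottom-first list of trainable flags."""
    n_unfrozen = min(epoch + 1, n_layers)
    return [l >= n_layers - n_unfrozen for l in range(n_layers)]
```

In a real training loop these would map onto optimizer parameter groups (one group per layer, with its own learning rate) and onto toggling `requires_grad` on each layer's parameters at the start of every epoch.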
