@songying
2018-10-11T19:20:19.000000Z
pre-trained-language-model
We propose Universal Language Model Fine-tuning (ULMFiT) and introduce several techniques for fine-tuning language models.
Docs and code: http://nlp.fast.ai/category/classification.html
Dataset: the IMDB sentiment classification dataset
Our contributions:
1. We propose Universal Language Model Fine-tuning (ULMFiT).
2. We propose discriminative fine-tuning, slanted triangular learning rates, and gradual unfreezing, novel techniques to retain previous knowledge and avoid catastrophic forgetting during fine-tuning (see the PyTorch sketch after this list).
3. We significantly outperform the state-of-the-art on six representative text classification datasets, with an error reduction of 18-24% on the majority of datasets.
4. We show that our method enables extremely sample-efficient transfer learning and perform an extensive ablation analysis.
5. We make the pretrained models and our code available to enable wider adoption.
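To make contribution 2 concrete, here is a minimal PyTorch sketch of the three techniques on a toy classifier. It is not the paper's implementation: the backbone, layer sizes, and training-loop details are placeholder assumptions, while the per-layer decay factor 2.6 and the slanted triangular schedule (cut_frac=0.1, ratio=32) follow the hyperparameters reported in the paper.

```python
import math
import torch
import torch.nn as nn

# Hypothetical stand-in for the AWD-LSTM backbone ULMFiT actually uses;
# layer names and sizes here are illustrative only.
class Classifier(nn.Module):
    def __init__(self, vocab_size=10000, emb=400, hid=256, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.rnn1 = nn.LSTM(emb, hid, batch_first=True)
        self.rnn2 = nn.LSTM(hid, hid, batch_first=True)
        self.head = nn.Linear(hid, n_classes)

    def forward(self, x):
        h, _ = self.rnn1(self.embed(x))
        h, _ = self.rnn2(h)
        return self.head(h[:, -1])  # last hidden state (the paper uses concat pooling)

model = Classifier()
layers = [model.embed, model.rnn1, model.rnn2, model.head]

# 1) Discriminative fine-tuning: each layer gets its own learning rate,
#    divided by 2.6 per layer going down (the factor the paper reports).
base_lr, decay = 0.01, 2.6
param_groups = [
    {"params": l.parameters(), "lr": base_lr / decay ** (len(layers) - 1 - i)}
    for i, l in enumerate(layers)
]
optimizer = torch.optim.SGD(param_groups, lr=base_lr)

# 2) Slanted triangular learning rate: a short linear warm-up over
#    cut_frac of the T total steps, then a long linear decay.
def stlr_scale(t, T, cut_frac=0.1, ratio=32):
    cut = math.floor(T * cut_frac)
    if t < cut:
        p = t / cut
    else:
        p = 1 - (t - cut) / (cut * (1 / cut_frac - 1))
    return (1 + p * (ratio - 1)) / ratio  # multiplier on each group's lr

scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda t: stlr_scale(t, T=1000))

# 3) Gradual unfreezing: train only the head first, then unfreeze one
#    more layer per epoch, the most general (deepest) layers last.
frozen = layers[:-1]
for l in frozen:
    for p in l.parameters():
        p.requires_grad = False

for epoch in range(len(layers)):
    # ... one epoch of fine-tuning: loss.backward(); optimizer.step();
    #     scheduler.step(); optimizer.zero_grad() ...
    if frozen:  # unfreeze the next-deepest layer for the following epoch
        for p in frozen.pop().parameters():
            p.requires_grad = True
```

The point of combining the three is that lower layers, which hold the most general pretrained knowledge, both learn more slowly and start training later, which is what protects the pretrained weights from catastrophic forgetting.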