@songying
2018-08-21T15:48:09.000000Z
squad-model
This post introduces the Reinforced Mnemonic Reader, which enhances previous attentive readers in two ways:
1. A reattention mechanism is proposed to refine current attentions by directly accessing past attentions that are temporally memorized in a multi-round alignment architecture, so as to avoid the problems of attention redundancy and attention deficiency.
2. A new optimization approach, called dynamic-critical reinforcement learning, is introduced to extend the standard supervised method.
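A minimal sketch of the reattention idea in point 1, assuming a simple additive combination of the current similarity scores with the memorized attention from the previous alignment round (the paper's exact combination rule differs; `gamma` here stands in for a trainable scalar):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def reattention(current_logits, past_attention, gamma=0.5):
    """Refine current attention using past attention (illustrative sketch).

    current_logits: (m, n) similarity scores between m context words
                    and n question words in the current alignment round.
    past_attention: (m, n) attention distribution memorized from the
                    previous round.
    gamma: a fixed scalar here; trainable in the actual model.
    """
    # Bias the current scores toward (or away from) previously attended
    # positions, so later rounds can see what earlier rounds did.
    refined_logits = current_logits + gamma * past_attention
    return softmax(refined_logits, axis=-1)
```

Each row of the result is still a proper distribution over question words, but it is conditioned on where attention was placed in the previous round, which is what lets the model counteract redundancy and deficiency.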
Limitations of existing approaches:
1. To capture complex interactions between the context and the question, a variety of neural attention mechanisms, such as bi-attention, have been proposed in single-round alignment architectures. To fully compose the complete information of the inputs, multi-round alignment architectures that compute attentions repeatedly have also been proposed. However, in these methods, the attentions computed in earlier rounds are discarded and inaccessible to later rounds, which gives rise to the problems of attention redundancy and attention deficiency.
2. Standard supervised training maximizes the likelihood of exact answer boundaries, which does not directly optimize the span-level evaluation metric; this mismatch is what motivates extending the supervised objective with reinforcement learning.
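The dynamic-critical reinforcement learning mentioned above can be sketched in a self-critical REINFORCE style: two candidate answers are evaluated, and the higher-reward one is dynamically taken as the target while the other serves as the baseline. The function names (`reward_fn`, `log_prob_fn`) are placeholders, not the paper's API:

```python
def dcrl_loss(sampled_answer, greedy_answer, reward_fn, log_prob_fn):
    """Toy sketch of a dynamic-critical RL step.

    In plain self-critical training, the greedily decoded answer is always
    the baseline. Here the critic is chosen dynamically: whichever candidate
    scores higher under the reward (e.g. span F1) becomes the target, and
    the other becomes the baseline.
    """
    r_sampled = reward_fn(sampled_answer)
    r_greedy = reward_fn(greedy_answer)
    if r_sampled >= r_greedy:
        target, target_r, baseline_r = sampled_answer, r_sampled, r_greedy
    else:
        target, target_r, baseline_r = greedy_answer, r_greedy, r_sampled
    # REINFORCE-style loss: advantage-weighted negative log-likelihood
    # of the dynamically chosen target answer.
    advantage = target_r - baseline_r
    return -advantage * log_prob_fn(target)
```

With table-lookup stand-ins for the reward and log-probability, `dcrl_loss` always pushes probability mass toward the better-scoring candidate, regardless of which decoding produced it.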