@songying
2019-03-19T12:06:55.000000Z
Reading Comprehension: The Primitive Era
This post reviews a survey-style paper on reading comprehension.
The paper divides existing models into two classes: aggregation readers and explicit reference readers.
In an independent contribution, we show that the addition of linguistic features to the input of existing neural readers significantly boosts performance, yielding the best results to date on the Who-did-What dataset.
Cloze-style corpora: CNN & Daily Mail, the Children's Book Test, and the Who-did-What dataset.
Current models fall into two classes: aggregation readers and explicit reference readers.
Aggregation readers include Memory Networks, the Attentive Reader, and the Stanford Reader. They use bidirectional LSTMs or GRUs to construct a contextual embedding $h_t$ of each position $t$ in the passage and also an embedding $q$ of the question, and then select an answer $c$ using a criterion similar to

$$\arg\max_{c \in A} \sum_t \langle h_t, q \rangle \, \langle h_t, e(c) \rangle$$

where $e(c)$ is the vector embedding of candidate answer $c$. In practice, $\langle h_t, q \rangle$ is normalized over $t$ with a softmax to give attention weights $\alpha_t$.
In other words, aggregation readers compute a vector representation of the passage using question-sensitive attention, and then select an answer based on that passage vector.
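To make the criterion above concrete, here is a minimal NumPy sketch of the aggregation-reader scoring step. It is an illustration only: the shapes and names (`h`, `q`, `cand_emb`, `aggregation_read`) are assumptions for this post, not anything from the paper.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D array
    e = np.exp(x - x.max())
    return e / e.sum()

def aggregation_read(h, q, cand_emb):
    """Score candidates with a question-attended passage vector.

    h:        (T, d) contextual embeddings h_t of the passage positions
    q:        (d,)   question embedding
    cand_emb: dict mapping each candidate c to its (d,) embedding e(c)
    """
    alpha = softmax(h @ q)      # attention weights alpha_t over positions t
    passage_vec = alpha @ h     # (d,) question-sensitive passage vector
    # argmax_c <passage_vec, e(c)>, equivalent to sum_t alpha_t <h_t, e(c)>
    scores = {c: float(passage_vec @ e) for c, e in cand_emb.items()}
    return max(scores, key=scores.get)
```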
Explicit reference readers
These include: the Attention Sum Reader, the Gated-Attention Reader, and the Attention-over-Attention Reader. They avoid computing a passage representation and instead select the answer directly from the positions in the passage where a candidate occurs, using a criterion of the form

$$\arg\max_{c \in A} \sum_{t \in R(c,\,p)} \alpha_t$$

where $\alpha_t$ are the attention weights over passage positions and $R(c, p)$ is the set of positions where candidate $c$ occurs in passage $p$.
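A matching sketch of the explicit-reference criterion, under the same illustrative assumptions: `alpha` holds the attention weights over passage positions, and `occurrences` plays the role of $R(c, p)$.

```python
import numpy as np

def explicit_reference_read(alpha, occurrences):
    """argmax_c of the summed attention over positions where c occurs.

    alpha:       (T,) attention weights over passage positions
    occurrences: dict mapping candidate c -> list of positions t where c occurs
    """
    scores = {c: float(np.sum(alpha[pos])) for c, pos in occurrences.items()}
    return max(scores, key=scores.get)
```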
- q: a question given as a sequence of words containing a special token for a "blank" to be filled in
- p: a document consisting of a sequence of words
- A: a set of possible answers
- a: the ground-truth answer, with a ∈ A.
The task can then be stated as selecting the answer $a \in A$, where $a$ is the answer to $q$ based on $p$.
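For concreteness, a toy instance in this (p, q, A, a) form might look like the following; the passage, question, and blank marker are all invented for illustration (real datasets use their own placeholder conventions, e.g. "@placeholder" in CNN & Daily Mail).

```python
# A toy cloze instance under the (p, q, A, a) formulation above.
instance = {
    "p": "mary went to the store . she bought some apples .".split(),
    "q": "XXXXX bought some apples".split(),  # XXXXX marks the blank token
    "A": ["mary", "store", "apples"],
    "a": "mary",
}
```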
We now walk through the readers in each of the two classes: aggregation readers and explicit reference readers.
The Stanford Reader uses a bidirectional LSTM to encode both the passage and the question.
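As a rough sketch of that encoding step (not the paper's actual configuration: the layer sizes, pooling choice, and names here are assumed), a bidirectional LSTM encoder in PyTorch might look like:

```python
import torch
import torch.nn as nn

class BiLSTMEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden_dim=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim,
                            batch_first=True, bidirectional=True)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) -> (batch, seq_len, 2 * hidden_dim)
        out, _ = self.lstm(self.emb(token_ids))
        return out

encoder = BiLSTMEncoder(vocab_size=10000)
h = encoder(torch.randint(0, 10000, (1, 50)))        # contextual embeddings h_t
q_states = encoder(torch.randint(0, 10000, (1, 12)))
q = q_states[:, -1, :]  # one simplistic pooling choice for the question
                        # vector q, used here purely for illustration
```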
Paper: 《Text Understanding with the Attention Sum Reader Network》: proposed for cloze-style questions. Datasets: CNN & Daily Mail, the Children's Book Test.
Paper: 《Gated-Attention Readers for Text Comprehension》
Paper: 《Attention-over-Attention Neural Networks for Reading Comprehension》