[1] Schank, R.C. and Abelson, R.P. (1978) Scripts, Plans, Goals, and Understanding: An Inquiry into Human Knowledge Structures. Language, 54, 779. https://doi.org/10.2307/412850
[2] Berant, J., et al. (2013) Semantic Parsing on Freebase from Question-Answer Pairs. Proceedings of the 2013 Conference on EMNLP, Seattle, October 2013, 1533-1544.
[3] Hermann, K.M., et al. (2015) Teaching Machines to Read and Comprehend.
[4] Lehnert, W.G. (1977) The Process of Question Answering. PhD Thesis, Yale University, New Haven.
[5] Hirschman, L., et al. (1999) Deep Read: A Reading Comprehension System. Proceedings of the 37th Conference on ACL, Maryland, June 1999, 325-332. https://doi.org/10.3115/1034678.1034731
[6] Richardson, M., et al. (2013) MCTest: A Challenge Dataset for the Open-Domain Machine Comprehension of Text. In: Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, Stroudsburg, 193-203.
[7] Narasimhan, K. and Barzilay, R. (2015) Machine Comprehension with Discourse Relations. Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, Volume 1, 1253-1262. https://doi.org/10.3115/v1/P15-1121
[8] Sachan, M., et al. (2015) Learning Answer-Entailing Structures for Machine Comprehension. Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, Volume 1, 239-249. https://doi.org/10.3115/v1/P15-1024
[9] Wang, H., et al. (2015) Machine Comprehension with Syntax, Frames, and Semantics. Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, Volume 2, Beijing, July 2015, 700-706.
[10] Rajpurkar, P., et al. (2016) SQuAD: 100,000+ Questions for Machine Comprehension of Text. Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Austin, November 2016, 2383-2392. https://doi.org/10.18653/v1/D16-1264
[11] Joshi, M., et al. (2017) TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension. Proceedings of the 55th Conference on ACL, Vancouver, July 2017, 1601-1611. https://doi.org/10.18653/v1/P17-1147
[12] Trischler, A., et al. (2017) NewsQA: A Machine Comprehension Dataset. Proceedings of the 2nd Workshop on Representation Learning for NLP, Vancouver, August 2017, 191-200. https://doi.org/10.18653/v1/W17-2623
[13] Dunn, M., et al. (2017) SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine.
[14] Shao, C.C., et al. (2018) DRCD: A Chinese Machine Reading Comprehension Dataset.
[15] Cui, Y., et al. (2019) A Span-Extraction Dataset for Chinese Machine Reading Comprehension. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, November 2019, 5883-5889. https://doi.org/10.18653/v1/D19-1600
[16] Duan, X., et al. (2019) CJRC: A Reliable Human-Annotated Benchmark DataSet for Chinese Judicial Reading Comprehension. In: China National Conference on Chinese Computational Linguistics, Springer, Cham, 439-451. https://doi.org/10.1007/978-3-030-32381-3_36
[17] Lai, G., et al. (2017) RACE: Large-Scale Reading Comprehension Dataset from Examinations. Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Copenhagen, September 2017, 785-794. https://doi.org/10.18653/v1/D17-1082
[18] Hill, F., et al. (2015) The Goldilocks Principle: Reading Children’s Books with Explicit Memory Representations.
[19] Cui, Y., et al. (2016) Consensus Attention-Based Neural Networks for Chinese Reading Comprehension. Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, Osaka, December 2016, 1777-1786.
[20] Kočiský, T., et al. (2018) The NarrativeQA Reading Comprehension Challenge. Transactions of the Association for Computational Linguistics, 6, 317-328. https://doi.org/10.1162/tacl_a_00023
[21] Nguyen, T., et al. (2016) MS MARCO: A Human Generated Machine Reading Comprehension Dataset.
[22] He, W., et al. (2018) DuReader: A Chinese Machine Reading Comprehension Dataset from Real-World Applications. Proceedings of the Workshop on Machine Reading for Question Answering, Melbourne, July 2018, 37-46. https://doi.org/10.18653/v1/W18-2605
[23] Yang, Z., et al. (2018) HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, October-November 2018, 2369-2380. https://doi.org/10.18653/v1/D18-1259
[24] Riloff, E. and Thelen, M. (2000) A Rule-Based Question Answering System for Reading Comprehension Tests. Proceedings of the 2000 ANLP/NAACL Workshop on Reading Comprehension Tests as Evaluation for Computer-Based Language Understanding Systems, Volume 6, 13-19. https://doi.org/10.3115/1117595.1117598
[25] Poon, H., et al. (2010) Machine Reading at the University of Washington. NAACL HLT First International Workshop on Formalisms and Methodology for Learning by Reading, Los Angeles, June 2010, 87-95.
[26] Berant, J., et al. (2014) Modeling Biological Processes for Reading Comprehension. In: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Association for Computational Linguistics, Stroudsburg, 1499-1510. https://doi.org/10.3115/v1/D14-1159
[27] Chen, D., Bolton, J. and Manning, C.D. (2016) A Thorough Examination of the CNN/Daily Mail Reading Comprehension Task. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, Volume 1, 2358-2367. https://doi.org/10.18653/v1/P16-1223
[28] Wang, S. and Jiang, J. (2016) Machine Comprehension Using Match-LSTM and Answer Pointer.
[29] Chen, D., et al. (2017) Reading Wikipedia to Answer Open-Domain Questions. Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, Volume 1, 1870-1879. https://doi.org/10.18653/v1/P17-1171
[30] Wang, W., et al. (2017) Gated Self-Matching Networks for Reading Comprehension and Question Answering. Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, Volume 1, 189-198. https://doi.org/10.18653/v1/P17-1018
[31] Yu, A.W., et al. (2018) QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension.
[32] Rajpurkar, P., Jia, R. and Liang, P. (2018) Know What You Don’t Know: Unanswerable Questions for SQuAD. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, Volume 2, 784-789. https://doi.org/10.18653/v1/P18-2124
[33] Yang, Z., et al. (2019) XLNet: Generalized Autoregressive Pretraining for Language Understanding.
[34] Mikolov, T., et al. (2013) Efficient Estimation of Word Representations in Vector Space.
[35] Pennington, J., Socher, R. and Manning, C. (2014) GloVe: Global Vectors for Word Representation. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, October 2014, 1532-1543. https://doi.org/10.3115/v1/D14-1162
[36] Bojanowski, P., et al. (2017) Enriching Word Vectors with Subword Information. Transactions of the Association for Computational Linguistics, 5, 135-146. https://doi.org/10.1162/tacl_a_00051
[37] McCann, B., et al. (2017) Learned in Translation: Contextualized Word Vectors.
[38] Peters, M., et al. (2018) Deep Contextualized Word Representations. Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1, 2227-2237. https://doi.org/10.18653/v1/N18-1202
[39] Vaswani, A., et al. (2017) Attention Is All You Need.
[40] Dai, Z., et al. (2019) Transformer-XL: Attentive Language Models beyond a Fixed-Length Context. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, July 2019, 2978-2988. https://doi.org/10.18653/v1/P19-1285
[41] Williams, R.J. and Zipser, D. (1989) A Learning Algorithm for Continually Running Fully Recurrent Neural Networks. Neural Computation, 1, 270-280. https://doi.org/10.1162/neco.1989.1.2.270
[42] Hochreiter, S. and Schmidhuber, J. (1997) Long Short-Term Memory. Neural Computation, 9, 1735-1780. https://doi.org/10.1162/neco.1997.9.8.1735
[43] Cho, K., et al. (2014) Learning Phrase Representations Using RNN Encoder-Decoder for Statistical Machine Translation. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, October 2014, 1724-1734. https://doi.org/10.3115/v1/D14-1179
[44] LeCun, Y. and Bengio, Y. (1995) Convolutional Networks for Images, Speech, and Time-Series. In: Arbib, M.A., Ed., The Handbook of Brain Theory and Neural Networks, MIT Press, Cambridge, MA, 255-258.
[45] Wang, W., et al. (2017) Gated Self-Matching Networks for Reading Comprehension and Question Answering. Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, Volume 1, 189-198. https://doi.org/10.18653/v1/P17-1018