[1] 张海涛, 张枭慧, 魏萍, 刘雅姝 (2020) Research Progress on Information Retrieval Behavior of Network Users. 情报科学 (Information Science), 38(5), 169-176. (In Chinese)
[2] 顾迎捷, 桂小林, 李德福, 沈毅, 廖东 (2020) Survey on Machine Reading Comprehension Based on Neural Networks. 软件学报 (Journal of Software), 31(7), 2095-2126. (In Chinese)
[3] Liu, S., Zhang, X., Zhang, S., et al. (2019) Neural Machine Reading Comprehension: Methods and Trends. Applied Sciences, 9, 3698.
[4] 李闪闪, 曹存根 (2013) Research on Methods for Analyzing Commonsense Knowledge of Event Preconditions and Consequences. 计算机科学 (Computer Science), (4), 185-192. (In Chinese)
[5] Weston, J., Bordes, A., Chopra, S., et al. (2015) Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks. https://arxiv.org/abs/1502.05698
[6] 杨丽, 吴雨茜, 王俊丽, 刘义理 (2018) A Survey of Recurrent Neural Networks. 计算机应用 (Journal of Computer Applications), 38(S2), 1-6. (In Chinese)
[7] 刘建伟, 刘媛, 罗雄麟 (2014) Research and Development on Deep Learning. 计算机应用研究 (Application Research of Computers), 31(7), 11. (In Chinese)
[8] Sundermeyer, M., Schlüter, R. and Ney, H. (2012) LSTM Neural Networks for Language Modeling. Proceedings of Interspeech 2012.
[9] Devlin, J., Chang, M.W., Lee, K., et al. (2019) BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Minneapolis, 4171-4186.
[10] Radford, A., Narasimhan, K., Salimans, T., et al. (2018) Improving Language Understanding by Generative Pre-Training. https://www.semanticscholar.org/paper/Improving-Language-Understanding-by-Generative-Radford-Narasimhan/cd18800a0fe0b668a1cc19f2ec95b5003d0a5035