[1] Rush, A.M., Chopra, S. and Weston, J. (2015) A Neural Attention Model for Abstractive Sentence Summarization. Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, Lisbon, September 2015, 379-389.
[2] Ni, H.Q., Liu, D. and Shi, M.Y. (2020) Chinese Short Text Summary Generation Model Based on Semantic Awareness. Computer Science, 47, 74-78. (In Chinese)
[3] Ma, S., Sun, X., Xu, J., et al. (2017) Improving Semantic Relevance for Sequence-to-Sequence Learning of Chinese Social Media Text Summarization. Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, Vol. 2, 635-640.
[4] Devlin, J., Chang, M.W., Lee, K., et al. (2018) BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding. arXiv: 1810.04805.
[5] Vaswani, A., Shazeer, N., Parmar, N., et al. (2017) Attention Is All You Need. Annual Conference on Neural Information Processing Systems 2017, Long Beach, 4-9 December 2017, 5998-6008.
[6] Wang, Q., Liu, P., Zhu, Z., et al. (2019) A Text Abstraction Summary Model Based on BERT Word Embedding and Reinforcement Learning. Applied Sciences, 9, 4701.
[7] Wei, R., Huang, H. and Gao, Y. (2019) Sharing Pre-Trained BERT Decoder for a Hybrid Summarization. In: China National Conference on Chinese Computational Linguistics, Springer, Cham, 169-180.
[8] Liu, Y. and Lapata, M. (2019) Text Summarization with Pre-Trained Encoders. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, November 2019, 3730-3740.
[9] Cui, Y., Che, W., Liu, T., et al. (2019) Pre-Training with Whole Word Masking for Chinese BERT. arXiv: 1906.08101.
[10] Sun, Y., Wang, S., Li, Y., et al. (2019) ERNIE: Enhanced Representation through Knowledge Integration. arXiv: 1904.09223.
[11] He, K., Zhang, X., Ren, S., et al. (2016) Deep Residual Learning for Image Recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, 27-30 June 2016, 770-778.
[12] Williams, R.J. and Zipser, D. (1989) A Learning Algorithm for Continually Running Fully Recurrent Neural Networks. Neural Computation, 1, 270-280.
[13] He, T., Zhang, Z., Zhang, H., et al. (2019) Bag of Tricks for Image Classification with Convolutional Neural Networks. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, 16-17 June 2019, 558-567.
[14] Wu, Y., Schuster, M., Chen, Z., et al. (2016) Google’s Neural Machine Translation System: Bridging the Gap between Human and Machine Translation. arXiv: 1609.08144.
[15] Hu, B., Chen, Q. and Zhu, F. (2015) LCSTS: A Large Scale Chinese Short Text Summarization Dataset. Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, Lisbon, September 2015, 1967-1972.
[16] Lin, C.Y. (2004) ROUGE: A Package for Automatic Evaluation of Summaries. Workshop on Text Summarization Branches Out, Barcelona, 25-26 July 2004, 74-81.
[17] Gu, J., Lu, Z., Li, H., et al. (2016) Incorporating Copying Mechanism in Sequence-to-Sequence Learning. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, Vol. 1, 1631-1640.
[18] Lin, J., Sun, X., Ma, S., et al. (2018) Global Encoding for Abstractive Summarization. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, Vol. 2, 163-169.
[19] Qi, W., Gong, Y., Yan, Y., et al. (2021) ProphetNet-X: Large-Scale Pre-Training Models for English, Chinese, Multi-Lingual, Dialog, and Code Generation. Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations, 1-6 August 2021, 232-239.