[1] Zhang, Q. and Li, H. (2007) MOEA/D: A Multiobjective Evolutionary Algorithm Based on Decomposition. IEEE Transactions on Evolutionary Computation, 11, 712-731.
[2] Almeida, J., Alam, M., Ferreira, J. and Oliveira, A.S. (2016) Mitigating Adjacent Channel Interference in Vehicular Communication Systems. Digital Communications and Networks, 2, 57-64.
[3] Bernstein, A.V., Burnaev, E.V. and Kachan, O.N. (2018) Reinforcement Learning for Computer Vision and Robot Navigation. In: Perner, P., Ed., Machine Learning and Data Mining in Pattern Recognition. MLDM 2018. Lecture Notes in Computer Science, Vol. 10935, Springer, Cham, 258-272.
[4] Molina-Masegosa, R., Gozalvez, J. and Sepulcre, M. (2020) Comparison of IEEE 802.11p and LTE-V2X: An Evaluation with Periodic and Aperiodic Messages of Constant and Variable Size. IEEE Access, 8, 121526-121548.
[5] Zhao, Q., Tong, L., Swami, A. and Chen, Y. (2007) Decentralized Cognitive MAC for Opportunistic Spectrum Access in Ad Hoc Networks: A POMDP Framework. IEEE Journal on Selected Areas in Communications, 25, 589-600.
[6] Nasir, Y.S. and Guo, D. (2019) Multi-Agent Deep Reinforcement Learning for Dynamic Power Allocation in Wireless Networks. IEEE Journal on Selected Areas in Communications, 37, 2239-2250.
[7] Cui, J., Liu, Y. and Nallanathan, A. (2019) Multi-Agent Reinforcement Learning-Based Resource Allocation for UAV Networks. IEEE Transactions on Wireless Communications, 19, 729-743.
[8] Wijesiri N.B.A., G.P., Haapola, J. and Samarasinghe, T. (2019) A Markov Perspective on C-V2X Mode 4. 2019 IEEE 90th Vehicular Technology Conference (VTC2019-Fall), Honolulu, 22-25 September 2019, 1-6.
[9] Jeon, Y., Kuk, S. and Kim, H. (2018) Reducing Message Collisions in Sensing-Based Semi-Persistent Scheduling (SPS) by Using Reselection Lookaheads in Cellular V2X. Sensors, 18, Article No. 4388.
[10] Honnaiah, P.J., Maturo, N. and Chatzinotas, S. (2020) Foreseeing Semi-Persistent Scheduling in Mode-4 for 5G Enhanced V2X Communication. 2020 IEEE 17th Annual Consumer Communications & Networking Conference (CCNC), Las Vegas, 10-13 January 2020, 1-2.
[11] Heo, S., Yoo, W., Jang, H. and Chung, J.-M. (2021) H-V2X Mode 4 Adaptive Semipersistent Scheduling Control for Cooperative Internet of Vehicles. IEEE Internet of Things Journal, 8, 10678-10692.
[12] Bonjorn, N., Foukalas, F. and Pop, P. (2018) Enhanced 5G V2X Services Using Sidelink Device-to-Device Communications. 2018 17th Annual Mediterranean Ad Hoc Networking Workshop (Med-Hoc-Net), Capri, 20-22 June 2018, 1-7.
[13] Yu, X., Chen, X.D., Wang, Z. and Shi, X.Q. (2021) Resource Allocation Algorithm for Internet of Vehicles Based on LTE-V2X. Computer Engineering, 47, 188-193. (In Chinese)
[14] Jin, J.Y. and Qiu, G.A. (2020) Joint Optimization of Resource Allocation and Power Control in C-V2X Communication. Computer Engineering, 47, 147-152. (In Chinese)
[15] Liang, L., Ye, H. and Li, G.Y. (2019) Spectrum Sharing in Vehicular Networks Based on Multi-Agent Reinforcement Learning. IEEE Journal on Selected Areas in Communications, 37, 2282-2292.
[16] Gupta, J.K., Egorov, M. and Kochenderfer, M. (2017) Cooperative Multi-Agent Control Using Deep Reinforcement Learning. In: Sukthankar, G. and Rodriguez-Aguilar, J., Eds., Autonomous Agents and Multiagent Systems. AAMAS 2017. Lecture Notes in Computer Science, Vol. 10642, Springer, Cham, 66-83.
[17] Bazzi, A., Cecchini, G., Menarini, M., Masini, B.M. and Zanella, A. (2019) Survey and Perspectives of Vehicular Wi-Fi versus Sidelink Cellular-V2X in the 5G Era. Future Internet, 11, Article No. 122.
[18] (2011) 3rd Generation Partnership Project; Technical Specification Group Radio Access Network; Evolved Universal Terrestrial Radio Access (E-UTRA); Physical Layer Procedures.
http://www.arib.or.jp/english/html/overview/doc/STD-T104v1_30/5_Appendix/Rel10/36/36213-a60.pdf
[19] Zhang, K., Yang, Z. and Başar, T. (2021) Multi-Agent Reinforcement Learning: A Selective Overview of Theories and Algorithms. In: Vamvoudakis, K.G., Wan, Y., Lewis, F.L. and Cansever, D., Eds., Handbook of Reinforcement Learning and Control. Studies in Systems, Decision and Control, Vol. 325, Springer, Cham, 321-384.
[20] Hernandez-Leal, P., Kaisers, M., Baarslag, T. and de Cote, E.M. (2017) A Survey of Learning in Multiagent Environments: Dealing with Non-Stationarity. arXiv preprint arXiv:1707.09183.
[21] Naparstek, O. and Cohen, K. (2018) Deep Multi-User Reinforcement Learning for Distributed Dynamic Spectrum Access. IEEE Transactions on Wireless Communications, 18, 310-323.
[22] Schroeder de Witt, C., Foerster, J., Farquhar, G., et al. (2019) Multi-Agent Common Knowledge Reinforcement Learning. In: Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E. and Garnett, R., Eds., Advances in Neural Information Processing Systems 32 (NeurIPS 2019), Curran Associates, Inc., Red Hook.
[23] Van Hasselt, H., Guez, A. and Silver, D. (2016) Deep Reinforcement Learning with Double Q-Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 30, 2094-2100.
[24] Foerster, J., Assael, I.A., De Freitas, N. and Whiteson, S. (2016) Learning to Communicate with Deep Multi-Agent Reinforcement Learning. In: Lee, D., Sugiyama, M., Luxburg, U., Guyon, I. and Garnett, R., Eds., Advances in Neural Information Processing Systems 29 (NIPS 2016), Curran Associates, Inc., Red Hook.
[25] Li, M., Lu, F., Zhang, H. and Chen, J. (2020) Predicting Future Locations of Moving Objects with Deep Fuzzy-LSTM Networks. Transportmetrica A: Transport Science, 16, 119-136.
[26] Feng, J., Li, Y., Zhang, C., et al. (2018) DeepMove: Predicting Human Mobility with Attentional Recurrent Networks. Proceedings of the 2018 World Wide Web Conference, Lyon, 23-27 April 2018, 1459-1468.
[27] Sukhbaatar, S. and Fergus, R. (2016) Learning Multiagent Communication with Backpropagation. In: Lee, D., Sugiyama, M., Luxburg, U., Guyon, I. and Garnett, R., Eds., Advances in Neural Information Processing Systems 29 (NIPS 2016), Curran Associates, Inc., Red Hook.