[1] Li, L. (2018). Ethics of Artificial Intelligence and Big Data. Science Press. (in Chinese)
[2] Wallach, W., & Allen, C. (2017). Moral Machines: Teaching Robots Right from Wrong (X. Wang, Trans.). Peking University Press. (in Chinese)
[3] Yan, K. (2018). The Moral Risks of Artificial Intelligence and Paths to Their Avoidance. Journal of Shanghai Normal University (Philosophy & Social Sciences Edition), 47(2), 40-47. (in Chinese)
[4] Yuan, Z. (2019). An Exploratory Study of People's Expectations for the Moral Decision-Making of Autonomous Machines. Master's Thesis, Zhejiang University, Hangzhou. (in Chinese)
[5] Zeng, H. (2019). The World's First Fatal Self-Driving Car Accident: Uber Not Liable. https://www.pcauto.com.cn/news/1508/15088135.html (in Chinese)
[6] Arkin, R. C. (2016). Ethics and Autonomous Systems: Perils and Promises. Proceedings of the IEEE, 104, 1779-1781.
[7] Asimov, I. (1950). I, Robot. The Gnome Press.
[8] Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., Bonnefon, J.-F., & Rahwan, I. (2018). The Moral Machine Experiment. Nature, 563, 59-64.
[9] Deng, B. (2015). Machine Ethics: The Robot's Dilemma. Nature, 523, 24-26.
[10] Critcher, C. R., Inbar, Y., & Pizarro, D. A. (2013). How Quick Decisions Illuminate Moral Character. Social Psychological and Personality Science, 4, 308-315.
[11] Efendić, E., Van de Calseyde, P. P. F. M., & Evans, A. M. (2020). Slow Response Times Undermine Trust in Algorithmic (But Not Human) Predictions. Organizational Behavior and Human Decision Processes, 157, 103-114.
[12] Evans, A. M., & Van de Calseyde, P. P. F. M. (2017). The Effects of Observed Decision Time on Expectations of Extremity and Cooperation. Journal of Experimental Social Psychology, 68, 50-59.
[13] Everett, J. A. C., Pizarro, D. A., & Crockett, M. J. (2016). Inference of Trustworthiness from Intuitive Moral Judgments. Journal of Experimental Psychology: General, 145, 772-787.
[14] Gogoll, J., & Uhl, M. (2018). Rage against the Machine: Automation in the Moral Domain. Journal of Behavioral and Experimental Economics, 74, 97-103.
[15] Gray, H. M., Gray, K., & Wegner, D. M. (2007). Dimensions of Mind Perception. Science, 315, 619.
[16] Haidt, J., & Joseph, C. (2004). Intuitive Ethics: How Innately Prepared Intuitions Generate Culturally Variable Virtues. Daedalus, 133, 55-66.
[17] Killen, M., Rutland, A., Abrams, D., Mulvey, K. L., & Hitti, A. (2013). Development of Intra- and Intergroup Judgments in the Context of Moral and Social-Conventional Norms. Child Development, 84, 1063-1080.
[18] Levine, E. E., & Schweitzer, M. E. (2014). Are Liars Ethical? On the Tension between Benevolence and Honesty. Journal of Experimental Social Psychology, 53, 107-117.
[19] Levine, E. E., & Schweitzer, M. E. (2015). Prosocial Lies: When Deception Breeds Trust. Organizational Behavior and Human Decision Processes, 126, 88-106.
[20] Malle, B. F., Scheutz, M., Arnold, T., Voiklis, J., & Cusimano, C. (2015). Sacrifice One for the Good of Many? People Apply Different Moral Norms to Human and Robot Agents. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction (pp. 117-124). ACM.
[21] Meder, B., Fleischhut, N., Krumnau, N., & Waldmann, M. R. (2019). How Should Autonomous Cars Drive? A Preference for Defaults in Moral Judgments under Risk and Uncertainty. Risk Analysis, 39, 295-314.
[22] Ochs, E., & Izquierdo, C. (2009). Responsibility in Childhood: Three Developmental Trajectories. Ethos, 37, 391-413.
[23] Piazza, J., & Landy, J. F. (2013). "Lean Not on Your Own Understanding": Belief That Morality Is Founded on Divine Authority and Non-Utilitarian Moral Judgments. Judgment and Decision Making, 8, 639-661.
[24] Rahwan, I., Cebrian, M., Obradovich, N., Bongard, J., Bonnefon, J.-F., Breazeal, C., Wellman, M. et al. (2019). Machine Behaviour. Nature, 568, 477-486.
[25] Shen, S. (2011). The Curious Case of Human-Robot Morality. In Proceedings of the 6th ACM/IEEE International Conference on Human-Robot Interaction (pp. 249-250). ACM.
[26] Subburaman, R., Kanoulas, D., Muratore, L., Tsagarakis, N. G., & Lee, J. (2019). Human Inspired Fall Prediction Method for Humanoid Robots. Robotics and Autonomous Systems, 121, Article ID: 103257.