Governance Studies (治理研究) ›› 2023, Vol. 39 ›› Issue (3): 118-129.

• Special Topic: Social Governance and Artificial Intelligence •

Social Justice Risks in the Age of AI: What Kind of Society? What Are the Risks?

Li Meng

  • Received: 2023-01-03  Published: 2023-05-15  Online: 2023-06-05
  • About the author: Li Meng, PhD in Political Science, Associate Professor, School of International Relations, Beijing Foreign Studies University.
  • Funding:
    Major Project of the National Social Science Fund of China, “Research on the Prevention of Ethical Risks of Artificial Intelligence” (20&ZD041)


Abstract:

The social risks caused by artificial intelligence (AI) are rooted in the injustice of an AI society. To understand the social justice risks of AI more comprehensively, it is necessary to analyze the social form of the AI era from a Marxist perspective. From the perspective of “production justice”, an AI society is one of deeply automated production, which may lead to production justice risks such as the downward aggregation of labor, the weakening of labor capacity, and the “dividualization” of labor. From the perspective of “distributive justice”, an AI society is one of great material abundance but severe inequality in individual, spatial, and temporal distribution. From the perspective of “cognitive justice”, an AI society combines the virtual and the real, which may lead to cognitive justice risks such as the deprivation of rational cognition, of self-control, and of autonomous choice. From the perspective of “development justice”, the contradictions and tensions between AI and human society may lead to development justice problems such as energy competition, the imbalance of rights and responsibilities, and passive resistance, all of which can weaken society’s motivation to pursue justice. The fundamental cause of the justice risks of an AI society lies in the contradiction between the world’s limited resources and the unlimited demands of human beings and AI; the core inducement lies in the injustice inherent in human society itself; and the biggest obstacle is that existing governance tools can hardly act directly on the responsible agents in the field of AI. To address these risks, it is necessary to reasonably delineate energy consumption standards and proportions for AI development, focus on resolving the injustices of traditional society, and penetrate AI’s “black box of responsibility” with human development as the goal.

Key words: artificial intelligence, production justice, distributive justice, cognitive justice, development justice
