The Ethical Consequences of the “AI-as-Colleague” Narrative in Generative Artificial Intelligence: A Business-Virtue Governance Analysis Based on Policy Texts

Authors

  • ZHANG Xufeng
  • LI Han

DOI:

https://doi.org/10.65196/tfy2rz47

Keywords:

generative artificial intelligence, policy analysis, virtue ethics, anthropomorphism, responsibility dilution, organizational governance

Abstract

In multi-scenario corporate deployments, generative artificial intelligence is frequently framed as an “AI colleague” or “AI assistant”. While such framing can increase collaboration efficiency, it may also trigger responsibility diffusion, weaken prudential judgment, and erode organizational integrity. Grounded in virtue ethics and the concept of organizational virtue, this study employs policy analysis and qualitative content analysis to code and compare China’s relevant governance texts with international frameworks, including those of UNESCO and the OECD, the NIST AI Risk Management Framework (AI RMF), and the EU AI Act. We examine how institutional mechanisms—transparent notice, human oversight, risk assessment, and traceable remedies—are institutionalized to promote prudence, responsibility, and fairness. We find that policies generally emphasize controllability, accountability, and perceptibility/explainability, yet devote insufficient attention to the attributional shifts caused by anthropomorphic narratives. We therefore recommend incorporating anthropomorphic-design risks into risk-assessment checklists, strengthening cues that reinforce ultimate human responsibility and internal accountability matrices, and refining requirements for transparency and uncertainty communication in conversational systems.

Published

2026-02-28

Section

Articles

How to Cite

The Ethical Consequences of the “AI-as-Colleague” Narrative in Generative Artificial Intelligence: A Business-Virtue Governance Analysis Based on Policy Texts. (2026). Journal of Humanities and Social Sciences Exploratio, 2(2), 39–46. https://doi.org/10.65196/tfy2rz47