The Ethical Consequences of the “AI-as-Colleague” Narrative in Generative Artificial Intelligence: A Business‑Virtue Governance Analysis Based on Policy Texts
DOI:
https://doi.org/10.65196/tfy2rz47

Keywords:
Generative artificial intelligence, policy analysis, virtue ethics, anthropomorphism, responsibility dilution, organizational governance

Abstract
In multi-scenario corporate deployments, generative artificial intelligence is frequently packaged as an “AI colleague/assistant”. While such framing can increase collaboration efficiency, it may also trigger responsibility diffusion, weaken prudential judgment, and erode organizational integrity. Grounded in virtue ethics and the concept of organizational virtue, this study employs policy analysis and qualitative content analysis to code and compare China’s relevant governance texts with international frameworks, including those of UNESCO and the OECD, the NIST AI Risk Management Framework (AI RMF), and the EU AI Act. We examine how institutional mechanisms—transparent notice, human oversight, risk assessment, and traceable remedies—are designed to promote prudence, responsibility, and fairness. We find that policies generally emphasize “controllability, accountability, and noticeability/explainability”, yet devote insufficient attention to the attributional shifts caused by anthropomorphic narratives. We therefore recommend incorporating anthropomorphic-design risks into risk-assessment checklists, strengthening cues that reinforce ultimate human responsibility and internal accountability matrices, and refining requirements for transparency and uncertainty communication in conversational systems.
Copyright (c) 2026 Journal of Humanities and Social Sciences Exploration

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.