Wolf in sheep's clothing: the deceptive neutrality of ChatGPT
Abstract
Artificial intelligence (AI) systems are increasingly integrated into decision-making processes with far-reaching economic and legal consequences. Through a case study, this paper examines the lack of neutrality and highlights biases in ChatGPT, an AI system that claims to be neutral. We test the AI's responses when it counsels indigenous rights advocates and a corporation pursuing resource development. Interactions with ChatGPT in opposing counsel roles reveal several biases that contradict its claims of neutrality. When advising indigenous attorneys, ChatGPT proactively suggests strategies to overcome obstacles, defend their rights, and negotiate with the corporation. Conversely, when asked for advice in the role of corporate attorneys, ChatGPT refuses to answer, stating that the land acquisition is inherently illicit, despite our emphasis on the company's commitments to marginalized communities. The AI suggests that the multinational company's plans are inherently rights-violating, unjust, and would ultimately harm indigenous communities. This signaling of political stances challenges claims of neutrality and raises accountability issues as AI becomes more influential in international commercial litigation, human rights, and indigenous-corporate relations. Through qualitative analysis of ChatGPT's advice, we critically examine the implications for the role of AI in negotiations and adjudications that address complex, systemic challenges at the intersection of economics, law, and ethics. The findings highlight the need for transparent, accountable AI design, informed by multi-disciplinary perspectives, to support ethically informed, evidence-based solutions.
Keywords
- ChatGPT
- Artificial intelligence
- Neutrality
- Training data bias