Abstract
With the rapid advancement of AI, there is a possibility of rogue human actors taking control of a potent AI system, or of an AI system redefining its objective function such that it presents an existential threat to humanity or severely curtails its freedom. Some therefore suggest an outright ban on AI development, while others propose international agreements to constrain specific types of AI. These approaches are untenable because countries will continue developing AI for national defense regardless. Others suggest an all-powerful, benevolent "one-AI" that acts as an AI nanny. Such an approach, however, relies on the everlasting benevolence of the one-AI, an untenable proposition; moreover, such an AI is itself subject to capture by a rogue actor. We present an alternative approach that uses existing mechanisms and the time-tested economic concepts of competition and marginal analysis to limit the centralization and integration of AI, rather than AI itself. Instead of depending on international consensus, it relies on countries acting in their own best interests. We recommend that, through regulation and subsidies, countries promote the independent development of competing AI technologies, especially those with decentralized architectures. The Sherman Antitrust Act can be used to limit the domain of an AI system, training module, or any of its components. This will increase the segmentation of potent AI systems and force technological incompatibility across systems. Finally, cross-border communication between AI-enabled systems should be restricted, something countries such as China and the US are already inclined to do to serve their national interests. Our approach can ensure the availability of numerous sufficiently powerful AI systems, largely disconnected from each other, that can be called upon to identify and neutralize rogue systems when needed. This setup can provide sufficient deterrence to any rational human or AI system contemplating an attempt to exert undue control.
| Original language | English |
|---|---|
| Article number | e0181870 |
| Pages (from-to) | 971-983 |
| Number of pages | 13 |
| Journal | AI and Society |
| Volume | 40 |
| Issue number | 2 |
| State | Published - Feb 2025 |
Bibliographical note
Publisher Copyright: © The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2023.
ASJC Scopus Subject Areas
- Philosophy
- Human-Computer Interaction
- Artificial Intelligence
Keywords
- AI policy
- AI regulation
- Decentralized AI
- Existential risks
- Safe AI
- X risks