“At first, the question seemed too open‑ended to be tackled by conventional economics,” admits Charles Jones, a professor of economics at Stanford University. He explains that estimating a precise budget for averting a potential AI catastrophe involves uncertainties that go far beyond typical cost‑benefit analyses.
Despite the challenges, Jones decided to give it a try. He began by identifying the major risks associated with advanced artificial intelligence—such as loss of control, malicious use, and unintended economic disruption—and then evaluated the range of policy tools that could mitigate those threats.
The economist broke the potential expenditures down into three broad categories.
Drawing on historical analogues, such as the costs of nuclear non-proliferation programs and large-scale cybersecurity initiatives, Jones arrived at a rough estimate of $50 billion to $200 billion per year over the next decade. He emphasizes that this range is highly provisional and that the true figure could shift dramatically as the technology evolves.
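To make the logic of such an estimate concrete, here is a minimal back-of-envelope sketch. The analogue categories and dollar ranges below are entirely hypothetical placeholders, not figures from Jones; the sketch only shows how uncertain per-analogue cost ranges can be combined into an overall annual range:

```python
import random

# Hypothetical annual-cost ranges ($ billions) for each analogue program.
# These placeholders are illustrative only and do not come from Jones.
ANALOGUES = {
    "nuclear_nonproliferation": (10.0, 40.0),
    "cybersecurity_initiatives": (20.0, 80.0),
    "safety_research_oversight": (20.0, 80.0),
}

def sample_total_annual_cost(rng: random.Random) -> float:
    """Draw one total annual cost ($B) by sampling each analogue's range."""
    return sum(rng.uniform(low, high) for low, high in ANALOGUES.values())

def estimate_range(n_draws: int = 100_000, seed: int = 0) -> tuple[float, float]:
    """Return the 10th and 90th percentiles of total annual cost ($B)."""
    rng = random.Random(seed)
    draws = sorted(sample_total_annual_cost(rng) for _ in range(n_draws))
    return draws[int(0.10 * n_draws)], draws[int(0.90 * n_draws)]

if __name__ == "__main__":
    lo, hi = estimate_range()
    print(f"Total annual cost, 10th-90th percentile: ${lo:.0f}B to ${hi:.0f}B")
```

Summed over a decade, the quoted $50 billion to $200 billion per year amounts to roughly $0.5 trillion to $2 trillion in total.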
Jones stresses that the goal of the estimate is not to set a definitive budget but to spark a serious conversation about the level of resources societies are willing to allocate to safeguard the future. “If we ignore the problem, the cost of a disaster could be astronomically higher,” he warns.
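As a rough illustration of that asymmetry, a simple expected-value comparison makes the point. Every number below is a hypothetical placeholder rather than a figure from Jones:

```python
# All numbers are hypothetical placeholders, not figures from Jones.
annual_spend_billions = 200       # upper end of the quoted range
p_catastrophe = 0.05              # assumed probability of catastrophe this decade
loss_trillions = 400              # assumed loss if a catastrophe occurs

expected_loss_billions = p_catastrophe * loss_trillions * 1_000
decade_spend_billions = annual_spend_billions * 10

# Even at these assumptions, the expected loss is ten times the decade's budget.
print(f"Expected loss from catastrophe: ${expected_loss_billions:,.0f}B")
print(f"Decade of preventive spending:  ${decade_spend_billions:,.0f}B")
```

Under these placeholder assumptions the expected loss comes to $20 trillion against $2 trillion of spending, which is the shape of the comparison Jones is gesturing at.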
The economist calls for a collaborative, interdisciplinary approach that brings together AI researchers, economists, ethicists, and legislators. By doing so, he believes we can refine the cost estimates, design effective safeguards, and ultimately steer AI development toward a beneficial outcome for humanity.