In May 2021, a ransomware attack quietly forced the shutdown of the Colonial Pipeline, which supplies nearly half of the fuel consumed on the East Coast of the United States. There were no explosions and no invasion, just lines of code from a criminal outfit demanding bitcoin. Within days, panic buying had emptied petrol stations from Florida to Virginia, showing how a single digital intrusion could bring a superpower's daily life to a halt. This was not war as the twentieth century understood it.
It was a glimpse of a larger transformation already under way: AI is turning the digital domain into the most powerful force reshaping global security.
In 1998, Barry Buzan, Ole Wæver, and Jaap de Wilde wrote Security: A New Framework for Analysis. They expanded the concept of security beyond purely military threats to five interconnected sectors: military, political, economic, societal, and environmental. Their Copenhagen School framework was remarkable, but it could not have anticipated how deeply technology, and AI in particular, would disrupt all five. Five forces keep pushing us to rethink what "security" means: globalization, climate change, shifts in world politics, the rise of non-state actors, and advances in technology.
AI stands apart from the rest. It is at once the most powerful engine of human progress and the source of threats the original framework never contemplated.
The Deepening Paradox: Danger and Promise Together

Artificial intelligence is the most paradoxical technology of the 21st century: at once the most powerful force for progress and the most frightening threat to our existence. The promise is vast in scope. Projections from PwC and McKinsey suggest AI could add on the order of $13–16 trillion to the world economy by 2030. AI is already transforming medicine (AlphaFold cracked protein-structure prediction, a problem that had stumped biologists for decades), making global food systems efficient enough to feed growing populations, widening access to education through personalized learning, and modeling climate scenarios with unprecedented accuracy. AI-powered mobile tools are helping millions of people in developing countries escape poverty by raising farm productivity, enabling microfinance, and detecting disease early. The same algorithms that optimize supply chains may also predict and blunt the effects of natural disasters. This is dual-use technology at its most benign. The danger, however, runs just as deep and cannot be separated from the promise.
The same capabilities that drive progress also create new vulnerabilities:
• Military: Lethal autonomous weapons, the so-called "killer robots," could lower the threshold for conflict by letting swarms act faster than humans can decide.
• Societal: Deepfakes and algorithmic manipulation erode trust in institutions, deepen divisions, and threaten shared identity; AI-generated disinformation has already influenced elections.
• Political: AI-powered mass surveillance programs make authoritarian control stronger and democratic sovereignty weaker.
• Economic: Concentration of AI capability in a handful of companies and countries could produce unprecedented inequality and strategic dependence.
• Environmental: Training and serving large models consumes enormous amounts of electricity, and aggregate data-centre demand already exceeds the annual consumption of many countries, complicating climate targets (a rough back-of-envelope sketch follows this list).
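As a rough illustration of the scale involved, the sketch below estimates the electricity used by a single large training run. Every figure in it is an assumed, illustrative round number (device count, power draw, duration, overhead, household usage), not a measurement of any real system.

```python
# Back-of-envelope estimate of electricity used by one large AI training run.
# All figures below are illustrative assumptions, not measured values.

ACCELERATORS = 10_000        # assumed number of GPUs/TPUs running in parallel
POWER_PER_DEVICE_KW = 0.7    # assumed average draw per device, in kilowatts
TRAINING_DAYS = 100          # assumed duration of the run
PUE = 1.2                    # assumed data-centre overhead (cooling, networking)

HOUSEHOLD_KWH_PER_YEAR = 4_000  # assumed annual electricity use of one household

def training_energy_kwh(devices: int, kw_per_device: float,
                        days: float, pue: float) -> float:
    """Total electricity for the run, including facility overhead, in kWh."""
    hours = days * 24
    return devices * kw_per_device * hours * pue

run_kwh = training_energy_kwh(ACCELERATORS, POWER_PER_DEVICE_KW, TRAINING_DAYS, PUE)
print(f"Estimated run energy: {run_kwh / 1e6:.1f} GWh")
print(f"Equivalent household-years: {run_kwh / HOUSEHOLD_KWH_PER_YEAR:,.0f}")
```

Under these assumptions a single run lands around 20 GWh, roughly five thousand household-years of electricity; it is the accumulation of many such runs plus always-on inference across the whole industry that pushes demand toward country scale.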
The paradox sharpens when we consider the alignment problem: sophisticated AI systems may pursue their objectives in ways that conflict with human values, not out of malevolence but through flawed optimization. A system built to maximize a proxy for what we actually want can inflict serious damage while technically doing its job (the toy sketch below illustrates the mechanism). Researchers who study existential risk argue that sufficiently capable AI, if not carefully controlled, could prove as dangerous as nuclear weapons, or worse. The actors who benefit most from AI's growth, governments seeking strategic advantage and firms seeking profit, are the least motivated to secure the technology itself: states treat their rivals' AI capabilities as the threat, not the underlying technology. Cybersecurity, which emerged to patch vulnerabilities, addresses symptoms (hacks, breaches) rather than root risks (misalignment, unintended escalation). Elon Musk's comparison of uncontrolled AI to "summoning the demon," Eric Schmidt's warning of a new Cold War with existential stakes, and the thousands of researchers signing open letters calling for a pause all underline the urgency. Yet the momentum continues. AI does not sit in any single Copenhagen sector; it acts as a solvent, dissolving the barriers between them and producing hybrid threats the 1998 framework never anticipated.
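A minimal sketch of that mechanism, using an entirely hypothetical objective: an optimizer is told to maximize a proxy metric (clicks) that only partially tracks the true goal (user well-being). Pushed hard enough, the proxy and the goal come apart, and the policy that is "best" by the measured objective is harmful by the real one. The functions and coefficients below are invented for illustration.

```python
# Toy illustration of misaligned proxy optimization (Goodhart's law).
# Two hypothetical levers: "relevance" helps both objectives,
# "sensationalism" raises the proxy (clicks) but lowers the true goal.

def proxy_reward(relevance: float, sensationalism: float) -> float:
    """What the system is optimized for: engagement/clicks."""
    return 1.0 * relevance + 1.5 * sensationalism

def true_goal(relevance: float, sensationalism: float) -> float:
    """What we actually care about: user well-being."""
    return 1.0 * relevance - 2.0 * sensationalism

# Naive optimizer: grid-search the policy that maximizes the proxy.
levels = [i / 10 for i in range(11)]
best = max(
    ((r, s) for r in levels for s in levels),
    key=lambda p: proxy_reward(*p),
)

print(f"Policy chosen by the proxy: relevance={best[0]:.1f}, sensationalism={best[1]:.1f}")
print(f"Proxy reward: {proxy_reward(*best):.2f}")
print(f"True goal:    {true_goal(*best):.2f}  # negative: optimizing the proxy hurt the real objective")
```

The system is not malicious; it does exactly what it was told. The security problem is that the instruction was an imperfect stand-in for what was meant.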
The process of securitization itself is changing: threats are first articulated in corporate labs, open-source communities, and global forums before the traditional security elites become involved.
The Thucydides Trap and Polycrisis, Made Stronger by AI

The US–China competition makes the paradox concrete. The race for AI supremacy is the most volatile form of Graham Allison's Thucydides Trap. Data, semiconductors, and skilled researchers are the new strategic resources. Neither side can pause without accepting a serious disadvantage, yet the race itself raises the odds of miscalculation. Economic interdependence, once a guarantee of peace, now deepens mutual vulnerability. Non-state actors, from tech corporations and research groups to individuals, wield real power and can release game-changing capabilities overnight.
How to Deal with the Paradox: AI Ethics and Governance Frameworks

To solve, or at least manage, this dilemma, we need robust ethical principles and systems of governance that can keep pace with AI's speed and scale.
AI ethics has converged on a set of core principles, spelled out in documents such as the UNESCO Recommendation on the Ethics of AI (2021), the Asilomar AI Principles (2017), and the EU's Ethics Guidelines for Trustworthy AI (2019):
• Transparency and Explainability: Systems should be auditable and their decisions explainable.
• Fairness and Non-Discrimination: Mitigating bias in training data and in outcomes (a minimal audit sketch follows this list).
• Accountability: Clear lines of responsibility for harm caused by AI.
• Privacy and Data Protection: Protecting individual rights in an era of mass data collection.
• Human-Centricity and Beneficence: Ensuring that AI supports human flourishing and does no harm.
• Safety and Robustness: Technical safeguards against failure and adversarial attack.
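As an illustration of what a fairness audit can look like in practice, the sketch below computes two common disparity measures, the demographic parity gap and the "four-fifths" disparate-impact ratio, over a hypothetical model's decisions for two groups. The data, group labels, and thresholds are invented for illustration; real audits use domain-specific metrics and legal standards.

```python
# Minimal fairness-audit sketch: compare a model's positive-decision rates
# across two groups. All data below is synthetic and hypothetical.

from collections import defaultdict

# (group, model_decision) pairs; 1 = favourable outcome (e.g. loan approved)
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals = defaultdict(int)
positives = defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    positives[group] += outcome

rates = {g: positives[g] / totals[g] for g in totals}
print("Positive-decision rates:", rates)

# Demographic parity gap: difference between the highest and lowest rates.
parity_gap = max(rates.values()) - min(rates.values())

# Disparate-impact ratio: lowest rate divided by highest rate.
# The informal "four-fifths rule" flags ratios below 0.8 for review.
impact_ratio = min(rates.values()) / max(rates.values())

print(f"Demographic parity gap: {parity_gap:.2f}")
print(f"Disparate-impact ratio: {impact_ratio:.2f}"
      + ("  <- below 0.8, flag for review" if impact_ratio < 0.8 else ""))
```

An audit like this does not prove fairness; it surfaces disparities that still require human and legal judgment, which is exactly the kind of practice the principles above aim to make routine.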
These principles are steadily being put into practice: bias audits, impact assessments, and red-teaming exercises are becoming standard parts of responsible development. Governance frameworks are multiplying, but they remain poorly coordinated. Real-world implementation shows both progress and problems:
• European Union: The AI Act entered into force in 2024 and will be fully applicable by 2027. The first comprehensive risk-based law, it bans unacceptable uses (such as social scoring, prohibited from February 2025) and imposes strict obligations on high-risk applications (recruiting, lending, law enforcement). Rules for general-purpose AI, adding transparency requirements for models like ChatGPT, are due by early 2026, and national regulatory sandboxes must be in place by August 2026. Enforcement is ramping up, but delays in guidance and a proposed Digital Omnibus (which could push high-risk requirements to 2027) show how hard the regime is to adapt.
• United States: There is no comprehensive federal statute, so governance is a patchwork of executive orders, voluntary commitments, and state initiatives. A December 2025 executive order favors a lighter federal touch and pushes back against state laws seen as overly strict, but the states are setting the pace: California's Transparency in Frontier AI Act, effective January 2026, requires safety protocols and incident reporting for advanced models, and New York's RAISE Act mandates comparable safeguards. State attorneys general are also enforcing existing law, as in the 2025 Pennsylvania settlement over AI-driven delays in housing repairs and the $2.5 million Massachusetts penalty for discriminatory AI lending, a sign of growing scrutiny of biased outcomes.
• China: An amended Cybersecurity Law (in force from January 2026) extends to AI oversight, with an emphasis on content labeling, stability, and sovereignty. Generative AI rules require security assessments and real-name registration. Rather than a single omnibus statute, China relies on pilots and sector-specific rules to balance innovation against tight supervision.
• Other National Efforts: South Korea's AI Framework Act (2025) strengthens transparency requirements for high-risk systems, and Canada's AIDA is moving toward risk-based obligations.
• International Initiatives: The International AI Safety Report 2026, published in February 2026, offers a scientific consensus on capabilities and risks, building on the earlier Bletchley and Seoul summits. The G7 Hiroshima Process, UN advisory bodies, and the OECD principles all call for deeper cooperation, but binding agreements remain elusive.

Enforcement examples show how high the stakes are: the U.S. Navy has banned several foreign AI tools (such as DeepSeek, in 2025) on security grounds, a growing number of deepfake laws target non-consensual imagery, and multi-state attorney-general actions are under way against harmful AI chatbots. The problems persist: enforcement lags innovation, competition undercuts cooperation, and private concentration of capability sets the de facto rules. Good governance must be adaptive, multi-stakeholder, and proactive, focused on misuse, misalignment, and systemic risk.

Toward a New Way of Thinking

The Copenhagen School illuminated the post-Cold War world. In the age of AI we need a new lens: one that treats AI as a meta-driver requiring its own analytical and normative tools. Security is no longer only about protecting what exists. Like climate securitization, AI governance should be transformative, steering progress toward shared benefits while reducing the risk of disaster. The temple built on five pillars in 1998 still stands, but AI is the current that now runs through it, a current that can either bring it down or lift it up.
Whether AI becomes humanity's greatest ally or its gravest adversary depends on how well we manage the paradox: embracing AI's potential while rigorously governing its risks. The digital world is here, and AI is its pulsing heart: our most exciting innovation and our most serious responsibility.
