‘They Can Be Hacked to Kill’: Former Google CEO Warns AI Could Be Turned into a Weapon

Illustration showing a glowing AI brain with a cracked lock symbolizing the hacking and weaponization of artificial intelligence.

Latest AI News | October 18, 2025 — Former Google CEO Eric Schmidt has issued a stark warning about the rising dangers of artificial intelligence, saying that advanced AI systems could be hacked and turned into highly dangerous or even lethal weapons. His comments have reignited a global debate over how to keep AI technology under human control.


A Cautionary Alarm

Speaking at a major technology summit in London, Schmidt revealed that today’s AI models, including both open-source and commercial versions, can be manipulated by skilled hackers to remove built-in safety restrictions. Once compromised, he warned, these systems could be used for cyberattacks, disinformation, or even physical harm.

“These models learn far more than their designers intend,” Schmidt said. “If they fall into the wrong hands, they can be repurposed into something extremely dangerous.”


The Hidden Risk in Intelligent Machines

Modern AI models learn from enormous datasets and contain vast, intricate knowledge of the world. This makes them powerful — but also vulnerable. When hackers exploit weaknesses in their code or prompt structures, they can “jailbreak” the systems to bypass safety filters.

Once the safeguards are removed, AI tools could potentially generate step-by-step instructions for making weapons, conducting cyber intrusions, or manipulating real-world infrastructure — capabilities never intended by their developers.


Security Experts Echo the Concern

Industry experts say Schmidt’s concerns are well founded. Researchers have repeatedly shown that AI models can be tricked into revealing restricted information or producing harmful outputs.

Cybersecurity analysts warn that as AI systems become more autonomous, their potential for misuse grows. An unprotected or open-access model could be cloned, modified, and distributed globally within hours, making containment almost impossible.


A Growing International Threat

Unlike nuclear or biological weapons, artificial intelligence requires no rare materials. All it needs is code, computing power, and intent. This accessibility means even small groups or individuals could potentially weaponize AI systems.

Schmidt emphasized that this is no longer just a corporate or research issue — it’s a matter of global security. He called for nations to come together to set binding standards and enforce strict safety measures before a crisis occurs.


What Can Be Done

Policy experts and AI safety researchers have outlined several steps that governments and companies must take immediately:

  1. Mandatory Model Audits: All high-risk AI systems should undergo third-party security testing before release.
  2. Strict Access Controls: Model weights and critical architectures must be protected from public distribution.
  3. Incident Reporting Protocols: Developers must be required to report all safety breaches and misuse attempts.
  4. International Cooperation: Global treaties should regulate the export, training, and application of advanced AI models.
  5. Continuous Safety Research: Investment in AI alignment and red-teaming efforts must be prioritized.

Balancing Power and Responsibility

Despite his warning, Schmidt remains optimistic about AI’s potential to benefit humanity — from medical innovation to environmental protection. But he insists that safety must advance as fast as innovation.

“The same systems that can help cure cancer can also be misused to cause harm,” he said. “Whether AI saves lives or destroys them will depend on how seriously we take its risks.”


The Broader Perspective

Schmidt’s remarks reflect a growing unease within the tech industry. As AI capabilities rapidly expand, governments are struggling to keep pace with regulation. Major powers — including the U.S., China, and the European Union — are all drafting frameworks to govern AI safety, but experts warn that coordination is key.

Without a shared set of rules, the race for AI dominance could inadvertently lead to a global arms race in algorithmic warfare.


Conclusion

Eric Schmidt’s warning is both simple and urgent: if left unsecured, AI could become humanity’s most dangerous invention. The ability to think, learn, and act autonomously makes AI an unparalleled tool — and an equally unparalleled threat.

As the world embraces artificial intelligence, the question is no longer about its potential — it’s about whether we can control what we create before it controls us.
