AI Kuru, cybersecurity and quantum computing

As we continue to delegate more infrastructure operations to artificial intelligence (AI), quantum computers are advancing towards Q-day (i.e., the day when quantum computers can break current encryption methods). This could compromise the security of digital communications, as well as autonomous control systems that use AI and ML to make decisions.

AI quantum computers

As AI and quantum computing converge to reveal extraordinary new technologies, they will also combine to produce new threat vectors and to supercharge quantum cryptanalysis.

AI and quantum computing convergence

AI’s post-LLM evolution will require quantum enhancements, because GPU-powered data centers are now reaching hard energy and compute ceilings. Efficiency gains aside, doubling a digital computer’s power requires doubling its transistor count.

For quantum computers, each additional logical qubit doubles the machine’s computing power, which justifies the vast investment and the global arms race to produce them. The implications are profound: a few hundred logical qubits represent more compute power than all the digital computers that could ever be built, at any scale. This opens the door to otherwise inaccessible resources and algorithms for AI and many scientific fields. Shor’s algorithm is now the most famous for breaking encryption, but many more are to come.
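A back-of-the-envelope sketch in Python makes the doubling concrete: describing the state of n logical qubits classically takes 2^n complex amplitudes, so each added qubit doubles the memory a digital computer would need just to store it. The qubit counts and byte sizes below are illustrative, not tied to any specific machine.

```python
# Back-of-the-envelope: classical memory needed to store the full
# state vector of n logical qubits (2**n complex amplitudes,
# 16 bytes each as complex128). Each added qubit doubles the total.
BYTES_PER_AMPLITUDE = 16  # complex128: two 8-byte floats

def state_vector_bytes(n_qubits: int) -> int:
    """Bytes required to hold the 2**n amplitude state vector."""
    return (2 ** n_qubits) * BYTES_PER_AMPLITUDE

for n in (30, 50, 100, 300):
    print(f"{n:>3} qubits -> {state_vector_bytes(n):.3e} bytes")

# 30 qubits fits in a workstation's RAM (~17 GB); 50 needs ~18 PB;
# 100 already exceeds estimated global storage capacity; and at 300
# the amplitude count passes the number of atoms in the observable
# universe.
```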

As these hybrid computers grow, so will their data requirements. AI-generated content may have already surpassed human-generated content, and the extinction of a reliable representation of human-dominated data probably began around 2020. A glance at LinkedIn reveals uniform graphics and thumbnails with generic language resembling summaries rather than original thoughts. AI diagnosticians have described the symptoms of a chronic disease variously characterized as model collapse, MADness, and so on, in which AI’s primary source of nutrition is AI-generated junk food, euphemistically known as synthetic data. A better analogy and name is Kuru, the fatal disease spread by consuming one’s own kind, but this isn’t just a sickness; it’s an omen.
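A toy simulation shows the mechanism the diagnosis describes. Here each "model generation" is just a Gaussian fitted to samples drawn from the previous generation's fit; this is a deliberately minimal sketch of recursive training on synthetic data, not a claim about any real model's dynamics.

```python
# Toy illustration of model collapse ("AI Kuru"): repeatedly fit a
# Gaussian to samples drawn from the previous generation's fit.
# Sampling noise compounds, and the learned distribution drifts and
# narrows -- an analogue of a model feeding on its own output.
import random
import statistics

random.seed(42)

mu, sigma = 0.0, 1.0          # the original "human" data distribution
SAMPLES_PER_GENERATION = 200

for generation in range(1, 11):
    data = [random.gauss(mu, sigma) for _ in range(SAMPLES_PER_GENERATION)]
    mu = statistics.fmean(data)       # the next model learns only from
    sigma = statistics.stdev(data)    # its predecessor's synthetic output
    print(f"gen {generation:2d}: mu={mu:+.3f} sigma={sigma:.3f}")

# Over many generations the variance tends to shrink toward zero:
# the tails of the original distribution are the first casualties.
```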

Just as hackers and intelligence agencies have leveraged every piece of cyber machinery to their advantage, the same will happen with AI, long before any of the big names ever turn a profit.

Everyone has by now received perfectly written phishing emails with none of the telltale signs, and AI detection tools find it increasingly difficult to differentiate between human and AI-generated content.

Attacks on fully AI-controlled systems will look more like Stuxnet and less like WannaCry, where the assault was obvious. Data will be targeted not for theft but for corruption, influence, and exploitation of AI systems. These attacks will be among the most difficult to detect and remediate because they emulate how AI is trained today: often on synthetic data with the same statistical features as the authentic original.
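A minimal sketch of why such poisoning evades naive checks: the poisoned batch below is drawn from the same empirical mean and covariance as the clean data, so a simple distribution test sees almost nothing, while every label has been flipped. The "telemetry" features and labels are hypothetical toy data, not a real attack tool.

```python
# Statistically camouflaged poisoning: the poison batch matches the
# clean data's mean and covariance, so distribution checks pass even
# though the labels a model would train on are inverted.
import numpy as np

rng = np.random.default_rng(0)

# "Authentic" telemetry: two features, label = 1 if the feature sum > 0
clean_x = rng.normal(size=(1000, 2))
clean_y = (clean_x.sum(axis=1) > 0).astype(int)

# Poison: resample from the SAME empirical mean and covariance...
mean, cov = clean_x.mean(axis=0), np.cov(clean_x, rowvar=False)
poison_x = rng.multivariate_normal(mean, cov, size=200)
# ...but invert the labels the victim model will learn from.
poison_y = 1 - (poison_x.sum(axis=1) > 0).astype(int)

# A naive drift detector comparing feature means sees almost nothing:
print("mean shift:", np.abs(clean_x.mean(axis=0) - poison_x.mean(axis=0)))
```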

Data contamination and poisoning have already begun, but the most secure networks maintain their integrity through strongly encrypted channels and the cryptographic discipline established over the last two decades. That will be insufficient against cryptographically relevant quantum computers.

How far off is this threat?

The transition to post-quantum cryptography (PQC) will take at least a decade for larger enterprises and governments, and likely much longer.
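Most migrations are expected to begin with hybrid schemes that combine a classical key exchange with a post-quantum KEM, so a session remains safe if either primitive falls. The sketch below shows the combining pattern using X25519 and HKDF from the pyca/cryptography package; the ML-KEM secret is stubbed with random bytes because PQC bindings vary, so treat it as a placeholder, not a real encapsulation.

```python
# Minimal sketch of hybrid key derivation: the session key depends on
# BOTH a classical ECDH secret and a post-quantum KEM secret, so
# breaking one primitive alone is not enough.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Classical half: X25519 Diffie-Hellman
client_priv = X25519PrivateKey.generate()
server_priv = X25519PrivateKey.generate()
ecdh_secret = client_priv.exchange(server_priv.public_key())

# Post-quantum half: placeholder for a real ML-KEM encapsulation
mlkem_secret = os.urandom(32)  # stand-in, NOT an actual KEM output

# Derive the session key from the concatenation of both secrets
session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"hybrid-kex-demo",
).derive(ecdh_secret + mlkem_secret)
print(session_key.hex())
```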

The scale of networks and data has exploded since the last upgrade in encryption standards, and that same growth has made large language models (LLMs) and their related specialized technologies possible. While generic versions are interesting and even fun, powerful AI will be trained on expertly curated data for specific tasks. Such systems will quickly consume all the historical research and information ever produced and deliver deep insights and innovations at an accelerating pace. They will augment human ingenuity, not replace it, but there will be a disruptive period for cybersecurity.

If a cryptographically relevant quantum computer arrives before PQC is fully deployed, the consequences in the AI era are unknowable. Regular hacking, data loss, and even disinformation on social media will be fond memories of the good old days before AI controlled by bad actors became the largest producer of cyber carcinogens. When AI models themselves are compromised, the compounded impact of feeding maliciously tailored data to live AI-controlled systems will become a global concern.

The debate in Silicon Valley and government circles is already raging over whether AI should be authorized to carry out lethal military actions. This is absolutely the future, regardless of the current handwringing.

However, the defensive actions are clear and urgent for most networks and commercial activity. Critical infrastructure architectures and networks must evolve quickly, with vastly stronger security that addresses both AI and quantum. The one-size-fits-all simplicity of upgrading TLS libraries won’t cut it with so much at stake and combined AI-quantum attacks still unknowable.

Internet 1.0 was built on outdated 1970s assumptions and parameters, predating modern cloud technology and its massive redundancy. The next version must be exponentially better and anticipate the unknown, on the assumption that our current security estimates are wrong. Cybersecurity must not be caught off guard by the AI version of Stuxnet; the last go-around had warning signs years in advance.
