Published: 22:44, April 29, 2026
We don’t know enough yet about AI to regulate it
By Michael Edesess

After a relentless bombardment of proclamations over the last two or three years that artificial intelligence will either kill us all or solve all our problems, it was refreshing to read a comment made on April 11 at the inaugural Hong Kong Global AI Governance Conference at the University of Hong Kong.

As reported by the South China Morning Post, the comment was made in a panel discussion by Alibaba Group Holding policy lead Fu Hongyu. “We are in a dilemma that can be called common ignorance,” he said. “We do not know what is going on and where the technology is going.”

Experts are constantly asked to make predictions in their fields, and it is difficult to resist the temptation. But we don’t yet know much about the future of AI: new revelations about what it can or might do emerge almost every week, and so do revelations of its unreliability.

Hence, it is reassuring to have an ongoing series of conferences, discussions, and debates on how it should be governed, such as this month’s HKU conference. But they must remain modest in their prescriptions until more is known.

The latest startling revelation is what Anthropic’s new Mythos model can do. Mythos was able to rapidly identify bugs in widely used software relied on by large technology companies like Microsoft, Apple, Google, and Meta. This is of great concern: if malicious actors learn about such bugs, they may be able to exploit them to hack into programs, violate the privacy of financial or medical databases, or threaten national security. Because of that, Anthropic wisely decided not to share Mythos with the public, but only with about 50 large technology firms, so that they could find and fix the bugs quickly. Such bugs are called “zero-days”, because once one is discovered, the maker of the software has had zero days to fix it.

The kind of thing that Mythos can do — although it is not yet clear how well — has already been done, many times, by human “hackers”: computer programmers who are expert at rooting out bugs in software. The cat-and-mouse game between these hackers and large technology companies was documented in New York Times cybersecurity reporter Nicole Perlroth’s fascinating 2021 book, This Is How They Tell Me the World Ends: The Cyberweapons Arms Race. The hackers, who may be independent or work for government security agencies, try to find bugs. If they’re independent and they find a serious bug in a program that a technology company like Microsoft or a national security agency relies on heavily, they can offer to sell the information so the bug can be fixed quickly. There is a huge market for this, and some zero-days carry tremendous value: one that enables a hacker to take over a user’s phone without the owner’s knowledge can sell for millions of dollars.


What kind of government regulation could be put in place to manage this? It is difficult to imagine, especially since it might not be possible to envision future dangers until they occur. The best response is for the AI developer to take immediate action to minimize risk, as Anthropic did.

But what regulatory repercussions could Anthropic’s response have? As I mentioned, individual hackers who find a critical bug in software can be paid millions of dollars to report it, either by a national security agency or by a large technology firm. We don’t know whether Anthropic is compensated for reporting those bugs only to the 50 large technology firms, or for sharing Mythos only with them. But the question arises whether it is unfair to deny access to the software to companies not included in the original 50. Some existing regulations would suggest that it is. Furthermore, suppose it had been regulations, rather than Anthropic’s own decision, that mandated sharing Mythos only with a trusted subset of technology companies; how could those regulations have been written?

There is also the problem that enforcing government-imposed regulations can take time. And as we have seen in the case of Mythos, when a serious and present danger must be averted, there may be little time for regulation to take effect. Writing such regulations, let alone deciding how to enforce them, is futile when there is only a very limited idea of what they would be regulating against.

Right now, the best course of action is to establish an AI industry consortium to develop standards for responsible AI development and application. These standards would be more flexible and could be implemented more quickly than government regulations. Eventually, as experience with standard-setting accumulates, some government regulation might become advisable, perhaps requested by the AI industry consortium itself. This may seem risky because it would amount to the AI industry lobbying for regulation. But lobbying is a double-edged sword: on the one hand, it can be self-serving; on the other, it is carried out by those who truly know the industry and are most likely to know which regulations would work.

Even though an urgent need to control specific AI capabilities may arise from time to time, as in the Mythos crisis, attempting to formulate regulations in advance to guard against them would be premature. It is better for governments to wait and carefully monitor developments. In the meantime, governmental regulation should be limited to light-handed, flexible oversight in close cooperation with the industry.


The author is a mathematician and economist with expertise in finance, energy, and sustainable development. He is an adjunct associate professor at the Hong Kong University of Science and Technology.

The views do not necessarily reflect those of China Daily.