Published: 00:39, March 19, 2024 | Updated: 09:32, March 19, 2024
Let's not lose our heads over artificial intelligence
By Michael Edesess

We have become used to “Googling”. The verb has long since entered the official English lexicon, sanctioned by the US-based Merriam-Webster dictionary and the UK-based Oxford English Dictionary.

Let’s suppose you were researching some topic; for example, the number and severity of malaria cases in Zambia and Uganda. As you have probably become accustomed to doing for some years now, you would have Googled something like “cases of malaria in Zambia and Uganda”. Google would then have displayed a list of websites with information on the topic. You could have looked at five or 10 of these websites and selected a few sentences or paragraphs about the subject from each. If you were not particularly concerned about plagiarism, you could simply have concatenated the selected sentences or paragraphs and presented the result as a research paper.

As you did this, the thought might have occurred to you that what you had done might be easily automated. After all, what you did, you did almost automatically, almost without thinking. Anything you can do automatically, almost without thinking, will likely be machine-automatable.
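To make the point concrete, the whole routine described above (search, skim a handful of pages, lift a few sentences from each, stitch them together) fits in a few lines of code. What follows is a minimal sketch, assuming a hypothetical search function standing in for whatever search backend one might use; the URLs are placeholders, not real sources.

```python
import re
import urllib.request

def search(query: str) -> list[str]:
    """Hypothetical stand-in for a real search API; any backend would do.
    For illustration it just returns a fixed list of placeholder URLs."""
    return [
        "https://example.org/malaria-zambia",
        "https://example.org/malaria-uganda",
    ]

def fetch_text(url: str) -> str:
    """Download a page and crudely strip its HTML tags."""
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "ignore")
    return re.sub(r"<[^>]+>", " ", html)

def naive_research(query: str, n_sites: int = 5, n_sentences: int = 2) -> str:
    """Concatenate a few sentences from each of the top search results."""
    snippets = []
    for url in search(query)[:n_sites]:
        text = fetch_text(url)
        sentences = re.split(r"(?<=[.!?])\s+", text)
        snippets.extend(s.strip() for s in sentences[:n_sentences] if s.strip())
    # The "research paper": other people's sentences, stitched together verbatim.
    return " ".join(snippets)

print(naive_research("cases of malaria in Zambia and Uganda"))
```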

Guess what — it has been done. We’ve seen the result: ChatGPT (which has now been upgraded to GPT-4).

The AI skeptic Gary Marcus has shown that this is essentially what happens. He presents examples of GPT-4 output in which extensive excerpts of text from The New York Times are reproduced virtually verbatim.

Marcus shows that this is not only true of text generated by OpenAI’s product, GPT-4. OpenAI’s image software, too, he says, “is perfectly capable of verbatim and near-verbatim repetition of sources as well”.

For example, he shows the image GPT-4 produced when given the two-word prompt “animated toys”. The result, Marcus says, contains “a whole universe of potential trademark infringements”.

In none of these GPT-4 results cited by Marcus is the original credited.

The point is not so much that GPT-4’s corporate creator, OpenAI, can be sued for copyright and trademark infringement; The New York Times has, in fact, already brought such a lawsuit. The point is that GPT-4 does not represent a giant leap. It is only a small and obvious step beyond mere Googling, a practice that, though still astounding when we think about it, has become utterly routine.

Nevertheless, the sudden hype over AI precipitated by ChatGPT and GPT-4 has created wild speculation, including that it will kill us all.

AI has acquired the status of a meme, hyped by people who mimic other people’s hype without really understanding what the technology is or what it is good for; blockchain, to take one example, met the same fate.

The hype, importantly, includes the universally cited assertion that AI requires enormous numbers of the most advanced semiconductors and petabytes of data.

But if it could have been done as little more than a small step beyond Googling, as in my example, does it need all that advanced equipment and massive quantities of data to work? Isn’t it just possible that it could work nearly as well, and just as well for most applications — even including some of the most advanced applications — with equipment that is not on the very leading edge of technology but somewhere closer to the trailing edge?

If this is true — and I see no real reason why it couldn’t be — then China’s much-vaunted race to achieve the semiconductor proficiency of the United States and its Western allies, and the West’s supposedly crucial need to check China’s progress, have been assigned more urgency than they deserve.

Without a doubt, AI applications run the gamut from a very few that require the utmost in computing power and data — and these might not even be the most critical applications for society; they might include, for example, video gaming — to many that do not need such a high level of computing power or such massive quantities of data. It is essential to make the distinction when we talk about AI.

None of this is to say that AI does not pose potential threats, although “killing us all” is unlikely to be one of them. Social media has posed severe threats ever since its explosion into the world with Facebook and similar apps, including many dangers that we didn’t anticipate. All countries are wrestling with how to contain those threats. For one thing, social media appears to bear much of the responsibility for the problems faced by young people today, and by older people who use it to fill a void in their lives.

AI will pose problems and threats, too — some similar to those posed by social media — and we need to explore and try to anticipate what those threats may be and how to address them with public policy.

But just as Googling did — and despite the temporary “startle effect” created by large language models like GPT-4 — AI will not change the world all at once. It will grow on us and be absorbed into common practice over time. This should give us plenty of time to study its potential effects on society and technology and to think carefully about what to do about them.

The author is a mathematician and economist with expertise in the finance, energy and sustainable-development fields. He is an adjunct associate professor at the Hong Kong University of Science and Technology.

The views do not necessarily reflect those of China Daily.