Kangaroos may have something to teach humans about AI. In 2017, Volvo began equipping its vehicles with a camera- and radar-based "large animal detection" system designed to monitor the road ahead, warn the driver if it spots a large animal, and automatically apply the brakes if needed.
Volvo, a Swedish company now owned by Chinese company Geely, trained the system and tested it on Sweden's roads. The device identified moose, cows and reindeer — the kind of animals a driver might encounter on a Swedish road. But when Volvo tested the device on Australian roads, its AI system failed to recognize kangaroos as large animals, partly because they move by jumping. A device trained to stop a car for a reindeer would not do so if it encountered a kangaroo.
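The failure is a classic case of a model meeting data outside its training distribution. The toy sketch below (not Volvo's actual system; the feature, profiles, and thresholds are all invented for illustration) shows how a detector tuned to the steady gait of moose, cows and reindeer can simply fail to match a hopping kangaroo:

```python
# A toy sketch, NOT Volvo's real system: why a detector trained on one
# set of movement patterns can miss an unfamiliar animal.

# Hypothetical training profiles: animals seen on Swedish roads, described
# by a crude "gait steadiness" feature (how constant the animal's body
# height stays while it moves; values here are made up).
TRAINED_PROFILES = {
    "moose":    0.95,
    "cow":      0.97,
    "reindeer": 0.94,
}

MATCH_THRESHOLD = 0.15  # assumed tolerance around each known profile

def detect_large_animal(observed_steadiness: float) -> bool:
    """Return True if the observation matches any trained animal profile."""
    return any(
        abs(observed_steadiness - profile) <= MATCH_THRESHOLD
        for profile in TRAINED_PROFILES.values()
    )

def brake_decision(observed_steadiness: float) -> str:
    """Map a detection to the car's action."""
    return "brake" if detect_large_animal(observed_steadiness) else "ignore"

# A reindeer keeps a steady body height while moving -> matched, car brakes.
print(brake_decision(0.93))  # brake
# A kangaroo hops, so its body height varies wildly -> no profile matches,
# and the car ignores it: the failure mode Volvo encountered.
print(brake_decision(0.30))  # ignore
```

The point of the sketch is not the specific feature but the structure: any system that decides by matching against profiles learned in one environment will silently return "no match" in a new one unless it is retrained on local data, which is what Volvo's Australian testing effort amounted to.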
Volvo responded by opening a large animal training center for its cars in Australia, setting an example of how companies can responsibly adapt AI systems to local conditions.
The episode holds three lessons. First, an AI system that is poorly designed or hastily fitted into a vehicle can cause harm. Had Volvo not identified and rectified the problem, some drivers might have relied on the system to detect kangaroos and ended up in an accident.
Second, if designed properly, AI-based devices can work wonders. More than 7,000 collisions with kangaroos are reported in Australia every year. A device fitted in a vehicle that can recognize not just static obstacles on the road but also animals that might move into the vehicle's path can quite literally save lives.
And third, if Volvo had advertised its cars in Australia, claiming the devices fitted in them can detect large animals without ensuring they actually can, it would have risked paying fines and facing lawsuits under the country's advertising law or product liability principles, even in the absence of a special law on the use of AI.
Calls for a specific law on AI have been growing. But those demanding such a law often fail to appreciate three key facts. One, the goal should not be to regulate AI for the sake of regulation, but to minimize any harm that might arise from its use. Two, the use of AI is likely already regulated in some way, as the kangaroo example shows. The fact that an activity may now be AI-assisted does not necessarily mean we need a new law. And three, a law that is too broad may not provide enough guidance for applying AI in specific cases.
True, regulating an AI-based device used on public roads seems appropriate. But if we wish to enact a law to ensure the safety of AI systems used in cars, it would make more sense to entrust the task to the transportation department rather than to a general-purpose AI ministry.
The transportation department of any country will have far more expertise and experience in ensuring road safety than a regulator that tries to supervise the use of AI in areas as varied as video recommendation algorithms and AI-based medical research. An AI regulator alone cannot oversee the huge range of AI applications, nor can a general-purpose AI law cover all their complexities.
Ensuring AI safety is essential to building trust in AI. If consumers suffer due to poorly designed AI or feel the AI system does not have their interests at heart, they will not trust the company deploying that AI system. Consumers across the world want AI systems specifically designed for them. Enterprises developing AI and selling AI-based products are always under pressure to be first to the market. Succumbing to that pressure, such enterprises might offer a product in a new environment before it is fully tested.
At a time when many remain skeptical of AI's impact on humans, DeepSeek shocked the world by offering its powerful AI model for free. In light of such promising developments, enterprises should take measures to ensure their systems are safe not just in their home country but also in other countries and new environments, even where kangaroos roam.
The author is the Scott K. Ginsburg Professor of Law and Technology at Georgetown University.
The views don't necessarily reflect those of China Daily.