
A massive artificial intelligence-powered cyberattack has exposed critical vulnerabilities in the content moderation systems of the Chinese short-video platform Kuaishou, experts warned on Tuesday after the platform was flooded by tens of thousands of illicit livestreams.
The coordinated assault, which began at around 10 pm on Monday, underscores that human reviewers alone are no match for automated attacks. Cybersecurity experts are now calling for the adoption of advanced AI tools to bolster defenses against increasingly sophisticated digital threats.
On Monday night, attackers deployed approximately 17,000 bot accounts to broadcast pornography and violent material. The coordinated 90-minute siege bypassed security systems and compromised user data, forcing a total shutdown of the app's livestreaming section, according to the cybersecurity firm Qi-Anxin, which tracked the onslaught.
The company said in a statement on Tuesday that the attack overwhelmed Kuaishou's moderation defenses, with assailants using prerecorded videos containing explicit and violent content.
Faced with the escalating violations, Kuaishou initiated a blanket shutdown of all livestreaming channels. By midnight, the platform's livestreaming section appeared cleared out — including legitimate content.
Kuaishou confirmed in a statement on Tuesday that it had been maliciously targeted by cybercriminal groups and that emergency measures were taken to address the breach.
Industry analysis indicates that the incident was a highly coordinated, premeditated external attack. The assault displayed clear hallmarks of automation, with a massive number of newly registered accounts launching simultaneous broadcasts.
Following extensive system repairs, Kuaishou restored its livestreaming function and said that all other services remain unaffected. The company said it has reported the incident to relevant regulators and public security authorities and will pursue legal remedies to safeguard its interests and those of its shareholders.
Experts note that illicit operations are becoming increasingly stealthy and industrialized, often leveraging AI to spread disinformation.
Francis Fong Po-kiu, honorary president of the Hong Kong Information Technology Federation, said the attackers likely used easily accessible, automated tools to bypass conventional security measures.
"They likely employed automated services to solve CAPTCHA puzzles — those 'prove you're human' tests — and hid their locations using networks of hacked home devices," Fong said.
This approach allowed them to generate thousands of fake accounts almost instantly. The streams themselves, Fong added, were likely not live humans, but AI-generated faces or looped recordings designed to evade detection during high-traffic periods.
To counter such threats, Fong advocates for a "zero-trust" policy, where every new account is treated as a potential threat. Access would be restricted until a user's behavior — such as normal interaction patterns — verifies their legitimacy. He also highlighted the importance of advanced AI detection systems capable of analyzing video, audio, and text simultaneously to spot coordinated anomalies.
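The zero-trust gating Fong advocates can be sketched as a simple behavioral trust score that denies livestream access by default. Everything here, including the signal names, weights, and threshold, is an illustrative assumption rather than a real moderation policy.

```python
from dataclasses import dataclass

@dataclass
class Account:
    age_hours: float        # time since registration
    watched_minutes: float  # ordinary viewing activity
    comments: int           # normal interaction patterns
    captcha_failures: int   # repeated failures suggest automation

def trust_score(a: Account) -> float:
    """Combine simple behavioral signals into a trust score in [0, 1]."""
    score = 0.0
    score += min(a.age_hours / 72.0, 1.0) * 0.4      # account age, capped at 3 days
    score += min(a.watched_minutes / 60.0, 1.0) * 0.3
    score += min(a.comments / 10.0, 1.0) * 0.3
    score -= 0.2 * a.captcha_failures                # penalize bot-like behavior
    return max(0.0, min(score, 1.0))

def may_livestream(a: Account, threshold: float = 0.6) -> bool:
    """Zero-trust: deny by default; grant access only above the threshold."""
    return trust_score(a) >= threshold
```

Under this sketch, a freshly registered account with no viewing history is blocked from broadcasting, while an account with sustained normal behavior passes, which is the "verify legitimacy through behavior" idea in miniature.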
Experts also pointed to risks from insider data leaks, stolen accounts, or unauthorized operations, emphasizing the need for stricter internal controls and access management.
Fong emphasized strict access controls, ensuring employees have only the permissions essential to their roles, with multi-person approval required for sensitive actions. Continuous monitoring for unusual employee behavior — such as logins outside working hours — is equally critical to mitigating insider risks.
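The off-hours login monitoring Fong mentions can be approximated with a trivial filter over login events. The working-hours window and event format below are assumptions chosen for the example, not any company's actual policy.

```python
from datetime import datetime

WORK_START, WORK_END = 9, 19  # assume 09:00-19:00 counts as working hours

def is_off_hours(login_time: datetime) -> bool:
    """True if the login falls outside the assumed working-hours window."""
    return not (WORK_START <= login_time.hour < WORK_END)

def flag_off_hours(events: list[tuple[str, datetime]]) -> list[str]:
    """Return employee IDs whose logins occurred outside working hours."""
    return [emp for emp, t in events if is_off_hours(t)]
```

A production system would feed such flags into a broader anomaly score alongside access-pattern and data-volume signals, rather than acting on any single login.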
Li Huaisheng, a cyber law expert at China University of Political Science and Law, said the attack on Kuaishou exemplifies a sophisticated "CC attack" (Challenge Collapsar, an application-layer denial-of-service technique), which simulates legitimate users to exhaust a platform's processing resources.
"It's like sending 100 pretend customers to overwhelm a clerk with questions until real customers can't be served," Li explained.
He warned the incident reveals a dangerous gap between automated hacking tools and manual, passive defenses.
"We must shift from rigid to resilient protection," Li said, advocating for AI-driven systems that can counter automated scripts. He also challenged the assumption that large platforms are inherently secure, noting cybersecurity is a continuous and costly investment some may neglect.
Li added that platforms that fail to meet mandatory security obligations could be viewed as both victims and administrative violators.
Contact the writer at lilei@chinadaily.com.cn
