AI-Assisted Crime: China Detains Man for Fabricating Fake News with Chatbot

15 May 2023

In an unprecedented case, Chinese authorities have arrested a man accused of using artificial intelligence to generate and disseminate a fake news story. As the world continues to grapple with the implications of AI technology, it seems that China has taken the lead in tackling AI-assisted crimes, raising questions about the ethical use of these tools.

The man, identified by his surname Hong, was arrested in Gansu Province in northwest China. He allegedly used ChatGPT to produce a fabricated news report claiming that a train crash had left nine people dead. Authorities discovered the article had been posted simultaneously to more than 20 accounts on Baijiahao, a blog-style content platform run by Chinese search engine giant Baidu, and had been viewed at least 15,000 times.

This marks the first arrest since China introduced laws in January to regulate AI and 'deep fake' technology. The Administrative Provisions on Deep Synthesis for Internet Information Service target technologies that generate text, images, audio, or video, with specific mention of deep learning models. Creating such content is not outlawed, but it must be clearly labeled as AI-generated.

The police investigation led to a company owned by the suspect, after which officers arrested Hong, seized his computer, and searched his home. According to the police statement, Hong confessed to using ChatGPT to produce multiple versions of the fabricated story based on past trending topics in China. He said he had done so to generate revenue from clicks, having learned about the scheme from friends on WeChat.

Hong's arrest represents a turning point in the ongoing global discussion about AI ethics and regulation. While China's strict oversight of AI reflects the Chinese Communist Party's desire to control emerging technology, the case also demonstrates the importance of addressing AI-generated misinformation. As the UK and US governments slowly awaken to the potential problems posed by AI, it serves as a stark reminder of the urgent need to establish guidelines for responsible AI use and to prevent the spread of fake news.