Generative AI systems are causing concern because they give malicious actors the ability to deceive, manipulate, and steal at scale, with serious consequences for trust, democracy, and society. Examples of misuse range from election interference to fake product reviews. But the picture is more nuanced than headlines suggest. Researchers at Google DeepMind and Google Jigsaw, including Nahema Marchal and Rachel Xu, have been studying how generative AI is actually being misused in practice.
By analyzing over 200 media reports published between January 2023 and March 2024, the researchers identified distinct patterns of malicious activity involving generative AI, which fall into two main groups: exploiting the capabilities of generative AI systems, and compromising those systems to access protected information or perform unauthorized tasks. The exploitation category covers tactics such as impersonating real people with realistic likenesses, generating non-consensual sexual imagery, falsifying documents, and automating content production at scale. Compromising a system, by contrast, involves attacking the model itself, for example through jailbreaking or prompt injection, to bypass its safeguards.
Interestingly, the researchers found that most malicious uses of generative AI require little technical skill; they rely instead on readily available AI capabilities. This has given rise to new forms of communication that blur the line between authentic and deceptive content. During elections in India, for example, AI-generated avatars of politicians addressed voters by name in their preferred language, and deepfake videos were used to spread campaign messages and burnish politicians' public images.
Beyond influencing public opinion, a common goal for malicious actors is to monetize AI-generated content. This can mean churning out low-quality articles, books, or advertisements to attract views and ad revenue. More disturbingly, the researchers also document a trend of producing non-consensual sexual imagery for commercial gain, such as selling services to “nudify” women.
While the research sheds light on how generative AI is being misused, it has limitations: it is based on media reports, which tend to favor sensational examples and may overlook quieter forms of misuse. Even so, the researchers emphasize that addressing the problem effectively will require a collaborative approach involving policymakers, researchers, industry leaders, and civil society.
In conclusion, the misuse of generative AI poses a significant threat to society, and timely action is needed to mitigate its harms. By understanding the tactics malicious actors employ and raising awareness of their consequences, we can work to safeguard the integrity of public communication and uphold public trust.