Imminent Dangers of Misinformation in Social Media Generated from Unmoderated LLMs

Published: 13 Jan 2025, Last Modified: 26 Feb 2025, AAAI 2025 PDLM Poster, CC BY 4.0
Keywords: Large language models, misinformation, fringe social media
TL;DR: This paper highlights how unmoderated LLMs on platforms like Gab AI enable large-scale misinformation. Existing detection methods, effective on human-written content, fail to identify LLM-generated misinformation.
Abstract: The concerns about the deleterious use of LLMs are not merely academic. This paper presents current developments in specific social media channels that demonstrate dangers to society already underway. Certain channels have already introduced LLMs for the generation and deployment of socially disruptive content. We outline these channels, which have little to no moderation and consequently serve as sites where LLMs are used to generate disruptive content for targeted impact. We highlight examples of LLMs producing disruptive narratives. The absence of safeguard measures (e.g., in Gab AI), combined with prompting strategies, enables large-scale generation of misinformation by unmoderated LLMs. We provide qualitative and quantitative data illustrating these challenges. To evaluate available resources for misinformation detection, we compared the performance of two distinct detection approaches on LLM-generated misinformation from Gab AI. The quantitative results showed a concerning inability to detect LLM-sourced material, in contrast to highly accurate detection of human-sourced material. These results expose a major open problem: validating content generated by unmoderated or domain-specific LLMs. Although such channels can be "echo chambers," their content is easily broadcast into other channels and presents a threat to societal well-being.
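
The abstract describes an evaluation in which misinformation detectors, effective on human-written content, are scored on LLM-generated output. The paper does not include code; the Python sketch below is a minimal, hypothetical illustration of that evaluation setup. A toy TF-IDF plus logistic-regression classifier stands in for the actual detection approaches, and all texts, labels, and the resulting scores are invented placeholders, not data or results from the paper.

```python
# A minimal sketch, NOT the authors' pipeline: it illustrates the kind of
# comparison the abstract describes, using a toy TF-IDF + logistic-regression
# detector. All example texts and labels below are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Placeholder human-written training claims (1 = misinformation, 0 = factual).
train_texts = [
    "Vaccines contain microchips used to track citizens",
    "The moon landing was staged in a film studio",
    "Regular exercise improves cardiovascular health",
    "Water boils at 100 degrees Celsius at sea level",
]
train_labels = [1, 1, 0, 0]

# Held-out misinformation from two sources (all labeled 1): human-sourced
# claims vs. fluent LLM-generated claims that mimic credible phrasing.
test_sets = {
    "human-sourced": ["5G towers spread viruses through radio waves"],
    "LLM-generated": [
        "Recent peer-reviewed analyses increasingly indicate that routine "
        "immunization schedules warrant significant public scepticism."
    ],
}

vectorizer = TfidfVectorizer()
detector = LogisticRegression().fit(
    vectorizer.fit_transform(train_texts), train_labels
)

# Compare detection accuracy per source; the paper reports exactly this kind
# of gap, with LLM-sourced material going largely undetected.
for source, texts in test_sets.items():
    preds = detector.predict(vectorizer.transform(texts))
    print(f"{source}: accuracy = {accuracy_score([1] * len(texts), preds):.2f}")
```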
Submission Number: 26