by Elise Karinshak and Dr. Yan Jin,
Grady College of Journalism and Mass Communication, University of Georgia
Emerging technologies are revolutionizing digital communication. Artificial intelligence has introduced novel tools for generating both visual and verbal content; ChatGPT recently took the world by storm, with people using it for everything from writing emails to writing code. Algorithms also determine how vast amounts of information are disseminated through digital channels (e.g., recommendation algorithms driving content discovery, such as TikTok’s For You page). As technology ushers in a new era of possibilities, it also presents novel challenges.
While disinformation is a timeless challenge, emerging technologies are redefining its scale, presentation, and impact. At the EUPRERA 2022 Congress in Vienna, Austria, we presented our research investigating disinformation management in an evolving information environment, with a focus on organizational efforts.
Through a cross-disciplinary literature review, we identify the following characteristics that differentiate AI-driven disinformation from previous forms:
- the ability of malicious actors to disseminate and engage with large volumes of content by coordinating automated systems, and
- the inability of both humans and AI to detect synthetic content and actors with certainty.
Our proposed framework relies on the concept of influence, defined as the impact of engagement behaviors (posting, sharing, commenting, liking, etc.) on actors’ subsequent opinions and beliefs. We posit that future digital communication efforts will rely increasingly on influence, as opposed to sheer volume of engagement. This contextualization carries important implications for practitioners, such as building credibility among stakeholders and releasing corroborating content through trusted, diverse sources.
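The distinction between volume of engagement and influence can be made concrete with a toy calculation. The sketch below is purely illustrative and not from our framework's formal specification; the posts, engagement counts, and per-user belief-shift values are invented for demonstration.

```python
# Hypothetical illustration: a post with high engagement volume can carry
# less influence (expected total belief change) than a post with modest
# reach but stronger persuasive effect. All numbers are invented.

posts = [
    # (post_id, engagement count, mean belief shift per engaged user)
    ("A", 10_000, 0.01),  # viral, but persuades almost no one
    ("B", 500, 0.30),     # modest reach, strong persuasion
]

# Volume metric: raw engagement count.
volume = {pid: eng for pid, eng, _ in posts}

# Influence metric: engagement weighted by its effect on beliefs.
influence = {pid: eng * shift for pid, eng, shift in posts}

print(volume)     # A dominates on volume
print(influence)  # B dominates on influence
```

Under these invented numbers, post A wins on volume (10,000 vs. 500) while post B wins on influence (150.0 vs. 100.0), illustrating why the two metrics can rank content in opposite orders.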
These characteristics also highlight platform-level limitations of current social networking and media tools. Many amplification algorithms reference engagement metrics; however, relying on engagement as a proxy for endorsement fuels divisive content and creates growing vulnerability to automated distortion. Additionally, many operational details of these platforms remain opaque; the central role of platforms in modern information dissemination demands stronger mechanisms for accountability, transparency, and consistency.
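The vulnerability of engagement-based amplification to automated distortion can be sketched in a few lines. This is a hypothetical simplification, not a description of any real platform's ranking system; the account-authenticity weights and engagement counts are assumptions for illustration.

```python
# Hypothetical sketch: ranking by raw engagement count vs. discounting
# each engagement by an estimated probability that the engaging account
# is authentic. All data below is invented for illustration.

def rank_by_engagement(posts):
    # Raw engagement as a proxy for endorsement.
    return sorted(posts, key=lambda p: p["engagements"], reverse=True)

def rank_by_weighted_engagement(posts):
    # Discount each engagement by the estimated authenticity of its source.
    return sorted(posts, key=lambda p: sum(p["authenticity"]), reverse=True)

posts = [
    {"id": "organic", "engagements": 300, "authenticity": [0.9] * 300},
    {"id": "botnet", "engagements": 900, "authenticity": [0.1] * 900},
]

print([p["id"] for p in rank_by_engagement(posts)])           # botnet wins
print([p["id"] for p in rank_by_weighted_engagement(posts)])  # organic wins
```

In this toy example, coordinated automated engagement (900 low-authenticity interactions) outranks organic content under raw counts but loses once engagements are weighted by source authenticity (0.1 × 900 = 90 vs. 0.9 × 300 = 270). Of course, as noted above, such authenticity estimates can never be perfect in practice.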
There is a pressing need for continued research on disinformation management strategies in AI-driven contexts and on effective interface and algorithmic design. We look forward to refining our framework through empirical examination and through collaboration among scholars and practitioners, in pursuit of information environments that support truth-oriented conversations.
Bommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora, S., von Arx, S., … & Liang, P. (2021). On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258.
Buchanan, B., Lohn, A., Musser, M., & Sedova, K. (2021). Truth, lies, and automation. Center for Security and Emerging Technology, 1(1), 2.
Karinshak, E., Liu, S., Park, J. S., & Hancock, J. (2023). Working with AI to persuade: Examining a large language model’s ability to generate pro-vaccination messages. Proc. ACM Hum.-Comput. Interact. 7, CSCW1, Article 116 (April 2023), 27 pages. https://doi.org/10.1145/3579592
Jin, Y., Austin, L., & Liu, B. F. (2022). Social-mediated crisis communication research: How information generation, consumption, and transmission influence communication processes and outcomes. The Handbook of Crisis Communication, 151-167.
Lewandowsky, S., & Kozyreva, A. (2022, April 7). Algorithms, lies, and social media. Nieman Lab. https://www.niemanlab.org/2022/04/algorithms-lies-and-social-media/