Josh Tucker: Propaganda is already influencing large language models: Evidence from training data, audits and real-world usage.
Join Joshua Tucker as he reports on a concerning phenomenon: political propaganda influences large language models (LLMs) through their training data. Drawing on five studies, he presents evidence that Chinese state media affects LLM outputs. His findings suggest that as generative AI spreads, states may have incentives to inject more propaganda into training data, raising concerns about AI's role in shaping political narratives.