By Taha Yasseri, published in Verfassungsblog, February 2025
Abstract:
The standardization of collective memory refers to the processes through which shared memories and narratives are shaped, maintained, and transmitted within a society via formal procedures, often with the support of policy.
In the digital age, the advent of artificial intelligence (AI) and Large Language Models (LLMs) such as GPT has significantly affected this phenomenon. While these models offer unprecedented capabilities for processing and generating human-like text, they also pose risks to the diversity and plurality of collective memory.
Here, I discuss how LLMs influence the standardization of collective memory, the potential dangers of their widespread use, and the importance of regulatory frameworks to mitigate these risks.