"AI-Generated Image of Kamala Harris" (Photo Courtesy of CNN)
Collin Gallagher
Connector Contributor
In a world where reality and fiction are increasingly blurred, AI-generated images and content have become powerful tools in the hands of those who seek to influence public opinion. From political campaigns to false advertisements, AI-crafted media is being weaponized to twist objective facts and create false narratives. But how far can this technology go? How convincing can it get? And what are the long-term implications of false narratives spreading without readily available AI detection tools?
One of the most notable spaces where AI-generated content takes root is Facebook. With an older demographic that often relies heavily on this platform for news and social interaction, Facebook has become fertile ground for AI-crafted media to flourish. From altered images to entirely fabricated stories, AI-generated content can look startlingly real, making it difficult for users—especially those less tech-savvy—to distinguish fact from fiction.
AI-driven propaganda, such as deepfakes, is becoming a growing problem in politics. For example, after Joe Biden dropped out of the 2024 race, a fake video appeared online showing him slurring his words and saying offensive things. This video, highlighted by PBS News, shows how easily AI can be used to spread misinformation. While some people may see these videos as jokes or imitations, they pose a serious problem, especially for those who rely on platforms like Facebook or X as their main news sources. This is even more of a concern for older users, who may have a harder time telling what's real from what's fake. Another case occurred in New Hampshire, where an AI-generated robocall imitated Biden's voice in an attempt to discourage people from voting in the primaries. These examples show how dangerous misinformation can become as AI technology advances, with political figures as key targets and platforms like Facebook still lacking strong rules to curb the spread of fake information.
According to Sameera Jangala, a content creator for the Student Life office and the university, the manipulation of words and appearances through AI-generated content raises serious concerns. She pointed out that this technology could be especially harmful to older populations, who "are a lot more susceptible to… just kind of trusting things that they see" on platforms like Facebook. This susceptibility, coupled with AI's growing ability to create convincing fake images and videos, makes it easier to deceive these demographics and manipulate their perceptions. Jangala's concerns reflect the larger challenge of misinformation in an age when AI can effortlessly alter reality.
Research shows that the spread of misinformation is beginning to weaken public confidence in trustworthy news sources. In a recent survey, around 64% of respondents said they believe social media has a mostly negative impact on society, with misinformation a major worry. This distrust makes it harder for people to think critically about online content and leaves them more likely to accept manipulated information as true. Misinformation about topics like elections has already led people to question democratic institutions, sowing doubt about election fairness and even affecting voter turnout in some places. To tackle these problems, experts recommend stronger regulations, better detection tools, and expanded media literacy education to help people assess digital content more accurately.
The rise of AI-generated misinformation is having a serious impact on society, from eroding trust in the news to shaping public opinion and how people view democracy. As these technologies keep improving, it becomes harder to tell what's real from what's fake, putting older people, especially those who rely on platforms like Facebook, at greater risk. The spread of false information highlights the urgent need for solutions, including stronger regulations, better detection tools, and media literacy education for everyone. Without these steps, society could face a future where real and fake news are impossible to tell apart, making informed discussion and democratic decision-making much harder. Solving this problem will take a collective effort to ensure AI is used for good rather than as a source of confusion and mistrust.