When Perception Becomes Fabrication: The Urgent Role of Communicators in the GenAI Age

June 2, 2025

Before the rise of generative AI (GenAI), misinformation and disinformation were viewed through an essentially human lens—errors, biases, or fabrications propagated through social platforms, politics or media. While dangerous, their origins had a certain logic: they were born from individual or institutional agendas.

Now, we find ourselves in a new information era where knowledge chaos is no longer limited to human error or manipulation but is accelerated by machines. The rise of GenAI presents a profound new challenge: “What if perception becomes not just reality, but fabrication?”

In this shifting landscape, communicators hold the line between informed decision-making and engineered delusion. As corporate communicators and societal stewards, our responsibility to the truth has never been more urgent.

From Misinformation to Machine Fabrication

In a piece I wrote for O’Dwyer’s last year, I called out the “slippery slope” that leads from misinformation to disinformation. We once understood misinformation as accidental—a misquote, a misinterpretation. Disinformation, on the other hand, was strategic—an intentional act of deception. But when these missteps unite in the GenAI era, the boundaries blur even further. Algorithms don’t differentiate between a well-sourced fact and a convincing fabrication. I also explored this theme in a conversation with PRovoke Media’s podcast, Truth, Trust & Trickery.

Today, AI can hallucinate—a term used to describe when generative models like ChatGPT produce incorrect or completely made-up content. These hallucinations are not deliberate lies, but their potential impact is equally dangerous. They reinforce false narratives, misinform the public, and erode trust in institutions.

As communicators, we no longer merely counter misinformation created by humans; we battle machine-generated realities that look, sound, and feel real.

The Death of Knowledge?

My colleague and long-time friend, John Nosta, a leading mind in cognitive intelligence and GenAI, recently penned a powerful provocation: Knowledge is Dead. In it, he suggests that our traditional concept of knowledge is under siege in a world where AI can generate volumes of plausible but incorrect information.

“The line between what is true and what is computationally likely has blurred,” Nosta writes. “We have substituted truth with plausibility.”

In another Psychology Today article, he warns that AI could become “anti-intelligence,” undermining understanding rather than supporting it. When AI produces information divorced from verified reality, it leads to confusion rather than clarity. This anti-intelligence doesn’t just distort facts—it distorts decision-making.

We’ve entered an age of information simulation, where people no longer ask, “Is this true?” but instead, “Does this sound right?”

Corporate Communicators: Guardians of Trust

This moment calls for a shift in how we view corporate communication and responsible business. What was once perceived as a support function is now a frontline discipline. Communicators are responsible for shaping brand narratives and stewarding societal trust. We are the ones who must ask hard questions about source accuracy, fact validation, and the implications of AI-powered content.

Our mandate is no longer simply to be accurate; it is to ensure that the content we distribute is responsibly curated, clearly attributed, and rigorously verified. We must evolve from storytellers into truth-tellers.

Hallucinations and Headlines: Real-World Impacts

Consider recent cases where AI-generated content has gone viral, causing real-world consequences. A fake image of an explosion near the Pentagon caused a brief dip in the stock market. AI-generated voice clones have been used in scams to trick family members into thinking loved ones are in distress.

These are not just curiosities of the digital age—they are warnings. The tools we use can deceive as easily as they can inform. And if we do not set and uphold standards, we risk becoming complicit in the confusion.

A New Information Covenant

As communicators, we must adopt a new covenant—an ethical framework for communication in the GenAI era. This includes:

  1. Transparency – Disclose when AI generates or influences content.
  2. Attribution – Cite sources rigorously; help audiences trace the roots of information.
  3. Verification – Use human oversight to vet and approve AI-generated outputs.
  4. Clarity – Avoid overreliance on AI’s ability to generate fluent prose; fluency is not truth.
  5. Correction – Establish rapid response processes to correct errors or misstatements, whether human or machine-made.

We are not powerless, and we must be principled.

Communication is a Public Health Imperative

Misinformation is not just a reputational threat; it is a public health crisis. We saw its effects during the COVID-19 pandemic, when conflicting claims left people confused and uncertain about what guidance to trust. In this light, communication becomes a public health tool.

Inaccurate information—whether generated by a person or a machine—can erode trust in medicine, science, institutions, and policy. As communicators, our words and messages can either inoculate the public with clarity or infect them with confusion.

The Communicator’s Role in AI Governance

It’s time for communicators to sit at the table in corporate AI governance. The IT and legal teams cannot be the only ones defining how AI is used. Communicators understand audience behavior, language nuance, and narrative impact. We understand context—something AI still struggles to grasp.

Communication professionals must help shape policies that guide AI use, ensuring ethical standards are upheld and public trust is prioritized. We should advocate for tools that support truth, not just efficiency.

Reclaiming the Narrative

What do we do when AI tools fabricate quotes, generate fake data, or synthesize “expert” opinions that never existed? We respond not with fear but with leadership. We own our responsibility to counteract falsehoods with facts, surface real expertise, and elevate the value of human insight.

We are the immune system of corporate and civic dialogue; our vigilance protects society from the infection of disinformation. Silence is no longer golden: it allows AI tools that share incorrect information to become viral dangers to reputation and society.

Final Word: Communication as Conscience

AI is not the enemy; it enables us to augment how we work. But unchecked, it can become a mirror reflecting our digital age's biases, errors, and excesses. It is up to communicators to be the conscience of this new capability and to direct its use toward informing rather than misleading. Communicators have long been the eyes, ears, and ambassadors for institutional objectives and company stakeholders.

The rise of GenAI marks a turning point in the history of technology and communication. It's a moment that demands we recommit to the truth, not as a marketing slogan, but as an operational imperative. In a world awash with plausible fabrications, trust can no longer be assumed, and truth must be asserted.

And that responsibility belongs to us.

POSTED BY: Gil Bashe