
This past weekend in Tulsa, a voice that sounded like my former teacher and longtime mentor, Representative John Waldron, surfaced in a viral audio clip that painted him as someone deeply out of step with the values he’s spent his life embodying. But it wasn’t him. It wasn’t his voice—not really. It was a deepfake.

Within hours, it spread across social media. People reacted, lines were drawn, and opinions were cemented. Only later did the truth begin to surface: this audio wasn’t just misleading. It was synthetic and generated by artificial intelligence.

As someone who works every day at the intersection of society and AI, I knew immediately what I was hearing. The cadence was off, the transitions unnatural. It sounded almost familiar, but ultimately not convincing. And I knew what it meant: we’ve crossed a threshold.

This is no longer theoretical. AI is not coming. It’s here.


And if we don’t put our collective foot on the gas not just to learn it, but to lead with this new technology, our voices, institutions, relationships, and even our elections will be subject to manipulation by forces we can’t even trace, let alone trust.

The Era of Synthetic Reality and Other Societal Disruptors 

Deepfakes are AI-generated media (audio, video, or images) designed to imitate real people so convincingly that they can fool the human eye and ear. But the danger goes far beyond fake videos. AI is also writing malware, impersonating customer service agents, flooding the internet with disinformation, and optimizing scams at an unprecedented scale.

Autonomous agents are being trained to manipulate financial markets, algorithmic bias is quietly reinforcing systemic discrimination in hiring and housing, and AI-generated content is overwhelming platforms that once helped people organize for justice. 

In short, deepfakes are just the tip of the spear. What we’re dealing with is a civilizational disruptor. One that is already reshaping the rules of trust, authority, and social order. That is why we must take action.


Let’s be clear: this incident with Rep. John Waldron wasn’t just a smear campaign; it was a microcosm of the kind of stress tests that democracies across the world will face. And this wasn’t even anywhere close to the most sophisticated example.

From Boardrooms to Bedrooms to Ballots: The Global Toll of AI-Generated Deception

In March 2024, a multinational executive at the British engineering firm Arup in Hong Kong authorized a $25 million transfer after attending a video call with his CFO and several colleagues. The problem? Every single person on the call, including the CFO, was a deepfake. The scammer had used AI to clone their faces and voices. The money disappeared in minutes. That’s not a phishing email. That’s a synthetic boardroom robbery.

In August 2024, in South Korea, female streamers were horrified to discover their likenesses had been non-consensually inserted into explicit AI-generated videos (deepfake pornography) and shared across private Telegram messaging groups.

Lives were upended. Reputations were shattered. Schools nationwide were affected, illustrating how deepfakes can be weaponized to inflict significant personal and psychological harm.


In Slovakia, just days before the country’s 2023 election, a deepfake audio clip surfaced of a leading candidate discussing vote-buying. It went viral. There wasn’t enough time to debunk it before voters went to the polls. The disinformation worked. Democracy lost.

Each of these examples feels like the plot of a dystopian novel. They aren’t. They’re datapoints in the real-time unraveling of public trust and personal safety at the hands of generative AI. And they’re happening not because AI is inherently evil, but because society is unprepared.

We’re Not Ready. But We Can Be

At Black Tech Street and through our Greenwood AI Center of Excellence (G-ACE) via Tech Hubs, we’ve been building a model for what it looks like when a society prepares for AI, not as a luxury, but as a civilizational mandate.

We’ve begun training hundreds of everyday people through our ASPIRE program to see AI not as a mystery, but as a tool. We’re working with city leaders to explore how AI can help address the very real disparities in wealth, health, and education that still plague places like Greenwood. And we are collaborating with technologists to think seriously about AI’s role in security, democracy, and resilience.


But let’s be honest: we’re still outliers. The average American doesn’t know how easily their voice, image, or writing can be cloned. Most businesses don’t have protocols for detecting synthetic fraud. Most school districts haven’t grappled with what it means for a student to submit an AI-written essay or an AI-generated threat.

We need a mass mobilization, not just of training programs, but of public consciousness. Just as past generations built fire departments, air traffic control, and nuclear non-proliferation protocols, we now need AI fluency, AI ethics, and AI governance embedded in every institution from block clubs to ballot boxes.

This is not just about preparing for the next scam. It’s about creating a society that understands the power of AI deeply enough to use it for good and detect when it’s used for harm.

The New Standard: Truth, Dignity, and Verification

Back to Rep. John Waldron. This man has dedicated his life to education, justice, and mentorship, all of which I saw and benefited from firsthand. That’s why I spoke out, not just as an AI practitioner, but as someone who knows him. That voice wasn’t his. But it could have been believable. And that’s terrifying.


Because next time, the target might not have a track record that speaks for itself. Next time, the fake might come just early enough to alter a vote, cancel a contract, or destroy a life.

What we need now is a new cultural reflex:

  • Don’t trust it just because it sounds real.
  • Don’t share it just because it’s juicy.
  • Ask: Was this verified by someone I trust?
  • Ask: Could this be AI-generated?

And most importantly, ask: “How can we build a world where the truth is stronger than the technology used to distort it?”

The Future Is Still Ours to Shape

I believe in AI. I believe in its ability to extend life, reduce suffering, empower creativity, and unlock prosperity. But only if we treat it as a critical societal infrastructure challenge — not a tech trend.


We need forward-thinking legislation, AI education in every school, and media literacy embedded in every campaign. And we need everyday people to see themselves not as victims of this technology, but as authors of its story.

Tulsa, Greenwood, America. We can do this. But we must start by accepting the truth: the age of synthetic reality and deception is already here. The question now is: Will we respond with fear and denial, or with leadership and clarity? For the sake of our voices, our democracy, and our dignity, I pray we choose the latter.

Footnote: Public Deepfake Detection Tools You Can Use Today

While no tool is perfect, here are six publicly accessible platforms and resources that can help individuals identify AI-generated or manipulated media:

  1. Resemble Detect – Industry-leading tool for detecting AI-generated and cloned voices. Accessible via free trial or API request.
    resemble.ai/detect
  2. Deepware Scanner – Free web tool that lets users upload video files or links to detect deepfake artifacts.
    deepware.ai
  3. Hive Moderation – Developer-accessible platform with a free tier for scanning images and videos for synthetic manipulation.
    thehive.ai
  4. Amber Video / Project Origin – Focuses on verifying the authenticity of media through cryptographic provenance rather than detection.
    projectorigin.org
  5. Sensity AI (Threat Reports) – Offers public access to deepfake threat briefings and case studies.
    sensity.ai
  6. InVID WeVerify Plugin – A browser extension for journalists and educators to verify video metadata and detect signs of manipulation.
    invid-project.eu/tools-and-services/invid-verification-plugin
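The provenance approach behind Project Origin (item 4) is worth seeing in miniature: instead of trying to detect fakery after the fact, a publisher records a cryptographic fingerprint of the media at creation time, and any later copy is checked against it. The sketch below is a toy illustration using a keyed digest from Python's standard library; the key name and media bytes are hypothetical, and real provenance systems such as C2PA use signed manifests and certificate chains rather than a shared secret.

```python
import hmac
import hashlib

# Hypothetical publisher key for illustration only.
# Real provenance systems use asymmetric signatures, not shared secrets.
PUBLISHER_KEY = b"demo-secret-key"

def sign_media(media_bytes: bytes) -> str:
    """Publisher computes a keyed digest of the media at creation time."""
    return hmac.new(PUBLISHER_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, claimed_tag: str) -> bool:
    """A later copy is treated as authentic only if its digest matches."""
    expected = sign_media(media_bytes)
    return hmac.compare_digest(expected, claimed_tag)

original = b"audio clip of an official statement"
tag = sign_media(original)

print(verify_media(original, tag))         # True: untampered copy
print(verify_media(original + b"!", tag))  # False: any alteration breaks the match
```

The point of the design is that verification does not depend on spotting visual or audio artifacts, which is an arms race detectors are losing; it depends only on whether the bytes match what the trusted source originally published.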

Disclaimer: No single tool can guarantee accurate detection. Deepfake detection is part of an ongoing arms race; always combine tool use with critical thinking, cross-verification, and source credibility checks.


Tyrance Billingsley II is the Founder and Executive Director of Black Tech Street, a Tulsa-based initiative focused on rebirthing historic Black Wall Street as an innovation economy and building resilient,...