If you’ve spent any time scrolling through social media lately, you’ve likely seen them. The jarring, all-caps headlines plastered over a grainy, somber image: “LEGENDARY ACTOR ROBERT REDFORD DEAD AT 89” or “FAREWELL TO SINGING ICON CATERINA VALENTE.” For a heart-stopping moment, you believe it. Then you click. The video is a mishmash of old clips, the “news report” is from a channel you’ve never heard of, and the story is riddled with inconsistencies.
This isn’t journalism. It’s a meticulously crafted deception, and it’s flooding our feeds. The recent coverage following the actual passing of Caterina Valente was a case study in this digital morbidness. Despite her family’s explicit wishes for a private farewell, YouTube was saturated with videos claiming to show her “funeral,” accompanied by entirely fabricated biographies and cause-of-death stories.
This raises a disturbing question: What kind of ecosystem have we built online where the exploitation of grief and the blatant lying about a living person's death are not just allowed, but algorithmically rewarded?
It’s Not “Fake News.” It’s Fraudulent Entertainment.
Let’s be clear from the outset. Framing this as a “free speech” issue is a fundamental misdirection. The First Amendment protects citizens from government censorship; it does not protect individuals or corporations from the consequences of spreading harmful, fraudulent content on private platforms.
This isn't political commentary or satire. This is a commercial enterprise built on a foundation of lies. The objective is not to inform or even to persuade—it's to trigger an emotional response strong enough to generate a click. Each click translates into ad revenue, often only pennies per view, yet it scales into a significant business model when you produce hundreds of videos with automated voice-overs and stolen footage.
The harm is multifaceted:

- Grieving families are denied the privacy they explicitly requested, as fabricated "funeral" footage circulates against their wishes.
- Living people are falsely declared dead, with invented biographies and causes of death attached to their names.
- Audiences are defrauded of their trust: every hoax makes the next piece of real news harder to believe.
- Advertisers unwittingly bankroll the operation, turning deception into a scalable revenue stream.
Where is the Line? And Who is Responsible for Drawing It?
A common reaction to this predatory content is a call for government intervention. The sentiment is understandable: “The authorities should ban this!” However, mandating that the government decide what is “true” or “false” in public discourse is an incredibly dangerous path, one with historical precedents that should make us all cautious. The cure could be far worse than the disease.
The more pertinent question is for the platforms themselves. YouTube, Facebook, TikTok, and X (Twitter) are not public utilities; they are multi-billion dollar corporations with extensive Terms of Service that already prohibit misinformation, harassment, and spam.
So why does this content persist? The uncomfortable truth is that it generates engagement. The platforms’ algorithms are designed to maximize watch time and clicks, and sensational, emotional content—even if fabricated—performs exceptionally well. There is a perverse incentive structure in place.
Beyond the “Ban Hammer”: Rethinking Digital Citizenship
Simply demanding the government “shut it down” is not a pragmatic or liberty-preserving solution. However, advocating for more robust and responsible systems is. The idea of a “license” for social media, while logistically fraught and smacking of overreach, points to a valid underlying desire: accountability.
Perhaps the solution isn’t a government-issued license, but a shift in platform design that incentivizes verified identity and provenance. What if:
- Monetization was gated behind stricter identity verification? If an account wants to run ads, the platform must know exactly who they are, making it easier to cut off the financial incentive for bad actors.
- Algorithmic amplification was denied to unverified sources? A video from the Associated Press could be treated differently by the algorithm than one from “CelebrityNews4455.”
- Platforms invested far more heavily in human moderators who understand cultural context and can spot this specific brand of predatory content?
This isn’t about silencing speech. It’s about removing the financial fuel from the engine of deception. It’s about platforms taking responsibility for the environments they have architecturally designed to favor outrage and lies.
The digital world we inhabit is what we—as users, advocates, and consumers—allow it to be. By refusing to engage with this content, by reporting it aggressively, and by demanding that the multi-billion dollar companies hosting it enforce their own rules, we can begin to drain the swamp.
The desire to be entertained should never come at the cost of our basic humanity. Profiting from lies about life and death isn’t entertainment. It’s predation. And it’s a stain on our digital world.
Disclaimer: This blog post is an opinion piece based on observable online trends and reported events. It is intended to stimulate discussion on media ethics and platform accountability. All claims are made within the context of fair comment and public interest.
