The making of a vaccine misinformation meme

A rubber stamp. Illustrated | iStock

If you are Very Online, or simply possessed of an active Twitter or Facebook account, you have undoubtedly seen a false meme. Quotes wrongly attributed to historical figures, fabricated political "facts" presented by a bespectacled Minion, pixelated, context-free screenshots of ostensible news articles or peer-reviewed studies or statements from politicians — it's all out there, and you've probably encountered it, whether or not you realized it at the time.

Have you ever wondered how this stuff gets started? Sometimes it's deliberate, of course: some people lie and obfuscate on social media on purpose. That's how the QAnon conspiracy movement began — some person or persons started telling lies on the 4chan message board — and it's the form a lot of foreign election meddling has taken.

Sometimes, however, misinformation is made more organically, less by deliberate deception than by some mix of gullibility, sincere worry, and perhaps a bit of trolling gone awry. These origin stories usually go undiscovered, because by the time most of us see the meme, it has already spread so far that its roots are hidden by its growth. But this week, I chanced to see one such genesis happen in real time. This is the story of the making of a vaccine misinformation meme.

We begin with Sam Ghali, a doctor and assistant professor of emergency medicine in Florida. He has a verified Twitter account with a large following, and last Thursday, he posted this tweet:

His account of intensive care unit (ICU) patients regretting their failure to get a COVID-19 vaccine was clearly written to encourage vaccine acceptance, and the post began to go viral.

The next day, Friday, another user quote-tweeted Ghali's post:

"Copy pasta" is an alternate spelling of copypasta, internet slang for a block of text copied over and over online (and often presented as one's own writing, not content copied from elsewhere). It's like a meme, but only text, no image.

I don't know whether this user intended his comment as an accusation or a command, but at least one other person took it the latter way. He quote-tweeted the quote tweet and made Ghali's words a copypasta:

Another user also shared Ghali's words in copypasta style, though given the context — including the themes in his other posts, the fact that he retweeted the original Ghali tweet, and the fact that he doesn't follow the guy who said "copy pasta" — my guess is he simply intended to share the story and did so in an awkward way. But regardless of intent, he posted it:

A third user also shared Ghali's words a little later, just past midnight, and my educated guess is that it was a deliberate copypasta. I can't be sure, though, because his account is currently private, so I've only seen that tweet in screenshots.

Mere minutes after the private, post-midnight share, an early Sherlock was on the case. He started a thread documenting the replicating tweets very early Saturday morning:

That thread includes all four tweets available at the time: the Ghali original, the quote tweet of the quote tweet, the possibly awkward share, and the now-private share.

And speaking of screenshots, here we come to when I first stumbled upon this nascent meme. It was Monday evening, and someone I follow — I didn't note who it was at the time — somehow interacted with another detective post, and the interaction appeared in my Twitter feed. This post showed images of three tweets: the Ghali original, the quote tweet of the quote tweet, and the now-private share. The screenshots are cropped so the usernames are partially obscured and only two timestamps are visible, and they're ordered so Ghali's is last, suggesting (whether intentionally or not, I don't know) that Ghali posted third, though in reality, his post came first.

This user raises the possibility of misinformation more explicitly than the Saturday morning Sherlock, and when I first looked at the screenshots, I was troubled. Was it some sort of astroturf pro-vaccine message? I'm all for vaccination, but lying to the public is no way to encourage hesitant people to get their shots. It's also deeply stupid in the internet age, when duplication is so easily discoverable.

I scrolled to the replies, hoping for some explanation. The top reply Twitter showed me linked to a federal registry of clinical trials that described a study about vaccine messaging at Yale University:

At first glance, Ghali's post does seem like it could be part of this study. But look deeper and you'll notice there's no suggestion this research is being done in the wilds of Twitter, with prominent, verified users using hundreds of thousands of people as guinea pigs. On the contrary, the page describes a study conducted with "recruited" participants. That is, it's being done in a controlled setting with people who know they're participating in messaging research.

And think about it: If Ghali's post were part of this study, how would the researchers collect their data? They need to know how respondents felt about vaccines before encountering the message and whether the message in question changed their thinking. There's no way to collect that data by just dumping copypastas on Twitter. Ghali's post isn't connected to the study, nor are its copies.

But the screenshot post has 12,000 likes and 6,000 retweets as of this writing, and the study suggestion is still showing up as a top reply with hundreds of likes. As posts like these gained traction, people who apparently believed they'd found a disinformation campaign or nefarious propaganda research began sharing the copypasta themselves, sometimes leaving its text intact and sometimes tweaking it a little to indicate it was a copypasta or to more explicitly insult vaccine advocates. At least one used it to promote an indie band by glomming onto a much-searched phrase. Here's an example of each variant, in that order:

At the same time, more screenshot aggregations started going around, like this one:

The meme also jumped over to Facebook in this aggregate form. Here's one example:

I tracked down all six of the users in this collection of screenshots on Facebook. Three of them quote-tweeted prior copypastas when they posted their own, a clear giveaway of their intent in sharing. From that and other context on all six profiles, it appears every single one shared the copypasta to troll or dilute what they thought was a pro-vaccine disinformation attempt.

By "dilute" I mean two things: First, if the same post is repeated just a couple of times and people only see one instance of it, they may believe the message and get the vaccine. But if they see it hundreds of times, they know something's fishy, even if they can't determine exactly what's wrong. The ostensible disinformation campaign is thus rendered ineffective the more people participate.

Second, if these users and others like them put themselves in the search stream for this meme, other people curious about it may click through to their profiles. Then, in many cases, those people will be exposed to anti-vaccine commentary and the supposed propaganda will be further undermined. This means the Facebook user alarmed by the copypasta and the Twitter users whose posts she screenshotted are on the same side. They are the ones perpetuating the imaginary propaganda campaign they fear.

So now the meme is spreading all over. Twitter has possibly taken notice, because the site appears to limit how many iterations of the copypasta you can find using its internal search. I can only bring up a couple dozen of these tweets if I search for the latest posts that include "We are officially back to getting crushed by COVID-19." They're all low-engagement copypastas from within the last 10 hours. Ghali's original only shows up in the "top" results tab, and many of the posts I've shared here don't seem to appear in any of Twitter's search results.

This might just be an automatic function of how Twitter's search code handles duplicative posts, but notice how that restriction has done nothing to keep the misinformation from spreading. On the contrary, it's made it more difficult to reconstruct what happened and discover where the falsehood began. Had I not happened to pitch this story to my editors on Tuesday, including almost all the links to relevant posts before Twitter started this seeming crackdown on the copypasta, I would've had a much more difficult time telling this story of how the misinformation was made.

The last two tweets I'll share are the funniest part of this whole thing, provided you're able to muster a chuckle at epistemic crisis. First, replies to Ghali's original post now include accusations that he is a fake doctor who copied the copypastas that actually copied him (or, perhaps, that he set up sockpuppet accounts to propagate his own propaganda):

And finally, remember the Twitter user who quote-tweeted the "copy pasta" quote tweet? He later noticed the copypasta spreading and reconsidered what he'd done — no, not the way you're hoping:

He ruefully confessed his unwitting participation in "an active psyop" he helped create.
