This science lesson covers a topic that is of great interest to many in the harm reduction wars: the proper interpretation of anecdotes in scientific inference. Consider how often those who have personally benefitted from harm reduction feel the need to say "I am not an anecdote!" Well, you are not, of course, but your story is. And that is fine. It does not mean that it is not informative.
Anecdotes are far more informative than their detractors claim, but they require much greater care to interpret than simpler sources of information, such as statistics from a representative survey. That interpretation requires real scientific thinking, something that is woefully absent in and around public health topics.
I am using "anecdote" to refer to any n-of-1 study, an observation where the sample size (number of people in the study, n) is just one. These are called "case studies" by those who want to get a publication out of them. They are "adverse event reports" when that is their content. They are sometimes "case crossover studies" (more about that later). They are sometimes biographies. They are called "anecdotes" by those who wish to denigrate and dismiss them, and thus I choose to use that word here, in an effort to take back the lingo.
Of course, anecdotes can be collected, casually or systematically, but this is still just a lot of stand-alone reports, each with n=1. Anecdotes are usually self-reported, though they may be based partially or entirely on externally-observed information. They vary in scope from a single sentence to pages of details. They can be found in social media posts, journal articles, databases, and casual conversations. They are the most ridiculously under-utilized source of data in social sciences, including public health.
A favorite trope among those who wish to dismiss anecdotal evidence is "the plural of anecdote is not data". This is technically true, but only because the "plural" bit is wrong: a single anecdote is data. Suggesting otherwise is just clueless. At the very least, an anecdote is overwhelming evidence that "person X wants us to believe Y." But, um, you might respond, that is not a scientific question I wanted to assess. That is exactly the point. There is never just one scientific question. The mind boggles at how anyone can think, "one particular form of anecdote is not very useful for answering one particular question I am interested in, and therefore anecdotes are universally not useful data." As I always quipped to my students when they asked something like "is X a good/better research methodology", the quality of an answer is highly dependent on what question is being asked.
Related to that, you have probably seen those "hierarchies" of sources of evidence that list anecdotes ("case reports") near the bottom. These are completely wrong from top to bottom: they put meta-analyses at the top, even though these are often complete junk science; they randomly order the middle layers based on absolutely no justification; and they place "expert judgment" as the only entry below anecdotes even though expert judgment is the source of all knowledge. Expert judgment is how we turn the available evidence, which never speaks for itself, into knowledge. It is useful to understand that these junk hierarchies were created for one purpose: to tell physicians and medical students to stop using their "professional judgment" about treatment options and instead look to what the systematic research says works best. But it needed to come across as something other than the (accurate) message, "your inexpert judgment is worthless; stop it!" Unfortunately, this motivation was successfully obscured, and so non-scientists started thinking that the rankings were universal truths even though they are really utter nonsense.
The reality is that the quality of the answer -- which is to say, of the study methodology -- depends on what question is being asked. Sometimes anecdotes offer very high-quality answers. Sometimes an anecdote offers far more useful data than a clinical trial.
To make clear that no simplistic assessment of the quality of evidence can be valid, consider that all of the following are true:
- Anecdotes do not help answer some questions we might want to answer.
- Anecdotes are the best data for answering some questions we might want to answer.
- Multiple anecdotes are basically equivalent to a convenience-sample survey, though they tend to contain much deeper information if someone takes the time to look. The surveys just give the illusion of being "more scientific" because someone calculates some statistics.
- Some anecdotes that are cited as being evidence for a particular causal claim provide no information that actually supports that claim. Zero. None.
- Some anecdotes provide the best imaginable evidence in support of a causal claim.
- Self-reported anecdotes present an obvious opportunity for willful or subconscious misrepresentation of the facts.
- However, the same is true for responses to systematic surveys about similar topics. Moreover, it is impossible to do more than guess whether misrepresentation is occurring in surveys, whereas a richer anecdote often facilitates a better assessment.
- Making inferences based on the equivalent of anecdotes is common across most sciences (though the "1" in n-of-1 is usually not one person), and in some subfields anecdotes are a huge portion of the data. It is really only in medicine-adjacent fields that the silly notion that anecdotes are uninformative appears.
The subsequent parts of this science lesson will expand on these points. If one of these entries seems particularly important or particularly unbelievable to you, please drop a comment and I will endeavor to give it some extra focus.
(As with the previous science lesson tutorial, Part 1 is open access, but Part 2 and thereafter will be premium content for patrons (at any level). If you are not already a patron, please consider becoming one to gain access and help keep me writing here and elsewhere.)