Academic fraud in the media | HBR Talk 140

Last week on HBR Talk, we looked at how ideological activists can manipulate research to manufacture data sets in support of false narratives. Much of this is done either by limiting the method by which data is collected in order to exclude information the researcher doesn’t want to consider, or by filtering the data through carefully manipulated definitions, standards, and classifications, either to select out that information or to reshape it to fit the researcher’s foregone conclusion. If that sounded familiar, it should - that’s part of the manufacture of propaganda. The question is, what makes it useful as such?

That’d take us right back to media manipulation.

One cute trick is emphasizing female experiences, no matter who is impacted by a particular issue. An example is media reporting on homelessness. A well-known meme in the men’s rights community shows a headline or graphic from an unknown article stating, in big bold letters, “1 in 4 homeless are women!”
Yeah? Who are the other 3?

Data from hudexchange.info shows that in 2019, approximately two-thirds of homeless people in the United States were men or boys, yet the media emphasizes the percentage who are female.

Homeless.org.uk reports that in the UK, 82.8% of people estimated to be sleeping rough in autumn 2019 identified as male, but they had a weird way of showing it. Their displayed graph stated that out of the 4,266 total, “614 people identified as female.” What about the other 3,652?
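
As a quick sanity check on how those figures were presented, here is a minimal sketch that simply re-runs the arithmetic on the numbers quoted above; the figures come from this paragraph, not from the original dataset, so treat it as an illustration rather than a reproduction of the source data.

```python
# Arithmetic on the autumn 2019 rough sleeping figures quoted in the text above.
# These values are copied from the paragraph, not pulled from the original dataset.
total = 4266          # total people estimated to be sleeping rough
female = 614          # "614 people identified as female"
male_share = 0.828    # "82.8% ... identified as male"

not_female = total - female
print(f"Not identified as female: {not_female} "
      f"({not_female / total:.1%} of the total)")
print(f"Identified as male: roughly {round(total * male_share)} "
      f"({male_share:.1%}); the small remainder is recorded as unknown/other")
```

In other words, the group the graphic chose to headline is the smallest slice of the data, while the roughly 3,652 people not identified as female go unmentioned.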

One would think that the issue of homelessness has such an impact that all affected demographics would be important… but if one were going to focus on a single gender demographic, why the minority instead of the majority? What makes homeless women more important than homeless men? 

Another trick is reporting on research related to social issues from a perspective that treats ideological viewpoints as universal truths, using political buzzwords as shortcuts to communicate that: referring to housework as “unpaid labor,” for example, or stating unequivocally that there IS a “wage gap” when reporting on research into differences in earnings between male and female workers. There is a difference between this and the use of technical jargon, which can be confusing but is not deliberately manipulative. Technical jargon is designed to accurately describe the thing it names. For instance, in medical care, there is a tool for documenting and tracking patients’ medications as administered by staff. It’s called the Medication Administration Record, often referred to by the acronym M.A.R., pronounced “mar” by most staff I’ve worked with.
That’s jargon.

Buzzwords are not jargon. Rather than being characterized by simple accuracy, they are used to portray unproven assertions as valid, hiding the questionable nature of a concept behind a catchy label.

We know the term “wage gap” hides the impact of differences in work-related choices on earning capacity. The term “unpaid labor” hides the false comparison between work you do for yourself during your time off, and work you do for your employer. That label is used to make it sound reasonable to take housework into consideration when discussing what constitutes “equal work” on which equal pay must be based, as if your employer should factor that into your pay rate. It bypasses the lengthy and convoluted arguments it would take for the writer to persuade you to accept that idea as a valid argument.

Another means of treating questionable claims as accepted truths is publishing a narrative about a particular person or concept without linking to any real source material. Instead, the media refers to self-proclaimed experts on the topic, as in establishment media’s representation of Paul Elam’s “If you see Jezebel” article. Instead of linking to the article and discussing its contents, writers cite existing feminist articles about it. Tracing the links from article to article never leads to a direct citation of Paul’s original work, but instead to a blog post by David Futrelle, whose take on the article is a twisted, context-ignoring hack job intended to smear rather than to address anything Paul actually said. The same style of smear campaign was directed at researcher Murray Straus after he began studying gender symmetry in partner violence, a topic that offended feminist researchers. His work was labeled controversial and buried under a mountain of editorial arguments citing other editorial arguments, without substance and without direct reference to the research itself.

Similarly, writers may refer to studies without actually naming or linking to them, to bolster a narrative the study may not support, or to shield questionable research from examination. By writing about it without linking to it or providing its title, referring instead to “studies,” or to “a study by” the name of the lead researcher, the writer makes it a challenge for any interested reader to track down and evaluate the original report. 

Media also often acts as a distributor of academic feminist propaganda, and as a gatekeeper of information. An example is the media response to the release of data from the 2010 National Intimate Partner and Sexual Violence Survey (NISVS), a study done for the Centers for Disease Control and Prevention. Data from that survey was used to create a set of reports, summaries, and infographics. As with the media behavior we’ve discussed over the last two weeks, initial reporting on the 2010 NISVS could almost be classified as copypasta, the same text being copied, pasted, and re-shared across the internet. Statistics from the two-page summary appeared in articles across the bigger media outlets, presented with little comment and even less challenge.

There were immediately noticeable problems with the report. It used Mary P. Koss’s survey method, a tool that has been criticized since its creation in the late 1980s for using vagueness in its questions as a means of inflating rape statistics rather than gathering data on the incidence of the crime. Notably, when asked as part of her initial survey, nearly three-quarters of Koss’s respondents disagreed with her assessment of their experiences as rape, and nearly half went on to have sex again with their alleged perpetrators. Koss’s response was to remove those questions, and the survey has been used without them ever since.

The survey used Koss’s definition of rape, which was tailor-made to exclude female perpetration against male victims. This was deliberate, as Koss explained in her 1993 paper, “Detecting the Scope of Rape: A Review of Prevalence Research Methods,” where she wrote, “It is inappropriate to consider as a rape victim a man who engages in unwanted sexual intercourse with a woman.” Imagine the public outrage if the genders in that statement were reversed!

It ignored the known issue of false negatives (failure to report) in trauma reporting, a problem that increases over time, as Alison Tieman pointed out in her article, “Manufacturing female victimhood and marginalizing vulnerable men,” where she also noted that

  1. People’s recall of trauma deteriorates over time, leading to a higher risk of false negatives in answers to lifetime or long-term surveys measuring incidence of trauma, and
  2. Surveys on childhood sexual abuse indicate that this effect is far more pronounced in men than in women, with both sexes failing to disclose incidents documented to have occurred, but with men disclosing at about one-fourth the rate of women’s disclosure.

For the NISVS, this combination of issues means that lifetime prevalence statistics may be compromised both by inaccurate definitions and by a huge disparity in false negatives between male and female respondents, in addition to inaccuracies caused by the vagueness of Koss’s survey questions. The statistics laid out in the publications released by the CDC may be wildly inaccurate, and may carry very different implications once that definition issue is corrected, yet they were widely published by big media outlets without question because they supported a narrative. Criticism of the research by the antifeminist and men’s rights activist communities was largely ignored until it was confirmed by feminist researcher Lara Stemple, who published a report on the topic in 2014, after four years of media promotion of the original CDC report. Now that the topic can be framed as a feminist issue, some outlets have shown interest in it, though most are still reporting the CDC’s numbers without question.
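
To see how much a disclosure gap of that size can distort survey results, here is a minimal sketch using purely hypothetical round numbers; these are not figures from the NISVS or from the research Tieman cites, and the sketch only illustrates the mechanism of differential false negatives.

```python
# Illustrative sketch only: hypothetical round numbers, not data from the NISVS
# or from the childhood-abuse research mentioned above. The point is the
# mechanism: if men disclose past incidents at roughly one-fourth the rate
# women do, a disclosure-based survey will understate male prevalence far more
# than female prevalence.
true_prevalence = {"women": 0.20, "men": 0.20}   # hypothetical: equal true rates
disclosure_rate = {"women": 0.60, "men": 0.15}   # men disclose at ~1/4 women's rate

for sex in true_prevalence:
    measured = true_prevalence[sex] * disclosure_rate[sex]
    print(f"{sex}: true rate {true_prevalence[sex]:.0%}, "
          f"survey would measure about {measured:.0%}")
# women: true rate 20%, survey would measure about 12%
# men: true rate 20%, survey would measure about 3%
```

Under those assumed numbers, a survey would report an apparent four-to-one gap between women and men even though the hypothetical underlying rates are identical, which is why the disclosure disparity matters so much when reading lifetime prevalence claims.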

Is it any wonder that we keep hearing that journalism is dead?

This Thursday on HBR Talk, we’ll be examining media manipulation that uses misrepresentation or false framing of research and other academic writing, and discussing how to navigate the information maze that dishonesty creates. The discussion streams on multiple platforms. You can tune in at 7:30PM Eastern via the above link, or find other viewing and listening options for that time or later on badgerfeed.com.

https://files.hudexchange.info/resources/documents/2019-AHAR-Part-1.pdf

https://www.homeless.org.uk/facts/homelessness-in-numbers/rough-sleeping/rough-sleeping-explore-data

https://www.cdc.gov/violenceprevention/pdf/nisvs_report2010-a.pdf

https://archive.vn/xhL97

https://avoiceformen.com/false-rape-culture/manufacturing-female-victimhood-marginalizing-vulnerable-men/

https://web.archive.org/web/20120201203258/http://www.genderratic.com/?p=716

https://avoiceformen.com/feminist-violence/if-you-see-jezebel-in-the-road-run-the-bitch-down/

https://avoiceformen.com/featured/october-is-the-fifth-annual-bash-a-violent-bitch-month/

https://web.archive.org/web/20120305030615/https://pubpages.unh.edu/~mas2/V74-gender-symmetry-with-gramham-Kevan-Method%208-.pdf

https://web.archive.org/web/20190505152108/http://theliberalists.net/2018/12/30/the-duplicity-of-feminist-researcher-mary-koss/
