Null results in scientific research are kind of like non-white people in Hollywood or women wearing clothes above a size 4. It’s true they’re the majority, but you wouldn’t know that from looking at big publications.
Here’s the thing about research: a lot of the time, you just don’t find anything new. If you were to follow the scientific method as impartially as the diagram on your second-grade classroom wall suggested, that’s a perfectly valid outcome. But when it comes to publication, papers with no significant results are more often tossed in favor of more striking, significant data with sexy little p-values* to flash around.
Plus, even if null results make it into a journal, it’s unlikely that normal people will pay attention to them anyway. Have you ever clicked on a trending headline that said “Scientists ran an experiment–and you’ll totally expect what happened next!”?
Null results may not seem exciting or “groundbreaking,” but leaving them out makes for a seriously warped knowledge base. If you try enough times, you’ll get the occasional significant result showing that eating gummy bears prevents earthquakes (about one in 20 times, at a p ≤ 0.05 significance level). And if only those flukes get published, any literature review or meta-analysis is gonna conclude that California residents should adopt a gummy-bear-based diet (which also happens to be the theme of next week’s Dr. Oz show).
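If you want to see that one-in-20 fluke rate for yourself, here’s a minimal sketch (in Python, using only the standard library): it runs thousands of simulated experiments where the null hypothesis is true by construction (pure random noise, no gummy bears involved), computes a two-sided p-value for each with a simple z-test, and counts how often p ≤ 0.05 anyway. The test setup and all the numbers here are illustrative assumptions, not from any real study.

```python
import math
import random

random.seed(0)

N = 30          # sample size per simulated experiment
TRIALS = 10000  # number of independent null experiments
ALPHA = 0.05    # the classic p <= 0.05 significance threshold

def p_value_one_sample(sample):
    """Two-sided z-test of 'mean = 0', assuming known sd = 1."""
    mean = sum(sample) / len(sample)
    z = mean * math.sqrt(len(sample))
    # Two-sided p-value from the standard normal CDF (via erf).
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Every experiment draws pure noise, so any "significant" result
# is a false positive by definition.
false_positives = sum(
    p_value_one_sample([random.gauss(0, 1) for _ in range(N)]) <= ALPHA
    for _ in range(TRIALS)
)

print(f"False positive rate: {false_positives / TRIALS:.3f}")
```

Run it and the rate lands right around 0.05, which is exactly the point: if only those ~5% of experiments get published, the literature ends up looking a lot more gummy-bear-positive than reality.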
There have been efforts in some fields to counteract that bias with null results sections and registries, but that still doesn’t sound cool enough to catch on.
So maybe what we need is a super trendy nulls-only journal–any discipline is fine, but exclusively plus-sized p-values. And if you think that it won’t catch the attention of the science media machine, no need to worry: I’ve got some flashy pop-science headlines to bring the hype:
*No worries if Stats class was your nap time during senior year of high school. In theory, a p-value is basically the probability of seeing results at least as extreme as yours just by chance, if there is no real effect at the population level. In common practice, it’s something that researchers really want to be small (sometimes through statistically questionable means).