dmurray 5 years ago

There are at least a couple of others:

Journal of Articles in Support of the Null Hypothesis (multidisciplinary, dominated by the social sciences) [0]

New Negatives In Plant Science (biology, discontinued) [1]

Journal of Negative Results (ecology) [2]

At first I thought JASNH was more an art project than a serious journal. Now I'm not so sure.

[0] http://www.jasnh.com/

[1] https://www.journals.elsevier.com/new-negatives-in-plant-sci...

[2] http://www.jnr-eeb.org/

  • vanderZwan 5 years ago

    > At first I thought JASNH was more an art project than a serious journal. Now I'm not so sure

    Perhaps it started out as such? Even if it did it represents a legitimate need, and if there is any field which could embrace an arts project and take it seriously it's the social sciences. I for one would be happy if they did!

    (Thanks for the links!)

netcan 5 years ago

Academic journals, in general, need to evolve for a lot of fields.

Journals are too closed and a bit crusty, but I think they work well for the hard sciences and possibly also for philosophy. That's not surprising, since they evolved around those fields. But for the "soft sciences" (economics, psychology, etc.), I don't think the system works very well. Possibly the same goes for some areas of health and medical research.

First, they have a lot of problems in practice. The replication crisis is a big example, and it hit psychology most severely. Economics has a similar issue, which largely comes down to ex post hypothesizing.

There are also fairly serious issues with generalizability. Even if the results of some economic or behavioral study are sound, can they be generalized past the narrowest terms of the experiment?

The publishing system (or maybe "publishing" isn't even the ideal frame) needs to work in a way that promotes the accumulation of evidence, data and replication, but the current system seems to be producing sprawl.

I'm not sure what the answers are, but I'm glad to see there's an interest in change.

Vinnl 5 years ago

The effort is admirable, but I don't see this fixing the problem. An important part of the evaluation of academics is the reputation of the journals they publish in, and that reputation is primarily driven by the "impact" of their publications, in the worst cases measured by the Impact Factor [1]. Negative results usually just don't have what is understood as impact by evaluators. This means that there is little incentive to actually publish negative results here, and that those results will likely always be disproportionately outnumbered by selective results - unless the evaluation system changes.

In other words: I don't think scarcity of publication venues is the main reason few negative results get published.

(Disclosure: I'm involved with another project that hopes to alleviate the problem. [2])

[1] https://medium.com/flockademic/the-ridiculous-number-that-ca...

[2] https://medium.com/flockademic/why-replication-studies-are-n...

  • dsr_ 5 years ago

    I think it would be reasonable for a paper to cite the prior art showing negative results when:

    - the paper is a meta-analysis, or

    - the paper shows a contradiction of prior negative results, or

    - the paper's hypothesis explains the prior negative results

    Normalization of publication of negative results is also useful as a goal of its own; consider that a hundred-person high energy physics experiment probably has room for two hundred negative-result papers along with the two winners, as long as they properly pre-register their hypotheses.

    • Vinnl 5 years ago

      That is fair, and I think you're right. That said, I think those cases are uncommon enough that it's still practically guaranteed that negative results will not rack up citations, and hence there will still be very little incentive to publish them. There are some exceptions, of course, but I think those would get published now anyway.

      In other words: I don't think a scarcity of publication venues is the main reason few negative results get published. [1]

      But absolutely, it should be normalised. I also do think preregistration would be an effective tool in making that happen, if funders start requiring it - which is a challenge in its own right.

      [1] I think this summarises my views best, so I'm adding this to my original comment as well.

  • the_duke 5 years ago

    This is nice and all, but how is a "like button" going to help this situation in any way?

    The root causes don't have to do with informal peer recognition, but with a public, measurable kind of recognition, expressed in publications and citations. Citations actually already are an implicit voting system for quality work (in theory...).

    Encouraging journals to publish more "boring" research (and, if need be, creating dedicated journals) could have an actual impact.

    ===

    Edit: what would be more interesting, imo, is something similar but more extensive: a decentralized, public peer review system.

    • kd0amg 5 years ago

      > Citations actually already are an implicit voting system for quality work (in theory...).

      I'm not sure this is even true in theory. The decision to cite something looks to me to be more about relevance than quality. Citing a paper says, "they investigated a related question," rather than "their investigation was especially rigorous/thorough."

      > Encouraging journals to publish more "boring" research (and, if need be, creating dedicated journals) could have an actual impact.

      Only if tenure committees and funding agencies consider the "boring" research sufficiently important.

    • Vinnl 5 years ago

      (For those reading along: the_duke is talking about the project I'm involved with, https://plaudit.pub .)

      I think you're right about the root cause; that's why Plaudit's endorsement data is free and open data, publicly available through CrossRef. Citations indeed are a similar implicit voting system, but they have the "problem" that they only accumulate over a larger time scale: an article first has to undergo peer review, then get published, then get read, then get used, and then people using it also need to get their results reviewed and published. We're talking several years here.

      That's why evaluators often use the journal name/Impact Factor as a proxy for "expected number of citations". With Plaudit, research can start accumulating endorsements from the moment the preprint is published, so it could serve a similar role.

      (And since Plaudit endorsements judge impact and robustness separately, articles and the journals that publish them can also get recognition for "boring" research - which is not possible using the Impact Factor.)

      ===

      Luckily, there are plenty of initiatives for alternative public peer review systems. There are also tons of "blockchain for science" projects (caveat: most appear to consist of a single whitepaper), although it's not clear to me what science-specific problem the decentralisation solves.

      If you'd like some pointers to some specific such projects, let me know.

      • the_duke 5 years ago

        The goal is laudable.

        My main point of criticism is that it is essentially a like button that requires almost no effort. I can see the end result being researchers just trading "likes" without much consideration.

        It could at least require a public text justifying your approval of the paper, with a certain minimum length (100+ characters). This would be something like a "peer review light".

        • Vinnl 5 years ago

          Yes, this is the main potential problem. The hope is that this is combated somewhat by endorsements being completely transparent, traceable and open data, as opposed to the opaque process of peer review - which has its own problem of nepotism.

  • thanatropism 5 years ago

    Part of the problem is actually in the process of being solved: the social role that Holy Immaculate Science had been acquiring.

    I know this sounds a bit reactionary in times of antivaxers and even flat-earthers, but we cannot afford to trust "Science" in the way we had been doing up to the mid 2010s. How much social policy was being enacted in the name of p-hacked psychology and social science? How much of our baseline ideologies?

    Even the first announcement of results from the LHC was botched. Now, you may say "great, this is science, it's supposed to be conjectures and refutations". But then we can't trust it as a guarantee of ground truth. Scientific papers can't be used in service of internet debates, etc.

    • umvi 5 years ago

      > How much social policy was being enacted in the name of p-hacked psychology and social science? How much of our baseline ideologies?

      I'm interested in how often some interest group pays for p-hacked results so that it can use them to strengthen its position in some way.

      In my opinion, it's very hard to have good science whenever there is external money in the picture. The temptation/pressure to slightly tweak the variables or scope of the study to achieve a desired outcome for the benefactor is just too high.

      The worst part is that this allows people to strengthen their biases by cherry picking "scientific studies" that seem to agree with their position even if intuitively something doesn't seem right. And science is the highest authority you can appeal to, so there is no way to refute it short of funding your own, better study that arrives at the opposite conclusion.

    • Vinnl 5 years ago

      In what way does more scepticism towards science solve the lack of publication of negative results?

mikorym 5 years ago

This is very relevant in biology. Every year, countless Honours and MSc students try to achieve "results", and the majority of them probably get a negative result first, or, if you will, the empty result.

And there is nothing wrong with that.

In fact, it is very useful. Any result that either proves nothing new or doesn't prove or disprove anything strengthens our confidence in the existing knowledge base. This should even happen at school science fairs: if a student presented me with a study showing that bicarbonate of soda does nothing to help plant x grow, I would probably rate it highly as long as the content is consistent.

It sounds stupid to students, but "proving nothing" is not nothing. It is actually another grain of rice on the scale for topics that have not yet produced clear results. And sometimes it's more than a grain: it may be a confirmation of old results that, had you not redone the experiment yourself, would remain an abstract or somewhat removed prospect.

  • raxxorrax 5 years ago

    Agreed, and additionally people seem to hold their bachelor's and master's theses to rigorous standards. My bachelor's thesis was better than my master's, but neither is worth anything. Not topically, not didactically, not scientifically. I hope nobody ever reads them again. I still got good grades.

    I was already employed while writing both and really didn't give them my full attention. I doubt you can reasonably expect these works to be anything more. At least not if the focus isn't an academic career with further degrees.

    Maybe learning to write something formal is worth it in itself, but the results are bound to be disillusioning from a scientific standpoint. Or maybe it is the exceptions that are the goal here.

    • newsoul2019 5 years ago

      If you ever find yourself in the presence of something novel that you need to report on, your prior training and experience will make your report or writeup more accurate, reliable, and trustworthy.

  • netcan 5 years ago

    One reason it's such a problem is that many studies are observational rather than experimental. There's a lot more room for ex post hypothesizing (first look at the data, then pose a hypothesis), which invalidates statistical confidence.
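
    A toy sketch of why that is (made-up Python, not from any real study): scan enough candidate variables in pure noise and a few of them will come out "significant", even though nothing is there.

      # Toy illustration of ex post hypothesizing: scan pure-noise data
      # for "significant" correlations, then report the best ones.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      n_subjects, n_variables = 100, 50

      outcome = rng.normal(size=n_subjects)                     # random outcome
      predictors = rng.normal(size=(n_subjects, n_variables))   # random predictors

      p_values = [stats.pearsonr(predictors[:, j], outcome)[1]
                  for j in range(n_variables)]

      print(f"smallest p-value among {n_variables} noise variables: {min(p_values):.4f}")
      print(f"'significant' at 0.05: {sum(p < 0.05 for p in p_values)} variables")
      # Expect roughly 2-3 spurious "findings" despite zero real effects.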

    • inlined 5 years ago

      This reminds me of a story that I unfortunately forget the citation of:

      The military wanted to conduct an experiment to see if urban or rural recruits would have an easier time matriculating. They felt the hypothesis was obvious: rural recruits would be more “rugged” and handle extreme environments better.

      The experiment showed the opposite—urban recruits were matriculating better. Suddenly the conclusion was “obvious” as well: urban recruits were more accustomed to rigid lifestyles and could handle the orders barked at them.

      The lesson is that if two opposite results could be “obvious”, then neither is. Observational studies can very easily fall into this trap.

      • netcan 5 years ago

        That story is very close to something Karl Popper could have said. He would have called it pseudo-science.

  • duxup 5 years ago

    It's just like troubleshooting and debugging.

    The list of what didn't fix the problem or didn't change anything (or changed it only a little) .... can be the thing that points to what actually does.

    In that way non results are as important as anything else.

mathgenius 5 years ago

Two other places where we should be rewarding boredom: finance and politics. There's no reason why good, important work should be correlated with excitement or interest.

whack 5 years ago

I agree with the other commenter that this is a great step forward, but it still doesn't solve the problem of academics being incentivized to publish in the most prestigious journal possible. Journals like SURE are, by design, not going to be nearly as prestigious as others that publish more "sexy" results.

It would be far better if the most prestigious journals pre-committed to publishing studies based on their methodology and hypothesis, before seeing their results.

  • mic47 5 years ago

    > "problem of academics being incentivized to publish in the most prestigious journal possible."

    Is this a problem? If you have interesting, ground-breaking, or just really surprising results, what is wrong with trying a more prestigious journal? The problem is not that there are journals that take only "interesting" results, but that a lot of research is not published at all, just because the results are as expected. And this journal fixes that.

    > It would be far better if the most prestigious journals pre-committed to publishing studies based on their methodology and hypothesis, before seeing their results.

    This is quite an interesting proposal. It could work well for the experimental sciences, but less so for fields like math, computer science, or theoretical physics.

    • Nasrudith 5 years ago

      The problem isn't where it is published but one of institutions and incentives, and their impact on careers and the work itself. If looking for spurious correlations to get sexy results is the way to get tenure, you are going to see a lot more sloppy science regardless of how harshly you punish it.

      We don't (or shouldn't) trust results produced under major conflicts of interest. Being caught taking bribes to influence results would be a blacklist-worthy move, but institutions of all sorts effectively do the same thing with more indirection, accidentally or otherwise.

      Regardless of the aims, such pushes can create bad science. Behind many a private testing scandal was a blind push to improve throughput and cut expenses with no regard for accuracy.

      I definitely don't have an answer to how to restructure it better - much less a way that is practical or able to get any political acceptance.

    • Vinnl 5 years ago

      > Problem is not that there are journals that take only "interesting" results, but that a lot of research is not published at all, just because results are as expected. And this journal fixes that.

      It does not fix that because, as GP mentioned, researchers are incentivised to publish in prestigious journals, which this journal won't become. Thus, negative results still won't get published.

      • w0m 5 years ago

        I've always thought academics need volume as well as quality publications for tenure; this looks like a landing spot for cutting-room-floor CV-filler papers that would normally get ignored. It would still be helpful for career progression at good universities.

    • xondono 5 years ago

      If there are journals that only publish interesting results, those journals get more attention.

      Researchers trying to score points aim to be published by these.

      This gives them incentives to try new stuff, to disregard boring results (like negative results) and, for those so inclined, to cheat.

      These behaviours tend to push up the journal's profile, at least until something like the replication crisis hits.

      It’s a positive feedback loop in all its glory.

JoeAltmaier 5 years ago

Journals can act as filters over the study space, selecting for low-p-value (statistically significant) results. A certain number of these results occur by chance and will prove irreproducible. That can mean, depending on typical study sizes, that a good fraction of everything in a particular journal is wrong.
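
As a rough back-of-the-envelope illustration (the numbers below are assumptions, not from the article): suppose only 10% of tested hypotheses are true, studies have 80% power, and journals accept anything with p < 0.05.

    # Back-of-the-envelope false-discovery arithmetic (all numbers are assumptions).
    prior_true = 0.10   # fraction of tested hypotheses that are actually true
    power      = 0.80   # chance a real effect reaches p < 0.05
    alpha      = 0.05   # chance a null effect reaches p < 0.05 anyway

    true_positives  = prior_true * power          # 0.08
    false_positives = (1 - prior_true) * alpha    # 0.045

    share_wrong = false_positives / (true_positives + false_positives)
    print(f"share of published 'significant' results that are wrong: {share_wrong:.0%}")
    # Roughly 36% under these assumptions, before any p-hacking is factored in.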

olalonde 5 years ago

Funny how the acronym spells out "sure" (guessing it was deliberate).

  • chris_wot 5 years ago

    Funny how no other journal has called themselves Series of Highly Impressive Trends.

    • hackerpacker 5 years ago

      Funny, this coming from Vox.

      • Symmetry 5 years ago

        Vox publishes its share of viral clickbait, but it also publishes some quite good investigative journalism, like Sarah Kliff's work on hospital billing. There are writers at Vox who I think are bad, but also writers good enough for me to put them in my RSS reader. There's less editorial control at Vox, which means the bad is worse but also the good is better. And the bad is probably what goes viral on Facebook to pay for the stuff I consume.

        • anthuman 5 years ago

          Vox is the left's version of Infowars or Breitbart. Why Vox is allowed here but its polar opposites are not is beyond me.

          Vox and "good journalism" are not words that belong together. Vox is like most "news" today - pure biased agenda. Hopefully, in a few years time, a recession washes away all the trash rotting in the media space today.

GershwinA 5 years ago

Nice, I never thought there was such a problem to begin with, but it makes sense. It goes broader than just "boring" study results: clickbait and controversial topics flood the news feed and dull critical thinking. Reading through something that seems boring at first may give interesting results; I learned that first-hand studying Hegel's dialectics :D

  • dao- 5 years ago

    > Reading through something that seems boring at first may give interesting results; I learned that first-hand studying Hegel's dialectics :D

    Nice. If you enjoyed that, you may want to check out Adorno's Dialectic of Enlightenment and/or Negative Dialectics.

inlined 5 years ago

Hopefully journals like this will be a huge help for meta-analyses. As the article describes, mere chance can produce p < 0.05 given enough studies. Without the null-result studies being published and included, a meta-analysis can suffer from huge sampling bias and draw terrible conclusions, because it treats the outliers as the norm.
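
A toy simulation of that sampling bias (illustrative Python, made-up numbers): the true effect below is zero, but a meta-analysis that only sees the "significant" positive studies would conclude otherwise.

    # Toy publication-bias demo: the true effect is zero, but pooling only the
    # "significant" positive studies yields a large apparent effect.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n_studies, n_per_arm = 200, 30

    all_effects, published_effects = [], []
    for _ in range(n_studies):
        treatment = rng.normal(0.0, 1.0, n_per_arm)   # true effect is 0
        control   = rng.normal(0.0, 1.0, n_per_arm)
        diff = treatment.mean() - control.mean()
        _, p = stats.ttest_ind(treatment, control)
        all_effects.append(diff)
        if p < 0.05 and diff > 0:   # only "the intervention works" gets published
            published_effects.append(diff)

    print(f"mean effect across all studies:       {np.mean(all_effects):+.3f}")
    print(f"mean effect across published studies: {np.mean(published_effects):+.3f}")
    print(f"published: {len(published_effects)} of {n_studies} studies")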

timwaagh 5 years ago

They should make some kind of pop-sci summary and publish it as a magazine. Serious intellectuals will subscribe and will henceforth be more fun at dinner parties.

"Did you know, dear aunt, that in 1992 a study analysed the relationship between the amount of spiders in College bedrooms and GDP"

"and what did they find? are spiders bad for the economy?"

"well absolutely nothing, of course, not even that there was no relationship, but it's nice they tried and it was a very high quality study".

alwaysanagenda 5 years ago

Now that we've got SURE for economics, let's do one for climate change.

After all, "publication bias affects every research field out there."

Vox is basically explaining fake news without any sense of irony about how this plays out in every other industry and field.

> "Let’s say hundreds of scientists are studying a topic. The ones who find counterintuitive, surprising results in their data will publish those surprising results as papers.

>The ones who find extremely standard, unsurprising results — say, “This intervention does not have any effects,” or, “There doesn’t seem to be a strong relationship between any of these variables” — will usually get rejected from journals, if they bother turning their disappointing results into a paper at all.

>That’s because journals like to publish novel results that change our understanding of the field. Null results (where the researchers didn’t find anything) or boring results (where they confirm something we already know) are much less likely to be published. And efforts to replicate other people’s papers often aren’t published, either, because journals want something new and different."

Very similar to my complaint on this issue: https://news.ycombinator.com/item?id=19432720

Oh, and look, it's sponsored by The Rockefeller Foundation, the definition of globalism writ large:

https://en.wikipedia.org/wiki/Rockefeller_Foundation#Beginni...

  • icebraining 5 years ago

    How is it similar to your complaint, other than both being about potential biases in the scientific literature? Theirs is about publication bias against null results; yours is about people without the educational background that qualifies them to run a particular study. I don't see how they're that similar.