Hacker News

Is it even clear that it would be worth it? Or do you propose a drastic reduction in papers to go with it, i.e., only things valuable enough to have someone replicate them get published?


If research is not worth replicating, then how could it be anything but a useless contribution meant only to bolster someone's publish-or-perish career?

It just seems that the entire scientific publishing industry is there to support a jobs and prestige program. Any science is just a side effect that somehow justifies the whole racket, from NSF budget to postdoc dinner table.


Not strictly true: it only needs to not matter enough at the time. But, yes, arguably a fair number of publications are largely irrelevant beyond a few people.

Societies fund many things for many reasons, not sure science is worth singling out here.


I don't mind the funding, but inclusion in a journal dresses it up with the blessing of a Scientific Result if the news ever cites it.

We could have a separate kind of journal for not-yet-reproduced results, and somehow ensure that the prestige is zero (or equivalent to just posting the study on a blog).


We could, but I doubt it would stop the news from picking stuff up. They often go with press releases anyway, not a journal citation. You would need incentives on the news side then to stop that.


Reproducibility in some fields is difficult or impossible for a lot of reasons (funding, candidates, etc.).

Suppressing publishing while actively trashing the ability to even study seems like a recipe for disaster far worse than publishing bad papers to me.


> Reproducibility in some fields is difficult or impossible for a lot of reasons (funding, candidates, etc.)

It's not impossible if we're actually interested in the truth.

> Suppressing publishing while actively trashing the ability to even study seems like a recipe for disaster far worse than publishing bad papers to me.

It's not clear that a study that cannot be replicated is worth the paper it's printed on in the current environment, where replication rates are 50% or less across the board. Other strategies that change how we approach individual studies are not as onerous (preregistration, open data), so maybe in that world individual studies would be worth it, but even in that world replication is the only surefire way to validate results.


Not being replicated isn't the same as not being replicable. There might simply be no interest at the time. But that is not the same as the results not being right, or even valuable at some point.

Replication is not a be-all and end-all, because things can replicate even though the understanding is wrong (i.e., the observation in the publication is right but everything else is wrong). Even the replication itself can be wrong, if certain materials share an unknown contaminant, for example. It addresses some issues, but not all.

For some areas it is also not clear what replication even means. What is it for theology, for example?


> But that is not the same as the results not being right, or even valuable at some point.

I can't imagine anyone who truly understands the current replication failure rates thinking that a single non-replicated study is valuable for anything other than informing what replications should be attempted.

Just think about it: replication rate is generally less than 50%. That means you'll have a better chance of determining the truth on any question posed by a study by flipping a coin than by actually reading the study.
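The coin-flip comparison can be made concrete with a toy simulation. All numbers here are hypothetical, and the model deliberately oversimplifies: it assumes a study's yes/no conclusion is correct exactly when the study would replicate.

```python
import random

random.seed(0)

REPLICATION_RATE = 0.45  # hypothetical: fraction of studies whose findings hold up
TRIALS = 100_000

trust_study_correct = 0
coin_flip_correct = 0

for _ in range(TRIALS):
    # Toy model: the study's yes/no conclusion is right iff it would replicate.
    study_is_right = random.random() < REPLICATION_RATE
    truth = random.random() < 0.5               # the actual answer to the question
    claim = truth if study_is_right else not truth
    if claim == truth:                          # strategy 1: believe the study
        trust_study_correct += 1
    if (random.random() < 0.5) == truth:        # strategy 2: flip a coin
        coin_flip_correct += 1

print(trust_study_correct / TRIALS)  # ≈ 0.45, i.e., the replication rate
print(coin_flip_correct / TRIALS)    # ≈ 0.50
```

Under these assumptions, trusting the study is right exactly as often as it replicates, so any replication rate below 50% loses to the coin. Real literatures are messier (results aren't binary, and failed replications aren't always wrong studies), so this is only an illustration of the arithmetic.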

> Replication is not a be-all and end-all, because things can replicate even though the understanding is wrong (i.e., the observation in the publication is right but everything else is wrong).

The observations are the only things that we have to get right. Interpretations can be subject to decades of debate (some QM debates are ongoing a century later), but if the data isn't reliable then you're just wasting time debating falsehoods. Replications are critical to ensuring we have reliable empirical data.

> Even the replication itself can be wrong, if certain materials share an unknown contaminant, for example.

Indeed, and literally the only way to figure out that such variables exist and are affecting the results is by independent replications. The more replications, the better. One replication is a bare minimum threshold to demonstrate that the process of gathering the data is at least repeatable, in principle.
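A toy sketch of how independent replication surfaces a hidden variable (the labs, bias, and noise figures are all made up): two labs measure the same quantity, one with a contaminated batch, and the gap between their means is what flags the problem.

```python
import random
import statistics

random.seed(1)

TRUE_VALUE = 10.0
CONTAMINANT_BIAS = 2.0   # hypothetical: an unknown contaminant shifts lab A's readings
NOISE = 0.5
N = 200                  # measurements per lab

def run_experiment(hidden_bias):
    """One lab's measurements: true value + its hidden bias + random noise."""
    return [TRUE_VALUE + hidden_bias + random.gauss(0, NOISE) for _ in range(N)]

lab_a = run_experiment(CONTAMINANT_BIAS)  # original lab, contaminated batch
lab_b = run_experiment(0.0)               # independent replication, clean batch

mean_a = statistics.mean(lab_a)
mean_b = statistics.mean(lab_b)

# Either lab alone looks internally consistent; only the cross-lab gap,
# far larger than the noise level, reveals a hidden variable worth hunting down.
print(round(mean_a, 2), round(mean_b, 2), round(mean_a - mean_b, 2))
```

Neither lab can detect the contaminant from its own data alone, which is the point of the argument: the discrepancy only becomes visible once an independent group repeats the procedure.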

> For some areas it is also not clear what replication even means. What is it for theology, for example?

It makes perfect sense in any context where you're gathering empirical data. For instance, if you're surveying people's interpretation of "free will" [1], then the process via which you probe their views should be replicable. This means being clear about the specific phrasing of the questions asked, the environment in which they were asked, the makeup of the cohort, and so on.

[1] https://www.researchgate.net/publication/274892120_Why_Compa...


This is just back to the old problem: most research is irrelevant to anyone beyond a few researchers and largely inconsequential to the world. This means there will be no money to replicate most things.

Anything truly critical will (eventually) go through some replication/control of sorts (but it can take a long time).

You can either shut down most of research and then place your bets on what to keep and replicate, or you run broad but with a lot of incorrect stuff in it.

If you go for the former, you run the risk of keeping the wrong things, though. You have to have a way to quantify the direct and indirect costs of all the bad research and see whether that trades off against a much smaller research surface. Not sure that is the case; empirical data matters much less for a lot of big decisions than people often make it out to be.


> This is just back to the old problem: most research is irrelevant to anyone beyond a few researchers and largely inconsequential to the world. This means, there will be no money for replication for most things.

If it's inconsequential, then wouldn't that money be better spent on replications or other research that is consequential? I'm not really clear on what you're suggesting. Although maybe I wasn't really clear on what I've been suggesting.

Edit: to clarify, there are multiple ways to reorganize research. Consider an approach similar to physics, where there's an informal division between theoreticians and experimentalists. We could have two kinds of publications in the social sciences: one that proposes and/or refines experimental designs to correct possible sources of bias, and another that publishes the results of conducting experiments that have been proposed. The experimentalists simply read proposals and apply for grants to conduct the experiments, and multiple groups can do so completely independently. Conducting an experiment must strictly adhere to the proposed design; no deviations can be permitted (as is so common in social science when results turn out uninteresting), otherwise the reliability of the results breaks down. A proposal should probably undergo a few rounds of refinement before experimentalists feel confident conducting it, but I think the overall approach could work.


> I can't imagine anyone who truly understands the current replication failure rates thinking that a single non-replicated study is valuable for anything other than informing what replications should be attempted.

Sounds like a good idea to have a system of academic publishing that incentivises people to produce replications and similar studies, then (and an academic norm of citing multiple studies that support or oppose hypotheses).

And all that making novel research unpublishable until someone else decides to dedicate their time to replicating the experiment from your little-known working paper would achieve is to limit incentives to experiment, especially in fields where it's perfectly possible to publish statistical reexaminations of existing data (often flawed in other ways) instead.


> I can't imagine anyone who truly understands the current replication failure rates thinking that a single non-replicated study is valuable for anything other than informing what replications should be attempted.

I work in biology. At a panel of biology startup founders I heard one mention that she got a lot of her research ideas from papers studying bacteria which were published nearly a hundred years ago.

In biology you first seek to extend published results. Only if the extension attempt fails would you spend effort trying to replicate it (assuming you just don't abandon the pathway entirely).


And that's why 50% of results in biology fail to replicate. I personally don't find that acceptable. Both of those options should be valued roughly equally IMO.



