It seems like every few months, there’s some kind of news about problems with the scientific publishing industry. Why does this keep happening? And what can be done to fix the system?
noims says...

One thing he doesn't talk about, which springs to mind, is the idea of registering your experiment before conducting it. I know Ben Goldacre is big on this.

The idea is that if you register your experiment beforehand, then you have to publish your results, positive or negative. This reduces both the publication bias and the scattershot approach often attributed to 'big pharma'.


SDGundamX says...

Theoretically, science works great. However, as has already been noted, in the real world the pressure in certain fields to publish something "substantial", combined with the inability to get grants for experiments that aren't "trendy" right now, causes scientists to self-limit the kinds of research they undertake, which is not at all great for increasing human knowledge.

Another problem is the "expert opinion" problem--when someone with little reputation in the field finds something that directly contradicts the "experts" in the field, they often face ridicule. The most famous recent case of this was 2011 Nobel Prize winner Dan Shechtman, who in 1982 discovered a new type of crystal structure that was thought to be theoretically impossible, and who was roundly criticized and ridiculed for it until a separate group of researchers many years later actually replicated his experiment and realized he had been right all along. This web page lists several more examples of scientists whose breakthrough research was ignored because it didn't match the "expert consensus" of the period.

Finally, in the humanities at least, one of the biggest problems in research that uses a quantitative approach (i.e. statistics) is that researchers apply a statistical method to their data, such as a t-test, without actually demonstrating that whatever is being studied follows a normal distribution (i.e. a bell curve). Many statistical tests are only accurate if what is being studied is normally distributed, yet I've seen a fair share of papers published in respected journals that apply these tests to objects of study that are quite unlikely to be normally distributed, which makes their claims of being "statistically significant" quite suspect.

There are other statistical methods (non-parametric) that you can use on data that is not normally distributed, but generally speaking a test of significance on data drawn from a normally distributed pool is going to be more reliable. As is noted in this video, the reason these kinds of mistakes slip through into the peer-reviewed journals is that the reviewers are sometimes not nearly as well trained in statistical analysis as they are in other methodologies.
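
As a concrete illustration of the point above, here is a minimal sketch in Python. The data, the compare() helper, and the use of numpy/scipy are assumptions made purely for illustration (nothing here comes from the video or from any particular paper): it checks the normality assumption with a Shapiro-Wilk test and falls back to a non-parametric Mann-Whitney U test when that assumption looks doubtful.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical data: one roughly normal group, one heavily skewed group.
group_a = rng.normal(loc=5.0, scale=1.0, size=40)
group_b = rng.exponential(scale=5.0, size=40)

def compare(a, b, alpha=0.05):
    # Shapiro-Wilk checks the normality assumption for each sample.
    normal = (stats.shapiro(a).pvalue > alpha) and (stats.shapiro(b).pvalue > alpha)
    if normal:
        # Parametric route: Welch's t-test (no equal-variance assumption).
        result = stats.ttest_ind(a, b, equal_var=False)
        name = "Welch's t-test"
    else:
        # Non-parametric fallback: Mann-Whitney U makes no normality assumption.
        result = stats.mannwhitneyu(a, b, alternative="two-sided")
        name = "Mann-Whitney U"
    return name, result.pvalue

name, p = compare(group_a, group_b)
print(f"{name}: p = {p:.4f}")
```

This is only a sketch of the general idea; in practice the choice of test involves more than a single normality check, which is exactly why reviewers with statistical training matter.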

dannym3141 says...

You can find examples of that throughout history; I think it's how science has always worked. You can sum it up with the saying 'extraordinary claims require extraordinary evidence': when something has been so reliable and proven to work, are you likely to believe the first, second or even 10th person who comes along saying otherwise?

If you are revolutionary, you go against the grain and others will criticise you for daring to be different - as did so many geniuses in all kinds of different fields.

I think that's completely fair, because whilst it sometimes puts the brakes on breakthroughs because of mob mentality, it also puts the brakes on spurious bullshit. I'd prefer every paper be judged entirely on merit, but I have to accept the nature of people and go with something workable.


SDGundamX says...

Science "works" when scientists bother to actually try to replicate claims, no matter how bizarre they may be. And as this video and my comment shows, that's not happening in a number of scientific fields. Which is really, really bad for human knowledge and society in general, as billions of dollars and countless work-hours get wasted since researchers base future research on what turn out to be unreliable past claims.

The "extraordinary claims require extraordinary evidence" flies in the face of everything the scientific method espouses. Evidence is evidence. It is not supposed to matter who finds the evidence. Someone who is famous in the field should not be given more benefit of the doubt than someone who is not, yet that is exactly what happened in Shectman's case. He was removed from his lab and an actual expert in the field, Linus Pauling, verbally abused him for literally decades.

That's not how science is supposed to work at all. If someone finds evidence of something that contradicts current theory, you're supposed to look at their methodology for flaws. If you can't find any flaws, then the scientific method demands you attempt to replicate the experiment to validate it. You're not supposed to dismiss evidence out of hand because the person who found it isn't a leading expert in the field. In Shechtman's case, other labs replicated his results and the "experts" still wouldn't budge... in fact, Pauling never admitted he was wrong, right up until his death.

Conversely, there are too many papers out there now with shoddy methodology that shouldn't even be published, yet because the author is a big name in the field they somehow make it into top-tier journals and get cited constantly despite the dubious nature of the research. Again, that's not how science is supposed to work.

"Spurious bullshit," as you called it, is not being weeded out. Rather it is being foisted on others as "fact" because Dr. XYZ who is renowned in the field did the experiment and no one looked closely enough at it or bothered to try to replicate it. The spurious bullshit should be getting weeded out by actual scientific testing (like the studies in the video that were found to be unreliable) and not by mob mentality.

