A few years ago a paper was published in Nature claiming that mixing cells with acid turned them into stem cells (or something to that effect). Ground-breaking stuff, published in Nature: all the hallmarks of an amazing discovery. It made all of the news websites. But the moment I read up on it a bit, I knew straight away that something had gone wrong. And indeed it had: the paper has since been retracted, and there was a major investigation into the research institute. I'll admit I feel good about correctly predicting that the results were bogus.
- What journal is it in?
My general rule of thumb is that the lower the impact factor of the journal, the less likely the paper is to be fraudulent (I don't have any data to support this*). My hypothesis is that the benefit of publishing a low-impact paper (future career opportunities, funding, the prestige of high-impact publishing, etc.) is not sufficient to outweigh the risks associated with producing fraudulent work (job loss, loss of prestige, loss of respect, etc.).
- Have they used statistical methods I’ve heard of?
If I can’t figure out why they’ve used a particular statistical test, I’ll probably assume that the data isn’t quite as good as they would like, and that they needed to analyse it in a way that gives a positive result: using some obscure test I’ve never heard of (or otherwise flexing their statistical muscles) just to get a publishable p-value. I don’t mean statistical tests I don’t know how to perform, but statistical tests that aren’t the traditional go-to choices.
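A minimal sketch of why test-shopping is suspicious: under the null hypothesis a p-value is uniformly distributed, so running several tests and reporting only the best-looking one inflates the false-positive rate well past the nominal 5%. (This treats the tests as independent, which real tests on the same data are not, so it's an illustration of the principle rather than a model of any particular paper.)

```python
import random

def false_positive_rate(n_tests, n_trials=10_000, alpha=0.05, seed=42):
    """Estimate how often at least one of n_tests null-hypothesis tests
    comes out 'significant'. Under the null, each p-value is uniform on
    [0, 1], so a single test is significant with probability alpha."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        # simulate running n_tests different tests on null data
        p_values = [rng.random() for _ in range(n_tests)]
        if min(p_values) < alpha:  # report only the best-looking test
            hits += 1
    return hits / n_trials

if __name__ == "__main__":
    for k in (1, 5, 10):
        print(f"{k:2d} tests tried: ~{false_positive_rate(k):.0%} chance of a 'significant' result")
```

With one test the rate sits near 5%, as advertised; by ten tests it is around 40%, which is why an unusual choice of test (with no stated reason) makes me wonder how many others were tried first.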
- Is the methodology comprehensible?
I find lots of methods sections of research articles dense and incomprehensible. I often feel that diagrams and flowcharts explain what the authors did better than a few very wordy paragraphs. If I can’t figure out what you’ve done, why should I trust your work? Annoyingly, checking this properly takes a bit of effort.
- Conflicts of Interest
Conflicts of interest aren’t necessarily a bad thing, so long as they are declared. But I often feel that all publishing scientists have undeclared conflicts of interest: a preference for a certain hypothesis, or future funding that depends on certain results.
You can also have future conflicts of interest. For instance, if you’re working on developing a product for insect control, you may not have a financial stake in a company at the point of publishing, but you may intend to become financially invested at a later date if the product proves effective.
- Are the results too good to be true?
Finally, does the result just seem too groundbreaking, without a logical scientific explanation?
* Since writing this I’ve looked and found the following: http://iai.asm.org/content/79/10/3855.full Some have suggested a “Retraction Index” to capture this: http://retractionwatch.com/2011/08/11/is-it-time-for-a-retraction-index/.
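As I understand the linked proposal, the Retraction Index is simply retractions per 1,000 articles published by a journal over some period; a quick sketch (the numbers below are made up purely for illustration):

```python
def retraction_index(retractions, articles_published):
    """Retractions per 1,000 published articles, per the 'Retraction
    Index' idea linked above. Higher-impact journals reportedly tended
    to score higher on this measure."""
    return 1000 * retractions / articles_published

# Hypothetical journal: 5 retractions out of 25,000 articles.
print(retraction_index(5, 25_000))  # → 0.2
```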