When we go to school we are taught the scientific method: think of a hypothesis, test it, and if the results do not hold up, change the hypothesis and test again. This has driven an unprecedented level of innovation over the last two centuries.
Science is about reproducible evidence. If someone with comparable means and knowledge cannot reproduce a published experiment, the experiment is probably wrong, and the publication should be dismissed unless the authors make the underlying results public and disclose all the possible (and unpublished) trickery. Otherwise a scientific article is nothing but paper for wrapping fish.
Academia should be the paladin of this procedure, a temple of truth, right?
The science of hype
Once in a while I like to explore the state of the art of a subject within electrical engineering called “power flow”. Last Christmas I did so.
Lately there has been a breakthrough in the methods used to solve the problem. The new method is not yet mature and cannot be compared to the traditional algorithms, but it has great potential, so once in a while I check whether the rough corners have been polished. It is a high-profile subject that is not yet well understood, so if you submit a paper about it, you are probably going to get it published.
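For context, the traditional algorithms that any new power flow method is measured against are iterative solvers such as Gauss-Seidel and Newton-Raphson. Here is a minimal Gauss-Seidel sketch for a hypothetical two-bus system; all the numbers are assumptions chosen for illustration, not values from any of the papers discussed:

```python
# Minimal Gauss-Seidel power flow sketch for a hypothetical 2-bus system:
# a slack bus (fixed voltage) feeds a PQ load bus through a single line.
# All numeric values are illustrative assumptions.

z_line = 0.01 + 0.05j        # line impedance in per-unit (assumed)
y = 1.0 / z_line             # line admittance
Y21, Y22 = -y, y             # admittance-matrix entries seen by bus 2
V1 = 1.0 + 0.0j              # slack bus voltage (per-unit)
S2 = -(0.5 + 0.2j)           # net injected power at bus 2 (a load, hence negative)

V2 = 1.0 + 0.0j              # "flat start" initial guess
for _ in range(100):
    # Gauss-Seidel update: enforce S2 = V2 * conj(I2) at the load bus
    V2_new = (S2.conjugate() / V2.conjugate() - Y21 * V1) / Y22
    if abs(V2_new - V2) < 1e-10:
        V2 = V2_new
        break
    V2 = V2_new

print(abs(V2))  # load-bus voltage magnitude, slightly below 1.0 p.u.
```

This is exactly the kind of method that can be verified in an afternoon: the converged voltage must satisfy the power balance equations, and anyone can check it.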
Ding, ding, ding, ding! I found two winners. Two articles published in 2017 by the most prestigious electrical engineering institution (the IEEE) contained incorrect formulations of the problem, leading to useless implementations and a complete waste of my time.
I do read the papers on the power flow subject, and yes, I actually program the methods and publish the source code (I have the means and the knowledge to do so). As a good scientist, I wrote to the authors asking for clarification. One of them promised me a proof of the method (I never heard from him again). The other asked for my references; after I provided them, along with a corrected version of his method and source code, I received no further reply.
Both papers made very bold claims and featured nice-looking charts. Both failed to provide any improvement over the existing methods while claiming otherwise. Both came from research groups at reputable universities. Both were published in a prestigious journal. Both were a fraud.
Unfortunately, this is not the first time I have spent hours verifying a published method only to find some inconsistency, or simply that the method performs poorly when the paper announces otherwise. The truth does not sell journals, apparently.
The incentives turn scientists into hustlers
Universities measure a researcher's success by how many papers he or she gets published in so-called "high impact" journals over a period of time. This puts researchers in a position where they can make up an article from beginning to end, with no evidence whatsoever, and still get it published if it looks fine enough (the right wording, plenty of famous references, familiar yet "new", etc.). If you get caught you might lose your reputation and so on, but who actually tests the material published in papers? Well, I do.
Once, a chairman told me that to copy from one person is to plagiarize, while to copy from many is research. I was outraged by the comment. It perfectly reflects the mentality that prevails in academia. There is some truth in it, yet the words still seem so wrong to me.
In academia the game is to publish; to publish a lie if necessary. That is what lets you pass the bureaucratic requirements, remain a researcher, and thrive.
Another sore subject is authorship. Many times the poor PhD student (the legitimate author) appears second or third in the author list, simply because being lead author brings more points to the so-called researcher (usually the department chairman). The students accept for fear of retaliation. This creates a vassalage relationship that endures over time and seeps into the academic culture.
Sometimes the results are legitimate, although not great. Then the facts must be exaggerated and oversold in order to convince the journal reviewers (also players in the game) that the article is worth publishing.
Hustlers in academia sell hype and bad science.
Separating the wheat from the chaff
Many authors hide facts and publish half-baked results, or simply wrong statements, in order to attract attention and obtain the validation that comes with a published article. This goes against the very purpose of publishing: to lead others down a promising research path, or to spare them the suffering of going down an unfruitful one. Only optimistic articles seem to be publishable.
‘Novel, amazing, innovative’: positive words on the rise in science papers
My former boss in R&D used the following parable: the seed makers in the Netherlands do not keep their secrets to themselves for too long. They know that making information and seeds available to other seed makers raises the overall quality of the seeds, improving the profits of the whole guild. This illustrates perfectly how research should work: do something, gain some benefit, and make it available. Others will do the same, and you will benefit further from your initial effort.
In my opinion, open source software is the perfect way to do science. It is open, verifiable, and if it is wrong, it can be corrected easily. I have met open source researchers in both engineering and biology, and all of them are very successful (they meet their paper quotas) while publishing strictly verifiable findings.
The key is that developing open source software has led them to the forefront of their fields. Mastering the basics, along with having an open platform on which to innovate incrementally, has enabled them to go one step further.
Therefore, publishers like the IEEE should start demanding hard evidence for the articles they publish, and open source code is hard evidence in the many cases where a publication involves a computer algorithm. This would increase the reputation of the publication and would mean that the published facts are the truth and nothing but the truth, raising the level of the whole field.
Certainly it would make the subscription fee worth the money.