Does Peer Review Mean “True”? Or Is Peer Review Bulls**t?

If you have ever had another practitioner ask you “is it evidence based?” about a treatment or test you have mentioned, understand this.

It is the calling card, the mating call, of the borderline clinically inept and/or terminally insecure practitioner.

What they really mean is “I don’t like what you are suggesting”, and/or they feel threatened in some way.

Thus, they will attack you with the socially acceptable stick of evidence-based medicine or EBM for short.

In theory, EBM is a brilliant idea. It is defined as:

“the conscientious, explicit and judicious use of current best evidence in making decisions about the care of the individual patient”

The aim of EBM is to integrate the experience of the clinician, the values of the patient and the best scientific information to guide decision making about clinical management.

It is meant to be a patient-centered experience.

Sadly, what passes for EBM these days is really statistics-based medicine.

And it is far from patient centered.

As far as informed consent goes, patients vaguely give consent to taking meds, but it is far from informed.

When it comes to statistics, we are playing a game that has often lost touch with reality.

From the clinically meaningless “relative risk reductions” to changing endpoints to data dredging, the pharmaceutical industry threw ethics out of the company window a long time ago.
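To see why relative figures mislead, take a made-up example: if a drug cuts heart attacks from 2 in 100 patients to 1 in 100, the advert can truthfully shout “50% relative risk reduction”, yet the absolute risk reduction is a single percentage point, and roughly 100 people have to take the drug for one of them to benefit.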

Even the founder of EBM, David Sackett, knew by 2003 that the world of research and indeed medicine was corrupted by the drug industry.

He wrote a spoof article for the BMJ – How to Achieve positive Results without actually Lying to Overcome the Truth – HARLOT.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC300797/pdf/32701442.pdf

Think about that for a moment.

The “father” of EBM knew, 19 years ago, that a great idea had been hijacked by the pharmaceutical industry, and that the medical profession had been corrupted along with it.

Sackett died in 2015. I wonder what he would have thought about the state of medicine and healthcare in 2022, after the pandemic.

These days, clinicians’ experience and patients’ values and opinions are seen as surplus to requirements.

Just follow the flow chart or else.

These flow charts come as part of the guidance, but are they guides or more like rules?

One of the reasons people put so much faith in the evidence is that the articles are peer reviewed.

This is another stick to hit people with if they are using research that has not been peer reviewed.

There is no doubt that the peer review process does find some flaws in data and helps to keep some of the overtly dodgy research out of the published literature.

It should work like this: independent experts in the field scrutinise the manuscript, flag problems with the methods, data and conclusions, and the paper is revised or rejected before it ever gets published.

BUT, and it is a big but, it is also very prone to missing issues too.

Thus, while peer reviewing is useful, it in no way means the study, its data, the methods or conclusions can be trusted without any further scrutiny.

When practitioners are having a p***ing contest on social media, they love to quote studies to each other.

The reality is most people do not read them, at least not in full.

They might read the title, at a push the conclusions, maybe the results, at a big push the methods. But the whole thing? Rarely, very rarely.

I get it. As someone who does read a lot of research, I know much of it is badly written and dense with heavy technical detail.

It is hard work.

Some are deliberately misleading and badly written.

Some are just simply false.

Dr John Bohannon wrote up an experiment that was 100% false: even the species of lichen he used didn’t exist, and he submitted it under a false name and a made-up university.

The paper was littered with obvious methodological issues, contradictions, blatant misrepresentation of the data and was fatally flawed.

Despite this, over a period of ten months, he sent out 304 versions of the paper and over half the journals accepted it for publication.

It never ceases to amaze me how a methods section can list tests and outcomes to be measured that never appear in the results section, or, if they do, get a single line, while other results get huge graphs, which, let’s face it, we all love.

Now why wouldn’t this be picked up by peer review? Isn’t that the point?

Well, most reviewers do it for free.

Let’s be realistic: while some papers get a full review, others might get a skim, or less.

They also have their own papers to write.

Conflict of interest is also a huge issue.

In a large review of studies on consumption of sugary fizzy drinks and obesity, 26 found no association while 34 did.

Of the 26 showing no association, 25 were paid for by the drinks industry.

Of the 34 showing an association with obesity, only one was industry funded.

The reality is, we should certainly be informed by the evidence.

But being a slave to it leads to healthcare tyranny.

In our world, stick to measuring something meaningful like range of motion, quality of motion, muscle strength and pain, then treat, re-test and repeat.

If you get significant and sustainable results with the care you are delivering, keep at it.

Every treatment is bespoke; sure, there are patterns, but it is all personalised to some extent.

Stay humble and stay hungry for knowledge (and results).

Remember ego is the enemy.