# Statistical significance vs. clinical significance

Posted on 23rd March 2017 by Cindy Denisse Leyva De Los Rios

#### What if I told you that I conducted a study which shows that a single pill can significantly reduce tiredness without any adverse effects?

Would you try it? Or recommend it? Or would you want more information to decide? Maybe you're a little skeptical. I will show you the results, so don't make a decision just yet.

#### From now on, let's imagine this scenario…

Before I tell you the results of my study, you need to know how it was carried out.

- First, I took a group of 2,000 adults aged between 20 and 30, all of whom suffered from constant tiredness. The participants were then randomly divided into 2 groups, with 1,000 participants in each.
- One group of participants (the intervention group) were given the new drug: *"energylina"*. The other group of participants (the control group) were given a dummy (placebo) pill.
- Nobody knew – neither the participants nor the researchers involved in the experiment – whether they were taking *"energylina"* or the placebo. The participants took the pills for 3 weeks, 2 per day.
- We used a scale to measure participants' levels of tiredness before and after the trial. This rated fatigue on a scale of 1 to 20, with 1 meaning the participant felt entirely well-rested and 20 meaning the participant felt entirely fatigued.
- The results revealed that 90% of the participants in the "energylina" group improved by 2 points on the scale, while 80% of participants in the placebo group improved by 1 point on the scale.
- This difference between the groups was statistically significant (p < 0.05), meaning that, at the end of the 3 weeks, participants in the intervention group were significantly less tired than those in the control group.
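The post doesn't say which statistical test produced the p-value. As a minimal sketch — assuming a two-proportion z-test (with a normal approximation) on the share of participants who improved in each group, 900/1000 vs 800/1000; the function name `two_proportion_z_test` is my own, not from the original study — the hypothetical figures could be checked like this:

```python
import math

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# 900 of 1,000 improved on "energylina" vs 800 of 1,000 on placebo
z, p = two_proportion_z_test(900, 1000, 800, 1000)
print(f"z = {z:.2f}, p = {p:.2e}")  # p is far below 0.05
```

With numbers like these, the p-value is vanishingly small — but, as the rest of the post argues, that alone says nothing about whether the improvement matters to patients.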

#### So does that mean the treatment is effective? Should you take "energylina"? Should every doctor prescribe it?

Not necessarily… Let's make a couple of things clear first. **At this point, you might be wondering why the title of this blog is "statistical significance vs. clinical significance".**

Well, I will explain it right now… the results I gave you are there to help you make a decision. You want to know whether "energylina" is effective enough to recommend to individuals who suffer from fatigue. Did the results convince you?

Before you answer, first let me clarify something: **Clinical significance is the practical importance of the treatment effect — whether it has a real, palpable, noticeable effect on daily life.** For example, imagine a safe treatment that could reduce the number of hours you suffered with flu-like symptoms from 72 hours to 10 hours. Would you buy it? Yes, probably! When we catch a cold, we want to feel better as quickly as possible. So, in simple terms, if a treatment makes a positive and noticeable improvement to a patient, we can call this 'clinically significant' (or clinically important).

In contrast, statistical significance is governed by the p-value (and confidence intervals). When we find a difference where p < 0.05, we call this 'statistically significant' — just like our results from the above hypothetical trial. If a difference is statistically significant, it means that a difference at least as large as the one observed would be unlikely to occur if there were truly no difference between the treatments. It doesn't necessarily tell us about the *importance* of this difference or how meaningful it is for patients.
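To see what "unlikely if there were truly no difference" means in practice, here is a small simulation (my own illustration, not part of the original post): we repeatedly run a fake trial in which both groups share the same true improvement rate, so every "significant" result is a false positive arising purely by chance. At the conventional 0.05 threshold, roughly 5% of such trials come out "significant".

```python
import math
import random

random.seed(0)

def p_value(x1, n1, x2, n2):
    """Two-sided p-value for a difference between two proportions."""
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    if se == 0:
        return 1.0
    z = (x1 / n1 - x2 / n2) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Simulate trials where BOTH groups have the same true improvement rate (85%):
# any "significant" result here is a false positive occurring by chance.
n, true_rate, trials = 1000, 0.85, 2000
false_positives = sum(
    p_value(sum(random.random() < true_rate for _ in range(n)), n,
            sum(random.random() < true_rate for _ in range(n)), n) < 0.05
    for _ in range(trials)
)
print(f"False-positive rate: {false_positives / trials:.3f}")  # close to 0.05
```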

### So it's important to consider that trial results could be…

- **Statistically significant AND clinically important.** This is where there is an important, meaningful difference between the groups and the statistics also support this. (The flip side of this is where a difference is neither clinically nor statistically significant.)
- **Not statistically significant BUT clinically important.** This is most likely to occur if your study is underpowered and you do not have a large enough sample size to detect a difference between the groups. In this case you might fail to detect an important difference.
- **Statistically significant BUT NOT clinically important.** This is more likely to happen the larger your sample size is. If you have enough participants, even the smallest, most trivial differences between groups can become statistically significant. It's important to remember that just because a treatment is statistically significantly better than an alternative treatment, *it does not necessarily mean that these differences are clinically important or meaningful to patients.*
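The third case can be seen numerically. The sketch below (my own illustration; the function and figures are hypothetical, not from the post) applies a two-proportion z-test, under a normal approximation, to a trivial 1-percentage-point difference in improvement rates (51% vs 50%) at several sample sizes: the difference is nowhere near significant with 100 participants per group, yet becomes highly significant with 100,000 per group, even though it is no more meaningful to patients.

```python
import math

def p_value_for_proportions(p1, p2, n):
    """Two-sided p-value for a difference in proportions, n per group."""
    pooled = (p1 + p2) / 2
    se = math.sqrt(pooled * (1 - pooled) * (2 / n))
    z = (p1 - p2) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# A trivial 1-percentage-point difference: 51% vs 50% improving
for n in (100, 1_000, 10_000, 100_000):
    print(f"n = {n:>7} per group: p = {p_value_for_proportions(0.51, 0.50, n):.3f}")
```

The effect is identical in every row; only the sample size changes, and with it the p-value.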

### Going back to our hypothetical study, what have we got: statistical significance, clinical significance, or both?

Remember we had 2 groups, with 1,000 participants in each. In the intervention group, 90% of the participants improved by 2 points on the tiredness scale, whereas 80% of the participants in the placebo group improved by 1 point on the tiredness scale.

Is the difference between the two groups remarkable? Would you buy my product for a slightly higher probability of scoring 1 point lower on a tiredness scale, compared with taking nothing? Perhaps not. You might only be willing to take this new pill if it were to lead to a bigger, more noticeable benefit for you. For such a small improvement, it might not be worth the cost of the pill. So although the results may be statistically significant, they may not be clinically important.

### To avoid falling into the trap of thinking that because a result is statistically significant it must also be clinically important, you can look out for a few things…

- Look to see if the authors have specifically mentioned whether the differences they have observed are clinically important or not.
- Take into account sample size: be particularly aware that with very large sample sizes even small, unimportant differences may become statistically significant.
- Take into account effect size. In general, the larger the effect size, the more likely it is that the difference will be meaningful to patients.
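The post doesn't compute an effect size, but as an illustration, Cohen's h — a standard effect-size measure for two proportions, with Cohen's conventional benchmarks of roughly 0.2 = small, 0.5 = medium, 0.8 = large — can be applied to the hypothetical 90% vs 80% improvement rates:

```python
import math

def cohens_h(p1, p2):
    """Cohen's h: an effect-size measure for two proportions."""
    return 2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2))

# 90% vs 80% improving in the hypothetical trial
h = cohens_h(0.90, 0.80)
print(f"Cohen's h = {h:.2f}")  # ~0.28: "small" by Cohen's benchmarks
```

An h of about 0.28 sits at the "small" end of Cohen's benchmarks, matching the intuition in the post: statistically significant, but a modest effect.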

So, to conclude, just because a treatment has been shown to lead to *statistically significant* improvements in symptoms does not necessarily mean that these improvements will be *clinically significant* (i.e. meaningful or relevant to patients). That's for patients and clinicians to decide.

## 26 Comments on Statistical significance vs. clinical significance

**Jesus leyva** (23rd March 2017 at 2:55 pm): Excellent!

**Antonio** (23rd March 2017 at 8:48 pm): Good luck for your career! This article is very important for medical choices.

**Héctor Keith Ovalles Álvarez** (24th March 2017 at 1:41 am): Very good.

**Peter** (24th March 2017 at 5:50 pm): Nice article. Just one note: 'statistically significant' doesn't mean that the result is unlikely to have occurred by chance. The ASA have written a nice article on interpreting p-values (http://amstat.tandfonline.com/doi/full/10.1080/00031305.2016.1154108).

**Kalyan Reddy** (29th April 2017 at 5:23 pm): Great article to read. Statisticians are trying to resolve the misinterpretation of statistical significance by consensus. An amazing thing!

**Iain Chalmers** (25th March 2017 at 6:52 am): Very well explained.

**Sandrs** (26th March 2017 at 9:44 am): Nicely explained, a good piece of work.

**Ismael Kawooya** (28th March 2017 at 6:07 pm): Well articulated.

**Carel Bron** (4th April 2017 at 7:00 pm): Outstanding!

**Pradnya Kakodkar** (23rd April 2017 at 1:33 pm): Very well explained. This is the same topic I am working on. We all get carried away by the p-value without realizing the importance of clinical or practical significance.

**Sascha Baldry** (25th April 2017 at 5:35 am): Really nicely explained, thanks.

**Kalyan Reddy** (29th April 2017 at 5:16 pm): That's a well-explained article, especially for understanding why drugs fail to achieve their desired endpoint during clinical trials.

**Nilesh jadhav** (30th April 2017 at 4:05 pm): Significantly explained the significant difference of the trial result.

**Murray Edmunds** (2nd May 2017 at 11:03 am): This is a good article that clearly describes and illustrates an important point through the use of a hypothetical, yet typical, example. In clinical trials it is very common to find differences in outcomes between interventions that reach statistical significance, yet are of small magnitude. The article raises, but does not elaborate upon, another long-recognised issue that is very important and related. Clinical trials tend to illustrate the relative performance of interventions in populations, not individuals. The data are usually presented as mean values with an index of variability (SD, SE, CV) for end-of-trial absolute or between-treatment differences in predefined endpoints. But most readers (myself included) do not intuitively take account of the spread of the data and instead tend to perceive the relative 'effects' of the interventions tested in terms of the mean data. We mistakenly perceive the mean effect as the effect.

It is therefore possible that small-but-significant differences in the overall mean values disguise much larger, clinically valuable effects in limited subgroups. Taking the article's case history, for example, it is possible that the small relative improvement in alertness score observed with energylina versus placebo in the overall study was entirely due to an improvement of much greater magnitude (and clinical relevance) in a small subgroup of individuals who had a high baseline level of tiredness. The chances are the original study would not be powered to show a statistically significant difference for this subgroup, but post hoc subgroup analyses could nevertheless inform the direction to take with future studies.

For the marketer of the intervention, this possibility poses a dilemma: identifying the subgroup(s) in which the intervention is really advantageous effectively niches the product. Does the marketer prefer a narrative that describes a small benefit for the many, or a large benefit for the few? Clearly, prescribers, regulators, payers and patients will ultimately benefit from tailored intervention informed by subgroup analysis.

**Ross** (26th May 2017 at 5:18 pm): Thank you! I'm a Consultant Surgeon and ALWAYS find stats a challenge. This simple explanation really helps.

**Devendra tandale** (6th June 2017 at 4:18 pm): I do not agree with all the arguments presented here. What I know is that an increase in sample size doesn't produce significance if there is no effect. Whenever statistical significance and clinical or scientific significance are not equivalent, you need to reassess your study or experimental settings for scientific validation.

You need to know the concept of "asymptotic" behaviour. The concept belongs to derivatives, a rate-of-change problem, and is useful for understanding the relationship between sample size and significance. Increasing the sample size cannot convert non-significance into significance.

And the things you mention in your article, like cost, are not included in your study or experiment. At what cost customers might buy the drug needs statistical study, but it is more of a business problem.

**Francis Ezeh** (17th October 2017 at 5:48 pm): A good article. Also great comments. The issue of statistical significance and clinical significance has generated a lot of constructive argument at different levels of biomedical research, as we can also see here. But the fact is that statistical significance cannot be wholly accepted as clinical significance. You can agree with me that statistical significance is a necessary condition but not a sufficient condition for clinical significance.

**Erick Hedima** (17th January 2018 at 7:18 am): Well said! This will really help in decision making.

**Trajano** (1st February 2018 at 2:04 pm): Evidence-based medicine is the new god. Nothing replaces common sense and logic… or does it? I agree with what is clinically relevant; it makes sense. Thank you!

**Prof Noha Ghallab** (24th February 2018 at 4:10 pm): I am really impressed by how accurately you explained such a difficult and crucial topic. To be a 3rd-year medical student and write this blog in a well-constructed, evidence-based manner is overwhelming. I wish you all the best, hoping one day you will be able to join the Cochrane group and conduct clinical trials that will make a difference in the medical field.

**Sue** (8th July 2018 at 8:29 pm): Many thanks Cindy, really well explained.

**Gillian** (10th July 2018 at 10:16 am): Really well explained to an old student and patient… Thank you.

**Mario Tristan** (14th October 2018 at 7:11 pm): Very good; however, it is important to clarify how the effect size is observed or estimated.

**Alpana Kulkarni** (23rd January 2019 at 11:52 am): This was an amazing explanation and I understand the terms in a much better way! Thank you.

**Oby chilaka** (15th August 2019 at 4:16 am): Very impressive and straight to the point. Thanks a lot.

**Adriana** (15th September 2019 at 1:50 am): Hello Cindy, what an excellent way to explain a very confusing concept. This ability to make challenging subjects easy to understand will certainly make you a much in-demand and popular professor in the future. Stay on your journey and you will be an amazing physician. Blessings to you and congratulations! Blessings and much success!