Posted on April 8, 2014
In my previous post, we discussed the idea of sensitivity. Recall that sensitivity is concerned with those who are ill: it is the statistic that tells us how well a test identifies those who have a disease and test positive for it, out of all of those who truly have that disease.
What can you do, clinically, with the measure of sensitivity? Let’s think back to the mathematical definition of the statistic:
Sensitivity = True Positive / [True Positive + False Negative]
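As a quick sketch (the function and variable names here are my own, purely for illustration), the formula translates directly into code:

```python
def sensitivity(true_positives: int, false_negatives: int) -> float:
    """Proportion of truly diseased individuals that the test correctly flags."""
    return true_positives / (true_positives + false_negatives)

# Hypothetical example: of 100 diseased patients, 90 test positive
# and 10 test negative.
print(sensitivity(90, 10))  # 0.9
```

Note that only the diseased patients appear in this calculation; how the test behaves in healthy patients doesn't enter into sensitivity at all.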
Pretend a test is 100% sensitive. By the mathematical definition above, the false negative term is zero, so the ratio is a perfect 1:1 – in other words, True Positives = Everyone who has the disease. Congratulations! You’ve picked a test that correctly identifies every single individual with the disease. No one who has the disease would be left “undiagnosed” by the test. You have zero false negatives.
If a test has 100% sensitivity then no false negatives exist. So what do you do with a negative result? You trust it – big time. Because every person with the disease tests positive, anyone who tests negative absolutely cannot have the disease. But what about a positive result? That’s a different story. Sensitivity tells us nothing about false positives, so a positive result could still come from someone without the disease – that question belongs to specificity.
SNOUT it out
Clinically this matters. While no test is ever 100% sensitive, the higher the sensitivity, the lower the number of false negatives. Meaning that when we get a negative result, it must be real. It must mean that the patient, most certainly does not have the disease. So we use a test with a high sensitivity to RULE OUT disease. Or, as you’ve likely heard, “SNOUT” it out – SeNsitivity = rule OUT.
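The SNOUT logic above can be made concrete with some made-up numbers (the sensitivities and patient counts below are purely illustrative, not from any study):

```python
def expected_false_negatives(n_diseased: int, sensitivity: float) -> float:
    """Number of diseased patients a test of the given sensitivity
    is expected to miss (i.e., to label negative)."""
    return n_diseased * (1 - sensitivity)

# Hypothetical: 1000 patients who truly have the disease.
# A 70%-sensitive test misses about 300 of them;
# a 99%-sensitive test misses only about 10.
print(expected_false_negatives(1000, 0.70))
print(expected_false_negatives(1000, 0.99))
```

The higher the sensitivity, the fewer diseased patients slip through as negatives – which is exactly why a negative result on a highly sensitive test is reassuring enough to rule disease out.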
Like our officer before, wearing his sensitivity badge of honor – if he’s made rank by achieving a high sensitivity in training – when he gets to the crime scene and says no foul has been done, we believe him. And that is case closed.
What about his other badge? For more on specificity, see my next post.
For more on pre-test and post-test probabilities, stay tuned.