
What is the difference between sensitivity, specificity, and predictive value of a test?

posted on 1:09 AM, July 15, 2020

Even if a test appears “pretty good” in terms of sensitivity and specificity, its accuracy when applied to a population that may or may not have been infected depends heavily on the prevalence of the disease in the tested population. As detailed below, the predictive value of a test may not be very good, even if the test itself is very “accurate” (high sensitivity and specificity), when it is applied to a low-risk population. These two issues (adequate validation against known “negative specimens”, which determines the specificity of the test result, and interpretation of positive results in a low-prevalence setting) are the major reasons that “antibody tests” were delayed in their release, and a source of concern now that they have been rapidly released under the FDA’s Emergency Use Authorization (EUA). An upcoming ACMT webinar will explore this issue in more detail.
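In quantitative terms, this prevalence dependence follows directly from Bayes’ theorem. The positive predictive value (PPV), the probability that a person with a positive result is truly infected, is:

PPV = (sensitivity × prevalence) / [sensitivity × prevalence + (1 − specificity) × (1 − prevalence)]

When the prevalence is low, the first term in the denominator shrinks while the false-positive term, (1 − specificity) × (1 − prevalence), does not, so the PPV can fall sharply even for a test with excellent sensitivity and specificity.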

As an example, even if a test is 99% accurate in those with and without the disease (99% sensitivity, 99% specificity), positive results can be very misleading when the prevalence of infection is low. In an area where 1% of the population has been infected/exposed (left-hand 2x2 table), a negative test is both very likely and very likely to be a “true negative”, indicating non-exposure. However, a positive test (in isolation) is just as likely to be a false positive as a true positive. As shown, it isn’t until the disease is much more widespread (10% of the population, in the example on the right) that a positive result becomes genuinely informative for any given person (again, this considers the test result in isolation from clinical symptoms or other features that suggest exposure):

[2x2 tables: test results at 1% prevalence (left) and 10% prevalence (right)]
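Because the original 2x2 tables do not reproduce here, the short Python sketch below reconstructs the same arithmetic. It is a minimal illustration, not code from the article; the population size of 10,000 and the function name predictive_values are arbitrary choices for demonstration.

```python
# Minimal sketch (not from the article): rebuilds the 2x2 tables for a
# hypothetical population of 10,000 people, using the 99% sensitivity
# and 99% specificity figures quoted above.

def predictive_values(sensitivity, specificity, prevalence, population=10_000):
    infected = population * prevalence
    uninfected = population - infected
    true_pos = infected * sensitivity        # infected and test positive
    false_neg = infected - true_pos          # infected but test negative
    true_neg = uninfected * specificity      # uninfected and test negative
    false_pos = uninfected - true_neg        # uninfected but test positive
    ppv = true_pos / (true_pos + false_pos)  # P(infected | positive test)
    npv = true_neg / (true_neg + false_neg)  # P(not infected | negative test)
    return ppv, npv

for prevalence in (0.01, 0.10):
    ppv, npv = predictive_values(0.99, 0.99, prevalence)
    print(f"prevalence {prevalence:.0%}: PPV = {ppv:.1%}, NPV = {npv:.1%}")
```

Run as-is, it prints a PPV of 50.0% at 1% prevalence, matching the point above that a positive result is as likely false as true, rising to about 91.7% at 10% prevalence, while the NPV stays above 99.8% in both scenarios.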
