With the coronavirus exploding across the world, countries such as India and the USA are shouldering some of the highest numbers of cases globally. In this situation, many argue that an essential component of managing the pandemic lies in frequent, rapid, routine testing of large portions of the population – including those who are asymptomatic.

While the early days of the pandemic called for any type of test that could help detect which patients were infected with the virus, we now face a completely new dilemma: we have a variety of tests, some that detect the virus itself and others the antibodies against it. Interpreting these tests has become increasingly confusing to patients and clinicians alike. Public discussion of this issue has brought increased scrutiny of the accuracy of COVID-19 tests, more specifically the rapid tests whose results are revealed within minutes.

What isn’t explained, though, is what testing accuracy actually means and how it aids clinical decision making. Drawing conclusions from these discussions without understanding the underlying concepts is futile. And while we are on the subject of accuracy, one point deserves emphasis: even imperfect test results are essential when it comes to controlling the pandemic.

How so?

With a new disease like COVID-19 and all the uncertainties it brings, there is intense interest in nailing down parameters that would essentially ‘make sense’. However, we must accept that these values are fluid and vary across populations. To grasp this concept, we need to dive headfirst into statistics.

In our current environment, Bayes’ Theorem helps us make sense of this ever-changing situation. It shows us that the usefulness of a test depends not just on how accurate it is but also on how likely someone is to have the condition it is testing for. Let’s further understand this concept by defining a few essential terms:

- **Sensitivity** is the likelihood that a test will detect its target when it is present. A highly sensitive test has a low false negative rate.
- **Specificity** is the likelihood that the test won’t be confused by something other than the intended target. A highly specific test has a low false positive rate.

Often, these two traits are in tension: making a test more specific can make it less sensitive, and vice versa, and it’s almost impossible to make a test that’s perfectly specific and perfectly sensitive at the same time. Simple, right? Now let’s add two more factors to the mix: the **positive predictive value (PPV)** and the **negative predictive value (NPV)**.

- The PPV is the proportion of people who actually have the disease among all those who receive a positive result.
- The NPV is the proportion of people who do not have the disease among all those who receive a negative result.

These indicators depend not just on the characteristics of the test (its sensitivity and specificity) but also on how likely someone is to have the disease in the first place, known as the **pretest probability**.
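These relationships fall straight out of Bayes’ Theorem. Here is a minimal Python sketch (the function name and structure are mine, not from any standard library):

```python
def predictive_values(sensitivity, specificity, pretest_probability):
    """Compute PPV and NPV from test characteristics and pretest probability."""
    p = pretest_probability
    true_pos = sensitivity * p                   # infected who test positive
    false_pos = (1 - specificity) * (1 - p)      # healthy who test positive
    true_neg = specificity * (1 - p)             # healthy who test negative
    false_neg = (1 - sensitivity) * p            # infected who test negative
    ppv = true_pos / (true_pos + false_pos)      # P(disease | positive result)
    npv = true_neg / (true_neg + false_neg)      # P(no disease | negative result)
    return ppv, npv
```

For instance, with 80% sensitivity, 100% specificity, and a 1% pretest probability, this returns a PPV of 100% and an NPV of roughly 99.8%; raise the pretest probability and the NPV falls, which is exactly the effect the examples below illustrate.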

Now, let’s put all these concepts together with two examples: the first with a low pretest probability, the second with a high one. For both, let’s take the sensitivity of a rapid point-of-care COVID-19 test to be 80% and its specificity 100%, fairly reasonable estimates for these sorts of tests.

In example 1, let’s consider 1000 completely asymptomatic people who are tested for the virus, with a low pretest probability of 1% – that is, 10 of them are actually infected. (This kind of testing is essential, as even people with no symptoms are known to transmit the virus, especially if they become sick soon after.) Doing the math using Bayes’ Theorem, we find that the positive predictive value is 100% (all 8 people with positive rapid tests have the virus). Of the 992 people with negative tests, 990 do not have COVID-19 and two do, so the negative predictive value is 99.8%. In plain terms: if your COVID-19 rapid test comes back negative, you have a 2 in 1000 chance of actually having and transmitting the virus – reassuring odds when it comes to decisions like attending school or going back to work.
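The counts in this example can be checked with back-of-the-envelope arithmetic. A sketch in Python, assuming the 1% pretest probability implied by the figures (10 infected out of 1000; variable names are mine):

```python
population = 1000
infected = 10                       # assumed 1% pretest probability
sensitivity, specificity = 0.8, 1.0

true_positives = int(sensitivity * infected)        # 8 infected people detected
false_negatives = infected - true_positives         # 2 infected people missed
true_negatives = int(specificity * (population - infected))  # 990 correct negatives
total_negatives = true_negatives + false_negatives  # 992 negative results in all

npv = true_negatives / total_negatives
print(f"NPV = {npv:.1%}")  # NPV = 99.8%
```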

In example 2, we consider a high pretest probability of 30% and 1000 patients with symptoms consistent with coronavirus infection. Doing the math using Bayes’ Theorem once again, we find that the positive predictive value remains 100% (240 of the 240 with a positive test), but the negative predictive value has dropped to 92% (700 of the 760 with a negative test). An 8% chance that a negative result is wrong is substantial, but it can be countered by repeat testing within 24 hours using the more accurate tests available.
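The same arithmetic with the higher pretest probability reproduces the figures in example 2 (again a sketch, with hypothetical variable names):

```python
population = 1000
infected = int(0.3 * population)    # 30% pretest probability -> 300 infected
sensitivity, specificity = 0.8, 1.0

true_positives = int(sensitivity * infected)        # 240 detected
false_negatives = infected - true_positives         # 60 missed
true_negatives = population - infected              # 700 (specificity is 100%)
total_negatives = true_negatives + false_negatives  # 760 negative results

npv = true_negatives / total_negatives
print(f"NPV = {npv:.0%}, wrong negatives = {1 - npv:.0%}")  # NPV = 92%, wrong negatives = 8%
```

Note how the PPV stays perfect (no false positives with 100% specificity) while the NPV degrades as prevalence rises: the 8% figure is the share of negative results that are false, not a property of the test alone.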

Now that we’ve seen both scenarios, it’s time to understand the implications:

- Even a rapid, inexpensive COVID-19 test with a lower sensitivity of 80% would still be an advance in our efforts to control the pandemic. Moreover, those who are most infectious carry a high amount of virus and are highly unlikely to receive a false-negative rapid test result. This can inform decisions such as whether to go back to school or work.
- Such tests work best when the rate of community transmission is low: a low pretest probability means few cases will be missed. This is also why phased re-openings, social distancing norms, the use of masks and contact tracing are highly recommended.
- Individuals with symptoms of COVID-19 should be given the most sensitive tests available, which may mean multiple types of tests at different time points.

While neither scenario is perfect, both are better than avoiding testing asymptomatic people altogether over concerns about accuracy. Despite its limitations, testing remains one of the most valuable tools for controlling the pandemic.