Test Processing Times
University of Illinois, Fall 2020
New data on the time needed to process each test during the fall semester shows that test processing took substantially longer all semester than the 6 hours the University claimed it would take. The University blamed non-compliant students for the surge in cases during the opening weeks of the fall semester, but test processing times likely were a bigger factor.
The University claimed that it could process 10k or 20k tests each day, with results returned in under 6 hours (see this, this, this, and this). Those assumptions were built into the model prediction of 700 total cases that was used to justify reopening. Consistent with investigative reports, the new data show that the University was not able to process tests as efficiently or in the volume that they claimed they could. And, slow test processing meant that hundreds of students were infectious but unaware of it during the opening weeks of the semester.
Summary of Findings
In this section, I provide summary statistics about processing time over the whole semester and during the first few weeks when cases spiked. These include the median processing time and the proportions of tests processed in under the promised 6 hours.
- 34.7% - Tests processed in less than the promised 6 hours (that is, 65.3% took longer than 6 hours)
- 0 - Number of days during the fall semester for which a majority of test results were returned in under 6 hours
- 11.2 hours - Median test processing time across the entire fall semester (August 17 - November 22)
First 3 Weeks of Class (the surge)
- 1% - Tests processed in under 6 hours
- 57.2% - Tests taking more than 24 hours for processing
- 25.9 hours - Median test processing time
- Hundreds - Students who had tested positive but did not learn their result until 1-3 days later (see graph)
- They likely were at peak infectiousness without knowing they were infected (3-6 days after exposure).
- They were more likely to be students who engaged in riskier activities, and they likely continued those activities while infectious and unaware of it
- These estimates are conservative because they don’t consider additional delays in checking notifications or initiating contact tracing
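The summary figures above come directly from the per-test duration column described in the data notes. A minimal sketch of how such statistics are computed (the rows below are synthetic, not the real data):

```python
import pandas as pd

# Hypothetical per-test processing times in hours (the real file has one
# row per test, with a precomputed "duration" column).
df = pd.DataFrame({"duration": [4.5, 5.8, 7.0, 11.2, 25.9, 30.1, 48.0]})

median_hours = df["duration"].median()
pct_under_6 = (df["duration"] < 6).mean() * 100

print(f"median processing time: {median_hours:.1f} h")
print(f"processed in under 6 hours: {pct_under_6:.1f}%")
```

The same two statistics, restricted to tests from the first three weeks of class, give the surge-period numbers.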
Evidence that the University lacked sufficient test processing capacity
The University repeatedly claimed to be able to process 10k or more tests each day with results in under 5-6 hours, but they never had that capacity (see the overview). The data show that processing time increased with the number of tests conducted on a given day, and it increased over the course of each day as incoming tests exceeded the hourly processing capacity.
- More daily tests were associated with longer test processing times
- r = 0.34 - Correlation between number of tests and median test processing time
- Tests were processed faster on weekends when there were fewer tests
- 12 hours - Median test processing time on weekdays across the whole term
- 7.5 hours - Median test processing time on weekends, when testing was lower
- Tests taken in the morning had shorter test processing times than tests taken in the afternoon
- 9.4 hours - Median test processing time for morning tests across the whole term
- 13.3 hours - Median test processing time for afternoon tests across the whole term
- The University fell behind in processing over the course of each day
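The volume-speed relationship above (r = 0.34) can be checked by aggregating per day and correlating each day's test count with its median duration. A sketch with made-up rows (column names follow the data notes; the values and the resulting r are illustrative only):

```python
import pandas as pd

# Synthetic per-test rows: date of test plus processing duration in hours.
df = pd.DataFrame({
    "orderDate": pd.to_datetime(
        ["2020-08-17"] * 2 + ["2020-08-18"] * 4 + ["2020-08-19"] * 6),
    "duration": [5, 7, 8, 9, 10, 11, 12, 14, 15, 16, 18, 20],
})

# Aggregate to one row per date: test count and median processing time.
daily = df.groupby(df["orderDate"].dt.date)["duration"].agg(
    n_tests="size", median_hours="median")

# Pearson correlation between daily volume and daily median duration.
r = daily["n_tests"].corr(daily["median_hours"])
print(daily)
print(f"r = {r:.2f}")
```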
Median Test Processing Time by Date
The graph below plots the median test processing time (in hours) by date (see the “data notes” section for details). During the first few weeks of the semester, test processing time increased substantially, peaking at a daily median of more than 45 hours; half of the people testing on those days waited more than 45 hours for their results. For example, more than half of the people testing during the day on Friday, September 4 were not notified of their results until Sunday at the earliest. After the first few weeks of classes, test processing times improved, possibly due to reduced testing volume. However, not a single day during the semester had a median test processing time under 6 hours. The University had claimed that they could process 10,000+ tests in under 5-6 hours. They could not.
Median Test Processing Time by Day of Week
The graph below shows the median test processing time as a function of the day of the week (combined across the entire semester). Testing volume was highest on weekdays, and the median test processing time across the entire term was over 10 hours. This graph averages over the entire day, but as the next graph shows, time of day had a big effect as well.
Median Test Processing Time by Time of Day
The graph below shows the median test processing time as a function of the time of day when the test was taken (combined across the entire semester, looking only at the standard 8am-6pm testing times). As more tests came in over the course of the day, processing times got longer. The University was better able to keep up with early-morning tests (when fewer tests arrived), but most people tested between 10am and 3pm, and processing fell behind as tests accumulated each day. The more tests conducted in a day, the more processing slowed over the course of that day (r = 0.43, the correlation between a day's test count and the strength of the within-day association between time of day and processing time). This graph and the day-of-week effects above show that test processing time was longer whenever the number of tests was higher.
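The r = 0.43 figure is a two-step calculation: first measure, for each date, how strongly processing time rose with the check-in hour; then correlate that per-day association with the day's test count. A sketch of those two steps (all rows below are synthetic, constructed so that busier days have a steeper within-day rise):

```python
import pandas as pd

# Synthetic per-test rows: three dates with rising volume and an
# increasingly strong link between check-in hour and duration (made up).
days = {
    "2020-09-01": ([8, 10, 12, 14], [6, 8, 6, 8]),                 # weak link
    "2020-09-02": ([8, 9, 11, 12, 14, 16], [6, 7, 9, 8, 11, 12]),  # stronger
    "2020-09-03": ([8, 9, 10, 11, 13, 15, 16, 17],
                   [6.0, 7.5, 9.0, 10.5, 13.5, 16.5, 18.0, 19.5]), # steep
}
rows = [{"date": d, "hour": h, "duration": t}
        for d, (hours, durs) in days.items()
        for h, t in zip(hours, durs)]
df = pd.DataFrame(rows)

# Step 1: per-date test count and hour-vs-duration correlation.
stats = [{"date": d, "n_tests": len(g),
          "hour_corr": g["hour"].corr(g["duration"])}
         for d, g in df.groupby("date")]
per_day = pd.DataFrame(stats)

# Step 2: correlate daily volume with the strength of the within-day link.
r = per_day["n_tests"].corr(per_day["hour_corr"])
print(per_day)
print(f"r = {r:.2f}")
```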
Percent of Tests Processed in Under 6 Hours
The graph below shows the percentage of tests on each date that were processed in under 6 hours. On no date all semester was the University able to process a majority of tests within the promised 6-hour time frame. For some days during the first two weeks of classes, the University was unable to process any tests that quickly (the percentage was zero or rounded to zero). On weekdays throughout the semester, fewer than 10% of tests were processed in under 6 hours. Even on weekends, when testing was lighter, fewer than a third of test results were processed in less than 6 hours.
Positive but unaware of it
The graph below shows an estimate of the number of people (on each date during the first 4 weeks of class) who were infectious and didn’t know it yet because of long test processing times. For example, on September 5, 107 people who had submitted a positive test (on Thursday, Friday, or Saturday) did not know their result on Saturday. More than half of the tests taken on Friday 9/4 and nearly 90% of those taken on Saturday 9/5 were not processed until Sunday or later.
The graph shows that hundreds of students were infected, but because of long test processing times, they didn’t know it. Students who had submitted a positive test on Friday or Saturday might have partied on both nights without knowing that they were highly infectious. Those students were not violating isolation – they didn’t know they needed to isolate. And the vast majority of them would have isolated themselves had they known their results.
These results show that long test processing times were an order of magnitude more of a problem than the small number of deliberately non-compliant students whom the University claimed were entirely responsible for the surge.
Important notes about the data
- The data were provided by Technology Services at University of Illinois
- The data include tests taken between August 17, 2020 and December 1, 2020
- The data file included three columns
- orderDate: The date and time when a person signed in to provide a saliva sample
- notifyDate: The date and time when the test results were sent via email or push notification.
- duration: The difference between those times in hours. Duration codes the soonest point at which a person could have learned of their test result. If the person did not check their result right away after notification, the time difference between testing and knowledge of the result would be longer than duration.
- orderDate is always in Central Time, but notifyDate was in Eastern Time before September 23 and Central Time thereafter. The duration column corrected for this discrepancy. In my own analyses, I adjusted the notifyDate for tests returned before September 23 so that all times are central, and recomputed duration to verify that it matched (to within rounding error).
- The dataset has one row for each test, but includes only those tests for which both an orderDate and a notifyDate were available. Some tests apparently were missing an orderDate (due to logistics and database issues): the orderDates were based on system logs from the check-in computers, and there were issues with recording/alerting for new check-ins. Those results still were part of the McKinley system (where the notifyDate comes from), but without an orderDate they provide no information about duration, so they were not in this dataset.
- The dataset includes negative and positive results, as well as invalid and inconclusive ones. It does not identify which tests correspond to each category. The University of Illinois dashboard includes only positive and negative tests and does not include invalid or inconclusive ones. It has no information about the timing of individual tests. According to correspondence with Technology Services, since September 22, the median number of inconclusive/invalid tests has been about 90/day, with a maximum of 298 on October 6. Given data management issues, it’s unclear how many inconclusive/invalid tests happened per day prior to September 22, but those numbers apparently were larger (a lot of inconclusive tests early on due to eating/drinking right before the test).
- The dataset does not include results of tests conducted by the DIA for the football and basketball teams (about 1500 tests/week starting in about October). DIA runs their own sample collection, so the SHIELD team does not have accurate information about actual test turnaround times. The data file does include tests by athletes that were collected at a SHIELD testing site.
- The University is finishing a new data management system that should provide cleaner information for the spring. The dashboard data come directly from McKinley’s data, so it’s the authoritative source on the numbers of positive and negative tests.
- There are discrepancies between the number of tests reported on the dashboard and the number of tests in the test timing dataset. For orderDates after September 23, the dashboard consistently reported more tests than are included in the timing file. Prior to September 23, some days show more tests on the dashboard and some show more in the timing data. The data file includes additional inconclusive tests, which would lead to more tests than the dashboard; but it is also missing tests that lacked an orderDate, which could lead to fewer tests than the dashboard. Those factors likely traded off prior to September 23 (when there were more inconclusive results), leading to some days having more tests on the dashboard and others having more in the timing data. After September 23, there were fewer inconclusive results, so the dashboard typically had more tests because roughly 10% of tests lacked an orderDate and were excluded from the timing data.
- For each test in the test timing data set, I calculated the difference between the time a person checked into the testing center and the time when they were notified that their result was available (in hours, including partial hours with no rounding). Note that these test processing times do not include the time between notification and the person actually checking their notifications or logging in to see their result; it includes only the test processing time from start to finish. In some cases, test results arriving in the middle of the night might not be checked until hours later. These test processing times also do not include the time between identifying a positive case and the start of contact tracing. I have no data for how long it took CUPHD (initially) or SHIELD-30 (after the surge) to reach people who had tested positive to give them instructions to isolate and to start contact tracing. Any additional delays would amplify the consequences of slow test processing times, as the models showed.
- When calculating the number of people who were positive but didn’t yet know it, I had to combine the timing data and the dashboard data, because the timing data had no information on which tests were positive and the dashboard data included no information about individual test timing. For each test date, I determined the proportion of tests in the timing data whose results were still unknown on the target date, and multiplied that proportion by the number of positive tests reported on the dashboard for that test date. That gave an estimate of the number of people who had tested positive on that date but hadn’t yet learned their result by the target date. I then summed those estimates across all test days on or before the target date, yielding an estimate of the total number of people who had tested positive but hadn’t yet been notified. These estimates likely are conservative. They assume that people learned of their result the moment the notification was sent, but if the notification was sent in the middle of the night, they likely didn’t see it until the next morning (making the effective test processing time longer). The estimates also use midnight as the cutoff for “next day,” so someone who tested on Tuesday morning and was notified at 11:50pm Tuesday night would not be counted in these tallies, even though the notification came too late for them to learn of it and isolate before Tuesday evening activities.
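The timezone correction described in the notes above amounts to shifting pre-September-23 notifyDate values back one hour (Eastern to Central) before recomputing duration. A sketch, assuming naive timestamps as in the file (column names follow the data notes; the rows are synthetic):

```python
import pandas as pd

# Two synthetic tests: one before the September 23 cutover, one after.
df = pd.DataFrame({
    "orderDate":  pd.to_datetime(["2020-09-01 09:00", "2020-10-01 09:00"]),
    "notifyDate": pd.to_datetime(["2020-09-01 21:00", "2020-10-01 20:00"]),
})

# Before September 23, notifyDate was recorded in Eastern Time; shift it
# back one hour so every timestamp is Central.
cutover = pd.Timestamp("2020-09-23")
early = df["notifyDate"] < cutover
df.loc[early, "notifyDate"] -= pd.Timedelta(hours=1)

# Recompute duration in hours, to verify against the file's duration column.
df["duration"] = (df["notifyDate"] - df["orderDate"]).dt.total_seconds() / 3600
print(df["duration"].tolist())  # [11.0, 11.0]
```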
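The positive-but-unaware estimate in the last note can be written as: for each test date, (share of that date's tests still unreturned by the target date) × (positives reported on the dashboard for that date), summed over all test dates up to the target date. A sketch with made-up numbers (the real inputs are the timing file and the dashboard counts; midnight is the cutoff, as described above):

```python
import pandas as pd

target = pd.Timestamp("2020-09-05")

# Synthetic timing rows: test date and notification date per test.
timing = pd.DataFrame({
    "orderDate":  pd.to_datetime(["2020-09-03"] * 4 + ["2020-09-04"] * 4),
    "notifyDate": pd.to_datetime(["2020-09-04", "2020-09-04", "2020-09-06",
                                  "2020-09-07", "2020-09-06", "2020-09-06",
                                  "2020-09-07", "2020-09-05"]),
})
# Made-up dashboard positives per test date.
positives = {pd.Timestamp("2020-09-03"): 40, pd.Timestamp("2020-09-04"): 60}

unaware = 0.0
for test_date, g in timing.groupby("orderDate"):
    if test_date > target:
        continue
    # Share of this date's tests whose results were still unknown on the
    # target date (notified strictly after it, midnight cutoff).
    share_unknown = (g["notifyDate"] > target).mean()
    unaware += share_unknown * positives[test_date]
print(f"estimated positive-but-unaware on {target.date()}: {unaware:.0f}")
```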