Screen failure data in clinical trials: Are screening logs worth it?
2014 Aug
Journal Article
Authors:
Elm, J.J.;
Palesch, Y.;
Easton, D.;
Lindblad, A.;
Barsan, W.;
Silbergleit, R.;
Conwit, R.;
Dillon, C.;
Farrant, M.;
Battenhouse, H.;
Perlmutter, A.;
Johnston, C.
Secondary:
Clin Trials
Volume:
11
Pagination:
467-472
Issue:
4
PMID:
24925082
Abstract:
BACKGROUND: Clinical trials frequently spend considerable effort collecting data on patients who were assessed for eligibility but not enrolled. The Consolidated Standards of Reporting Trials (CONSORT) guidelines' recommended flow diagram for randomized clinical trials reinforces the belief that collecting screening data is a necessary and worthwhile endeavor. The rationale for collecting screening data includes scientific, trial management, and ethno-socio-cultural reasons.

PURPOSE: We posit that the cost of collecting screening data is not justified, in part because screening data cannot be centrally monitored and verified in the same manner as other clinical trial data.

METHODS: To illustrate the effort and site-to-site variability involved, we analyzed the screening data from the Platelet-Oriented Inhibition in New Transient Ischemic Attack and Minor Ischemic Stroke (POINT) trial, a multicenter, randomized clinical trial of patients with transient ischemic attack or minor ischemic stroke.

RESULTS: Data were collected on over 27,000 patients screened across 172 enrolling sites, 95% of whom were not enrolled. Although the overall rate of return of screen failure logs was high (95%), a considerable number of logs were returned with 'no data to report' (23%), often for administrative reasons rather than because no patients had been screened.

CONCLUSION: Despite attempts to standardize the collection of screening data, differences in site processes make it challenging for multicenter clinical trials to collect those data completely and uniformly. The effort required to centrally collect high-quality data on an extensive number of screened patients may outweigh the data's scientific value. Moreover, the lack of a standardized definition of 'screened' and the challenges of collecting meaningful characteristics for patients who have not signed consent limit the ability to compare across studies and to assess generalizability and selection bias as intended.