Language: English
Introduction: The National Collaborating Centre for Methods and Tools (NCCMT) established its Rapid Evidence Service (RES) to support evidence-informed public health by conducting timely, rapid reviews on priority topics. Artificial intelligence (AI) screening features offer the potential to automate and expedite the review process while reducing unintended human bias or error, but evidence quantifying this impact is limited. This study aims to evaluate how AI compares to manual screening with respect to missed studies, impact on overall review findings, and time required to complete screening.
Methods: Two AI features, Re-Rank and Check Screening Errors, were compared with manual dual screening at the title and abstract stage (DistillerSR, v2.35). As manual screening progressed in each review, project clones were created at likelihood thresholds from 60% to 95%; within each clone, AI screened the remaining references and potential false excludes were flagged. AI-screened results were then compared with manual screening in the original projects to determine how many studies would have been missed at each threshold. The impact of omitting these missed studies on each review's key findings was assessed. Finally, time spent screening was tracked across reviews.
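To make the threshold sweep concrete, the sketch below simulates the comparison in plain Python. It is illustrative only, not DistillerSR's implementation: the Reference fields, the exclusion-likelihood semantics (AI excludes a reference when its predicted likelihood of exclusion meets the threshold), and all data are hypothetical assumptions.

```python
# Minimal sketch, assuming AI assigns each reference a predicted
# likelihood of exclusion and excludes it at or above the threshold.
# All names and data are hypothetical, not the study's data.
import random
from dataclasses import dataclass

@dataclass
class Reference:
    manual_include: bool        # gold standard from dual manual screening
    exclude_likelihood: float   # AI-predicted likelihood of exclusion, 0-1

def sweep_thresholds(refs, thresholds=(0.60, 0.70, 0.80, 0.90, 0.95)):
    """At each threshold, count AI exclusions and false excludes
    (references excluded by AI but included by manual screening)."""
    rows = []
    for t in thresholds:
        excluded = [r for r in refs if r.exclude_likelihood >= t]
        missed = [r for r in excluded if r.manual_include]
        rows.append({
            "threshold": t,
            "n_excluded": len(excluded),
            "pct_excluded": round(100 * len(excluded) / len(refs), 1),
            "n_missed": len(missed),
        })
    return rows

# Toy run with fabricated data:
random.seed(0)
refs = [Reference(manual_include=random.random() < 0.05,
                  exclude_likelihood=random.random()) for _ in range(4100)]
for row in sweep_thresholds(refs):
    print(row)
```

In this framing, raising the threshold makes the AI more conservative (fewer exclusions, fewer missed studies), which is the trade-off the clone comparison is designed to quantify.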
Results: Six rapid reviews were conducted during the study period. In preliminary analysis of one review, AI correctly excluded approximately 60% of references (2,600 of 4,100) at a prediction threshold as low as 80%; total time spent screening that review was 47 hours.
Discussion: AI is a promising support tool for improving screening efficiency and accuracy. Additional study is needed to understand how AI can be most appropriately integrated into rapid review methods.