Rebecca Carlson


Sessions

06-04
11:20
20min
Paper: Anything you can do, AI can do better... Or can it? Comparing ChatGPT's Search Strategy Outputs with Cochrane Review Searches
Emily Jones, Rebecca Carlson

Objective: Previous studies have measured ChatGPT’s performance on literature search tasks. This study assesses ChatGPT’s ability to produce comprehensive search strategies for systematic reviews by comparing AI-generated outputs against published Cochrane review searches on precision and recall.

Methods: We created a test set of 9 PubMed search strategies from recent Cochrane reviews. Using a script, we queried ChatGPT with each Cochrane review’s topic, research question(s), and inclusion criteria to generate a relevant PubMed search strategy. Precision and recall were measured using the Cochrane reviews’ PubMed search strategies and included articles as the gold standard, and the ChatGPT searches were evaluated using PRESS.
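The abstract does not include the querying or scoring code, so the following is only a minimal Python sketch of how precision and recall might be computed once a ChatGPT-generated strategy has been run in PubMed. The function name and all PMIDs are hypothetical placeholders, not the authors’ actual script or data.

```python
# Minimal sketch (not the authors' script): score a GenAI-generated PubMed
# strategy against a Cochrane review's included studies as the gold standard.
# All PMIDs below are illustrative placeholders.

def precision_recall(retrieved: set, relevant: set) -> tuple:
    """Precision = relevant hits / all hits; recall = relevant hits / all relevant."""
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical PMIDs returned by the GenAI strategy, and the PMIDs of the
# studies the Cochrane review actually included.
genai_retrieved = {"31000001", "31000002", "31000003", "31000004"}
cochrane_included = {"31000002", "31000005"}

p, r = precision_recall(genai_retrieved, cochrane_included)
print(f"precision = {p:.2%}, recall = {r:.2%}")  # precision = 25.00%, recall = 50.00%
```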

Results: GenAI search strategies had, on average, lower recall and lower precision than the Cochrane search strategies. The GenAI search strategies had an average recall of 57.6% (range 0% to 100%) and an average precision of 1.51% (range 0% to 4.17%), while the Cochrane search strategies had an average recall of 93.7% and an average precision of 2.39%. PRESS evaluations revealed errors including hallucinated MeSH terms and issues with keywords. These results indicate that ChatGPT could help develop comprehensive literature search strategies for systematic reviews, but not without librarian oversight.

Conclusion: Results of this project provide a current estimation of whether, and to what extent, ChatGPT could be used to develop literature search strategies for systematic reviews. This project adds to the literature on GenAI uses for systematic reviews and informs librarians of the potential of these tools for comprehensive literature search development.

AI
2314
06-05
14:30
20min
Paper: Optimizing Communication and Data Collection for a Systematic Review Team Using Microsoft Power Automate®
Emily Jones, Rebecca Carlson

Background: Libraries with systematic review (SR) services track and collect data on requests to manage workload and to make administrative decisions like hiring or acquiring resources based on demand. Librarians rely on technology, often selected based on institutional subscriptions, for internal tracking, communication, and data collection. However, many libraries rely on manual data entry despite available low-code software like Microsoft Power Automate or Zapier that could automate and optimize team workflows. This case study describes how a SR coordinator used Power Automate flows to automate email reminders, centralize workflows, collect data, and ensure requests were claimed by librarians across a large team.

Description: We created a Power Automate workflow that automatically emails our team when a new request is submitted. The request information is transferred to our tracking system, Microsoft Lists, which is embedded in our Teams site for convenience. Librarians can claim requests and add tags, notes, or files. Finally, the form submission updates a backup Excel file we use for statistics and visualizations. These processes centralize and automate information, so team members do not have to locate or update it manually.
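Power Automate flows are built in a no-code designer, so there is no source to reproduce here; the sketch below simply restates the flow’s event-driven sequence in plain Python, with hypothetical stub functions standing in for the form trigger and the email, Lists, and Excel connector actions described above.

```python
# Illustration only: the real implementation is a no-code Power Automate flow.
# Each stub below stands in for a connector action described in the abstract.

def email_team(request: dict) -> None:
    print(f"[email] New SR request submitted: {request['title']}")

def add_to_tracking_list(request: dict) -> None:
    print(f"[Lists] Added '{request['title']}' (unclaimed) to the tracking list")

def append_to_backup_workbook(request: dict) -> None:
    print(f"[Excel] Appended '{request['title']}' to the statistics backup")

def on_form_submitted(request: dict) -> None:
    """Mirrors the flow: notify the team, centralize tracking, keep a backup."""
    email_team(request)
    add_to_tracking_list(request)
    append_to_backup_workbook(request)

on_form_submitted({"title": "Scoping review: telehealth uptake", "requester": "jdoe"})
```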

Conclusion: We demonstrate how to optimize and integrate existing tools using low-code software. This strategy is not exclusive to Microsoft and is transferable to Google Workspace or other major office management software. Additional integrations, including Microsoft Planner, are available for those who prefer Kanban-style tools.

Library Services & Management
2306/2309
06-05
16:00
0min
Poster: Effect of citation numbers and team members on the likelihood of and time needed to complete screening for systematic and scoping reviews
Emily Jones, Rebecca Carlson

Objective: To identify the effect that the total number of citations and the number of team members have on the likelihood of completing screening and the time needed to do so.

Methods: We obtained institutional review data for a large research university from Covidence. Data included review name, type, and area; dates created and last active; number of collaborators; presence of librarian collaboration; and the number of citations imported, screened, and removed at each step. Data were cleaned to remove items that were not true reviews and were analyzed using linear regression and independent-samples Mann-Whitney U tests in SPSS. Outcomes included the effect of the total number of citations, the number of citations per collaborator, and librarian collaboration on the percentage screened and the time needed to complete title/abstract and full-text screening.
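The analysis was run in SPSS; purely as an illustration, an analogous computation in Python with SciPy might look like the sketch below. The variable names and values are hypothetical placeholders, not the study data.

```python
# Analogous sketch of the two analyses (the actual work was done in SPSS).
# Placeholder values only; not the study data.
from scipy.stats import linregress, mannwhitneyu

# Linear regression: citations per collaborator vs. % of titles/abstracts screened.
citations_per_collaborator = [150, 400, 800, 1200, 2500, 4000]
pct_title_abstract_screened = [100, 95, 80, 60, 35, 10]
fit = linregress(citations_per_collaborator, pct_title_abstract_screened)
print(f"slope={fit.slope:.4f}, r={fit.rvalue:.2f}, p={fit.pvalue:.4f}")

# Mann-Whitney U: % of full texts screened, with vs. without a librarian collaborator.
with_librarian = [70, 85, 60, 90, 75]
without_librarian = [95, 88, 100, 92, 97]
stat, p = mannwhitneyu(with_librarian, without_librarian, alternative="two-sided")
print(f"U={stat}, p={p:.4f}")
```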

Results: Teams with fewer total citations and fewer citations per collaborator were more likely to complete title/abstract and full-text screening, and they finished the screening process faster. This relationship was stronger for the number of citations per collaborator than for the total number of citations alone. There was no significant difference in the percentage of titles/abstracts screened between reviews with and without librarian collaboration; however, reviews without librarian collaboration had a significantly higher median percentage of full texts screened.

Conclusions: This study allows librarians to provide more informed guidance to teams on elements that may increase the likelihood of screening completion for systematic and scoping reviews. It emphasizes the importance of narrowing the scope of a review or increasing the size of the team to make screening completion more achievable.

Knowledge Synthesis
Great Hall