Interesting publications about our profession
Here you will find interesting publications about our profession. Do you have a suggestion of your own? Send it to w.schats@nki.nl
Where possible, each article lists who can supply it. If nothing is listed but you can supply it, let us know; that prevents our colleagues from each ordering the article separately.
Melissa L Rethlefsen, Sara Schroter, Lex M Bouter, Jamie J Kirkham, David Moher, Ana Patricia Ayala, David Blanco, Tara J Brigham, Holly K Grossetta Nardini, Shona Kirtley, Kate Nyhan, Whitney Townsend, Maurice Zeegers. Improving peer review of systematic reviews and related review types by involving librarians and information specialists as methodological peer reviewers: a randomised controlled trial. BMJ Evid Based Med. 2025 Jul 21;30(4):241-249. doi: 10.1136/bmjebm-2024-113527.
Abstract
Objective: To evaluate the impact of adding librarians and information specialists (LIS) as methodological peer reviewers to the formal journal peer review process on the quality of search reporting and risk of bias in systematic review searches in the medical literature.
Design: Pragmatic two-group parallel randomised controlled trial.
Setting: Three biomedical journals.
Participants: Systematic reviews and related evidence synthesis manuscripts submitted to The BMJ, BMJ Open and BMJ Medicine and sent out for peer review from 3 January 2023 to 1 September 2023. Randomisation (allocation ratio, 1:1) was stratified by journal and used permuted blocks (block size=4). Of 2670 manuscripts sent to peer review during study enrollment, 400 met inclusion criteria and were randomised (62 The BMJ, 334 BMJ Open, 4 BMJ Medicine). By 2 January 2024, 76 manuscripts had been revised and resubmitted in the intervention group and 90 in the control group.
Interventions: All manuscripts followed usual journal practice for peer review, but those in the intervention group had an additional (LIS) peer reviewer invited.
Main outcome measures: The primary outcomes are the differences in first revision manuscripts between intervention and control groups in the quality of reporting and risk of bias. Quality of reporting was measured using four prespecified PRISMA-S items. Risk of bias was measured using ROBIS Domain 2. Assessments were done in duplicate and assessors were blinded to group allocation. Secondary outcomes included differences between groups for each individual PRISMA-S and ROBIS Domain 2 item. The difference in the proportion of manuscripts rejected as the first decision post-peer review between the intervention and control groups was an additional outcome.
Results: Differences in the proportion of adequately reported searches (4.4% difference, 95% CI: -2.0% to 10.7%) and risk of bias in searches (0.5% difference, 95% CI: -13.7% to 14.6%) showed no statistically significant differences between groups. By 4 months post-study, 98 intervention and 70 control group manuscripts had been rejected after peer review (13.8% difference, 95% CI: 3.9% to 23.8%).
Conclusions: Inviting LIS peer reviewers did not impact adequate reporting or risk of bias of searches in first revision manuscripts of biomedical systematic reviews and related review types, though LIS peer reviewers may have contributed to a higher rate of rejection after peer review.
https://pubmed.ncbi.nlm.nih.gov/40074237
https://doi.org/10.1136/bmjebm-2024-113527
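The allocation scheme described above (1:1 ratio, stratified by journal, permuted blocks of size 4) can be illustrated with a short sketch. This is a generic illustration of stratified permuted-block randomisation, not the trial's actual code; only the per-journal counts are taken from the abstract.

```python
import random

def permuted_block_allocation(n_manuscripts, block_size=4, seed=0):
    """Generate a 1:1 allocation sequence using permuted blocks."""
    rng = random.Random(seed)
    allocations = []
    while len(allocations) < n_manuscripts:
        # Each block holds equal numbers of both arms, in shuffled order,
        # so group sizes can never drift more than block_size/2 apart.
        block = ["intervention", "control"] * (block_size // 2)
        rng.shuffle(block)
        allocations.extend(block)
    return allocations[:n_manuscripts]

# Stratified: an independent sequence per journal (counts from the abstract).
strata = {"The BMJ": 62, "BMJ Open": 334, "BMJ Medicine": 4}
for seed, (journal, n) in enumerate(strata.items()):
    sequence = permuted_block_allocation(n, seed=seed)
    print(journal, sequence.count("intervention"), "vs", sequence.count("control"))
```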
Amaneh Dadashi, Vahideh Zarea Gavgani, Sakineh Hajebrahimi, Mina Mahami-Oskouei. Comparing the performance of librarians and medical specialists in retrieving clinical evidence: an observational study. Med Ref Serv Q. 2025 Apr-Jun;44(2):169-186. doi: 10.1080/02763869.2025.2471886. Epub 2025 May 19.
Abstract
Access to precise and reliable scientific evidence is one of the fundamental principles of Evidence-Based Medicine (EBM) in clinical decision-making processes. Medical librarians, by employing advanced search and information retrieval techniques, play a pivotal role in accessing such evidence. This observational study compared the search and evidence retrieval behaviors of two groups: medical librarians and medical specialists familiar with EBM and systematic reviews. The study population consisted of 40 participants (20 medical librarians and 20 medical specialists), whose performance in retrieving the best available evidence from credible sources was evaluated using two distinct clinical scenarios. A researcher-developed checklist, created in accordance with the Guidelines for Evaluating Evidence-Based Search Strategies, was used to assess participants’ search performance. The findings revealed that medical librarians employed structured search strategies and were more successful in retrieving accurate evidence. They consistently utilized structured search strategies, field-specific search tools, and narrowing techniques in all cases. In contrast, medical specialists spent less time on searches and exhibited a greater tendency to use natural language terms in their search queries. Medical specialists did not systematically employ controlled vocabulary or place keywords in specific fields, such as titles, keywords, or abstracts. In conclusion, librarians’ expertise in accessing the best available evidence underscores their crucial role in supporting medical specialists in obtaining and implementing evidence, thereby improving the quality and reliability of evidence-based practices in healthcare settings.
https://pubmed.ncbi.nlm.nih.gov/40387123
https://doi.org/10.1080/02763869.2025.2471886
David Petersen, Emily Harris. Trends in Medical and Health Sciences Librarianship: A Comparative Analysis of Job Postings, Salary and Geographic Location, 2022 – 2024. Med Ref Serv Q. 2025 Apr-Jun;44(2):187-198. doi: 10.1080/02763869.2025.2489935. Epub 2025 Apr 14.
Abstract
Job postings for medical and health sciences librarians provide valuable data for those seeking a better understanding of the evolving field of librarianship. Our data indicate a decrease in the number of postings from 2022 to 2024, a modest increase in the percentage of postings advertising remote/hybrid work, an increase in the average minimum posted salary, and a majority of postings focused on one or more public service components of library services. Utilizing this data provides a more complete picture of a profession in transition.
https://pubmed.ncbi.nlm.nih.gov/40226974
https://doi.org/10.1080/02763869.2025.2489935
Kyle Robinson, Karen Bontekoe, Joanne Muellenbach. Integrating PICO principles into generative artificial intelligence prompt engineering to enhance information retrieval for medical librarians. J Med Libr Assoc. 2025 Apr 18;113(2):184-188. doi: 10.5195/jmla.2025.2022.
Abstract
Prompt engineering, an emergent discipline at the intersection of Generative Artificial Intelligence (GAI), library science, and user experience design, presents an opportunity to enhance the quality and precision of information retrieval. An innovative approach applies the widely understood PICO framework, traditionally used in evidence-based medicine, to the art of prompt engineering. This approach is illustrated using the “Task, Context, Example, Persona, Format, Tone” (TCEPFT) prompt framework as an example. TCEPFT lends itself to a systematic methodology by incorporating elements of task specificity, contextual relevance, pertinent examples, personalization, formatting, and tonal appropriateness in a prompt design tailored to the desired outcome. Frameworks like TCEPFT offer substantial opportunities for librarians and information professionals to streamline prompt engineering and refine iterative processes. This practice can help information professionals produce consistent and high-quality outputs. Library professionals must embrace a renewed curiosity and develop expertise in prompt engineering to stay ahead in the digital information landscape and maintain their position at the forefront of the sector.
https://pubmed.ncbi.nlm.nih.gov/40342302
https://doi.org/10.5195/jmla.2025.2022
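The TCEPFT elements map naturally onto a fill-in template. The sketch below is a minimal Python illustration of that idea; the field contents are invented examples and do not come from the article.

```python
from dataclasses import dataclass

@dataclass
class TCEPFTPrompt:
    """Task, Context, Example, Persona, Format, Tone prompt scaffold."""
    task: str
    context: str
    example: str
    persona: str
    format: str
    tone: str

    def render(self) -> str:
        # Emit one labeled line per element, in TCEPFT order.
        return "\n".join(
            f"{field.capitalize()}: {value}"
            for field, value in vars(self).items()
        )

prompt = TCEPFTPrompt(
    task="Suggest MeSH terms and free-text synonyms for a search on statin adherence.",
    context="The search supports a systematic review at an academic medical library.",
    example="For 'heart attack': Myocardial Infarction [MeSH]; 'heart attack'; MI.",
    persona="You are an experienced medical librarian.",
    format="A bulleted list grouped by concept.",
    tone="Concise and professional.",
)
print(prompt.render())
```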
Ivan Portillo, David Carson. Making the most of Artificial Intelligence and Large Language Models to support collection development in health sciences libraries. J Med Libr Assoc. 2025 Jan 14;113(1):92-93. doi: 10.5195/jmla.2025.2079
Abstract
This project investigated the potential of generative AI models in aiding health sciences librarians with collection development. Researchers at Chapman University’s Harry and Diane Rinker Health Science campus evaluated four generative AI models (ChatGPT 4.0, Google Gemini, Perplexity, and Microsoft Copilot) over six months starting in March 2024. Two prompts were used: one to generate recent eBook titles in specific health sciences fields and another to identify subject gaps in the existing collection. The first prompt revealed inconsistencies across models, with Copilot and Perplexity providing sources but also inaccuracies. The second prompt yielded more useful results, with all models offering helpful analysis and accurate Library of Congress call numbers. The findings suggest that Large Language Models (LLMs) are not yet reliable as primary tools for collection development due to inaccuracies and hallucinations. However, they can serve as supplementary tools for analyzing subject coverage and identifying gaps in health sciences collections.
https://pubmed.ncbi.nlm.nih.gov/39975505
https://doi.org/10.5195/jmla.2025.2079
Mallory N Blasingame, Taneya Y Koonce, Annette M Williams, Dario A Giuse, Jing Su, Poppy A Krump, Nunzia Bettinsoli Giuse. Evaluating a large language model’s ability to answer clinicians’ requests for evidence summaries. J Med Libr Assoc. 2025 Jan 14;113(1):65-77. doi: 10.5195/jmla.2025.1985
Abstract
Objective: This study investigated the performance of a generative artificial intelligence (AI) tool using GPT-4 in answering clinical questions in comparison with medical librarians’ gold-standard evidence syntheses.
Methods: Questions were extracted from an in-house database of clinical evidence requests previously answered by medical librarians. Questions with multiple parts were subdivided into individual topics. A standardized prompt was developed using the COSTAR framework. Librarians submitted each question into aiChat, an internally managed chat tool using GPT-4, and recorded the responses. The summaries generated by aiChat were evaluated on whether they contained the critical elements used in the established gold-standard summary of the librarian. A subset of questions was randomly selected for verification of references provided by aiChat.
Results: Of the 216 evaluated questions, aiChat’s response was assessed as “correct” for 180 (83.3%) questions, “partially correct” for 35 (16.2%) questions, and “incorrect” for 1 (0.5%) question. No significant differences were observed in question ratings by question category (p=0.73). For a subset of 30% (n=66) of questions, 162 references were provided in the aiChat summaries, and 60 (37%) were confirmed as nonfabricated.
Conclusions: Overall, the performance of a generative AI tool was promising. However, many included references could not be independently verified, and attempts were not made to assess whether any additional concepts introduced by aiChat were factually accurate. Thus, we envision this being the first of a series of investigations designed to further our understanding of how current and future versions of generative AI can be used and integrated into medical librarians’ workflow.
https://pubmed.ncbi.nlm.nih.gov/39975503
https://doi.org/10.5195/jmla.2025.1985
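aiChat is an internally managed tool, so its interface is not public. The sketch below shows the general pattern the abstract describes: wrapping each clinical question in a standardized COSTAR-structured prompt (commonly expanded as Context, Objective, Style, Tone, Audience, Response) and submitting it to a GPT-4 model, here via the standard OpenAI Python client as a stand-in. All field contents are invented.

```python
from openai import OpenAI  # public client, standing in for the internal aiChat tool

COSTAR_TEMPLATE = """\
Context: {context}
Objective: {objective}
Style: {style}
Tone: {tone}
Audience: {audience}
Response: {response}"""

def answer_clinical_question(question: str) -> str:
    prompt = COSTAR_TEMPLATE.format(
        context="You are answering an evidence request at an academic medical center.",
        objective=f"Summarize the best available evidence on: {question}",
        style="An evidence summary as a medical librarian would prepare it.",
        tone="Neutral and factual; state uncertainty explicitly.",
        audience="Practicing clinicians.",
        response="A short narrative summary followed by a numbered reference list.",
    )
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    reply = client.chat.completions.create(
        model="gpt-4",  # placeholder for whichever GPT-4 deployment is available
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content
```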
Boglarka Huddleston, Colleen Cuddy. Leveraging AI tools for streamlined library event planning: a case study from Lane Medical Library. J Med Libr Assoc. 2025 Jan 14;113(1):88-89. doi: 10.5195/jmla.2025.2087
Abstract
Health sciences and hospital libraries often face challenges in planning and organizing events due to limited resources and staff. At Stanford School of Medicine’s Lane Library, librarians turned to artificial intelligence (AI) tools to address this issue and successfully manage various events, from small workshops to larger, more complex conferences. This article presents a case study on how to effectively integrate generative AI tools into the event planning process, improving efficiency and freeing staff to focus on higher-level tasks.
https://pubmed.ncbi.nlm.nih.gov/39975501
https://doi.org/10.5195/jmla.2025.2087
Katherine B Majewski, Jessi Van Der Volgen. The National Library of Medicine (NLM) Learning Resources Database. Med Ref Serv Q. 2024 Oct-Dec;43(4):326-334. doi: 10.1080/02763869.2024.2414129.
Abstract
The National Library of Medicine (NLM) Learning Resources Database provides access to more than 500 educational resources on NLM products and services, including videos, webinars, and tutorials. The database includes resources designed primarily for librarians, health educators, researchers, and clinicians, for finding biomedical literature, research data, and chemical and genetic information. You can search by keyword, subject, language, and audience, and access materials directly or download them for reuse. Resources are reviewed at least annually, and most materials are in the public domain. Features of the site include interactive search options, topic guides, and upcoming training events. NLM also offers API access for integrating resources into other websites.
https://pubmed.ncbi.nlm.nih.gov/39495551
https://doi.org/10.1080/02763869.2024.2414129
Glyneva Bradley-Ridout, Robin Parker, Lindsey Sikora, Andrea Quaiattini, Kaitlin Fuller, Margaret Nevison, Erica Nekolaichuk. Exploring librarians’ practices when teaching advanced searching for knowledge synthesis: results from an online survey. J Med Libr Assoc. 2024 Jul 1;112(3):238-249. doi: 10.5195/jmla.2024.1870.
Abstract
Objective: There is little research available regarding the instructional practices of librarians who support students completing knowledge synthesis projects. This study addresses this research gap by identifying the topics taught, approaches, and resources that academic health sciences librarians employ when teaching students how to conduct comprehensive searches for knowledge synthesis projects in group settings.
Methods: This study applies an exploratory-descriptive design using online survey data collection. The final survey instrument included 31 open, closed, and frequency-style questions.
Results: The survey received responses from 114 participants, 74 of whom fell within the target population. Key results include shared motivations for teaching in groups, including student learning and curriculum requirements, as well as popular types of instruction, such as single-session seminars, and teaching techniques, such as lectures and live demos.
Conclusion: This research demonstrates the scope and coverage of librarian-led training in the knowledge synthesis research landscape. Although search-related topics such as Boolean logic were taught most frequently, librarians report teaching across the review process, including methods and reporting. Live demos and lectures were the most reported teaching approaches, whereas gamification and student-driven learning were used rarely. Our results suggest that librarians’ application of formal pedagogical approaches when teaching knowledge synthesis may be under-utilized, as most respondents did not report using any formal instructional framework.
https://pubmed.ncbi.nlm.nih.gov/39308911
https://doi.org/10.5195/jmla.2024.1870
Steven A Lehr, Aylin Caliskan, Suneragiri Liyanage, Mahzarin R Banaji. ChatGPT as Research Scientist: Probing GPT’s capabilities as a Research Librarian, Research Ethicist, Data Generator, and Data Predictor. Proc Natl Acad Sci U S A. 2024 Aug 27;121(35):e2404328121. doi: 10.1073/pnas.2404328121.
Abstract
How good a research scientist is ChatGPT? We systematically probed the capabilities of GPT-3.5 and GPT-4 across four central components of the scientific process: as a Research Librarian, Research Ethicist, Data Generator, and Novel Data Predictor, using psychological science as a testing field. In Study 1 (Research Librarian), unlike human researchers, GPT-3.5 and GPT-4 hallucinated, authoritatively generating fictional references 36.0% and 5.4% of the time, respectively, although GPT-4 exhibited an evolving capacity to acknowledge its fictions. In Study 2 (Research Ethicist), GPT-4 (though not GPT-3.5) proved capable of detecting violations like p-hacking in fictional research protocols, correcting 88.6% of blatantly presented issues, and 72.6% of subtly presented issues. In Study 3 (Data Generator), both models consistently replicated patterns of cultural bias previously discovered in large language corpora, indicating that ChatGPT can simulate known results, an antecedent to usefulness for both data generation and skills like hypothesis generation. Contrastingly, in Study 4 (Novel Data Predictor), neither model was successful at predicting new results absent in their training data, and neither appeared to leverage substantially new information when predicting more vs. less novel outcomes. Together, these results suggest that GPT is a flawed but rapidly improving librarian, a decent research ethicist already, capable of data generation in simple domains with known characteristics but poor at predicting novel patterns of empirical data to aid future experimentation.
https://pubmed.ncbi.nlm.nih.gov/39163339
https://doi.org/10.1073/pnas.2404328121
Kaitlin Fender Throgmorton, Natalia Festa, Michelle Doering, Christopher R Carpenter, Thomas M Gill. Enhancing the quality and reproducibility of research: How to work effectively with medical and data librarians. J Am Geriatr Soc. 2024 Mar;72(3):965-970. doi: 10.1111/jgs.18741. Epub 2024 Jan 13.
https://pubmed.ncbi.nlm.nih.gov/38217346
https://doi.org/10.1111/jgs.18741
Duncan A Q Moore, Ohid Yaqub, Bhaven N Sampat. Manual versus machine: How accurately does the Medical Text Indexer (MTI) classify different document types into disease areas? PLoS One. 2024 Mar 13;19(3):e0297526. doi: 10.1371/journal.pone.0297526. eCollection 2024.
Abstract
The Medical Subject Headings (MeSH) thesaurus is a controlled vocabulary developed by the U.S. National Library of Medicine (NLM) for classifying journal articles. It is increasingly used by researchers studying medical innovation to classify text into disease areas and other categories. Although this process was once manual, human indexers are now assisted by algorithms that automate some of the indexing process. NLM has made one of their algorithms, the Medical Text Indexer (MTI), available to researchers. MTI can be used to easily assign MeSH descriptors to arbitrary text, including from document types other than publications. However, the reliability of extending MTI to other document types has not been studied directly. To assess this, we collected text from grants, patents, and drug indications, and compared MTI’s classification to expert manual classification of the same documents. We examined MTI’s recall (how often correct terms were identified) and found that MTI identified 78% of expert-classified MeSH descriptors for grants, 78% for patents, and 86% for drug indications. This high recall could be driven merely by excess suggestions (at an extreme, all diseases being assigned to a piece of text); therefore, we also examined precision (how often identified terms were correct) and found that most MTI outputs were also identified by expert manual classification: precision was 53% for grant text, 73% for patent text, and 64% for drug indications. Additionally, we found that recall and precision could be improved by (i) utilizing ranking scores provided by MTI, (ii) excluding long documents, and (iii) aggregating to higher MeSH categories. For simply detecting the presence of any disease, MTI showed > 94% recall and > 87% precision. Our overall assessment is that MTI is a potentially useful tool for researchers wishing to classify texts from a variety of sources into disease areas.
https://pubmed.ncbi.nlm.nih.gov/38478542
https://doi.org/10.1371/journal.pone.0297526
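Recall and precision as used here are simple set comparisons between MTI’s suggested descriptors and the expert-assigned ones. A minimal sketch, with invented descriptor sets:

```python
def recall_precision(predicted: set[str], expert: set[str]) -> tuple[float, float]:
    """Recall: share of expert descriptors that MTI suggested.
    Precision: share of MTI suggestions that the experts also assigned."""
    hits = predicted & expert
    recall = len(hits) / len(expert) if expert else 0.0
    precision = len(hits) / len(predicted) if predicted else 0.0
    return recall, precision

# Invented example: disease descriptors for one grant abstract.
expert_terms = {"Neoplasms", "Breast Neoplasms", "Drug Therapy"}
mti_terms = {"Neoplasms", "Breast Neoplasms", "Immunotherapy"}
r, p = recall_precision(mti_terms, expert_terms)
print(f"recall={r:.2f} precision={p:.2f}")  # recall=0.67 precision=0.67
```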
Joey Nicholson, Caitlin Plovnick, Cees van der Vleuten, Anique B H de Bruin, Adina Kalet. Librarian-Led Assessment of Medical Students’ Evidence-Based Medicine Competency: Facilitators and Barriers. Perspect Med Educ. 2024 Mar 5;13(1):160-168. doi: 10.5334/pme.1145. eCollection 2024.
Abstract
Introduction: We must ensure, through rigorous assessment, that physicians have the evidence-based medicine (EBM) skills to identify and apply the best available information in their clinical work. However, there is limited guidance on how to assess EBM competency. With a better understanding of their current role in EBM education, Health Sciences Librarians (HSLs), as experts, should be able to contribute to the assessment of medical students’ EBM competence. The purpose of this study is to explore HSLs’ perspectives on EBM assessment practices, covering both the current state and potential future activities.
Methods: We conducted focus groups with librarians from across the United States to explore their perceptions of assessing EBM competence in medical students. Participants had been trained as raters of EBM competence as part of a novel Objective Structured Clinical Examination (OSCE). This OSCE was just the starting point; the discussion covered current EBM assessment and possibilities for expanded responsibilities at the participants’ own institutions. We used a reflexive thematic analysis approach to construct themes from our conversations.
Results: We constructed eight themes in four broad categories that influence the success of librarians being able to engage in effective assessment of EBM: administrative, curricular, medical student, and librarian.
Conclusion: Our results inform medical school leadership by pointing out the modifiable factors that enable librarians to be more engaged in conducting effective assessment. They highlight the need for novel tools, like EBM OSCEs, that can address multiple barriers and create opportunities for deeper integration of librarians into assessment processes.
https://pubmed.ncbi.nlm.nih.gov/38464960
https://doi.org/10.5334/pme.1145
Colleen Pawliuk, Shannon Cheng, Alex Zheng, Harold Hal Siden. Librarian involvement in systematic reviews was associated with higher quality of reported search methods: a cross-sectional survey. J Clin Epidemiol. 2024 Feb;166:111237. doi: 10.1016/j.jclinepi.2023.111237. Epub 2023 Dec 8.
Abstract
Objectives: Systematic reviews (SRs) are considered the gold standard of evidence, but many published SRs are of poor quality. This study identifies how librarian involvement in SRs is associated with the quality of reported search methods and examines why authors choose not to involve a librarian in SRs.
Study design and setting: We searched databases for SRs that were published by a first or last author affiliated to a Vancouver hospital or biomedical research site and published between 2015 and 2019. Corresponding authors of included SRs were contacted through an e-mail survey to determine if a librarian was involved in the SR. If a librarian was involved in the SR, the survey asked at what level the librarian was involved and if a librarian was not involved, the survey asked why. Quality of reported search methods was scored independently by two reviewers. A linear regression model was used to determine the association between quality of reported search methods scores and the level at which a librarian was involved in the study.
Results: One hundred ninety-one SRs were included in this study, and 118 (62%) of the SR authors indicated whether a librarian was involved in the SR. SRs that included a librarian as a co-author had a 15.4% higher quality assessment score than SRs that did not include a librarian. Most authors (27; 75%) who did not include a librarian in their SR did not do so because they did not believe it was necessary.
Conclusion: A higher level of librarian involvement in SRs is correlated with higher scores for reported search methods. Greater advocacy or changes at the policy level are necessary to increase librarian involvement in SRs and, as a result, the quality of their search methods.
https://pubmed.ncbi.nlm.nih.gov/38072177
https://doi.org/10.1016/j.jclinepi.2023.111237
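The analysis regresses the quality-of-reported-search-methods score on the level of librarian involvement. Below is a minimal sketch of such a model using statsmodels, with invented data and an assumed ordinal coding of involvement; the paper’s exact coding and covariates may differ.

```python
import numpy as np
import statsmodels.api as sm

# Invented data: involvement coded 0 = none, 1 = consulted, 2 = co-author.
involvement = np.array([0, 0, 1, 1, 2, 2, 2, 0, 1, 2])
quality_score = np.array([55, 60, 68, 72, 80, 85, 78, 58, 70, 83])  # percent

X = sm.add_constant(involvement)       # intercept plus involvement level
model = sm.OLS(quality_score, X).fit()
print(model.params)                    # slope = estimated score gain per level
```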
Brady D Lund, Daud Khan, Mayank Yuvaraj. ChatGPT in medical libraries, possibilities and future directions: An integrative review. Health Info Libr J. 2024 Mar;41(1):4-15. doi: 10.1111/hir.12518.
Abstract
Background: The emergence of the artificial intelligence chatbot ChatGPT in November 2022 has garnered substantial attention across diverse disciplines. Despite widespread adoption in various sectors, the exploration of its application in libraries, especially within the medical domain, remains limited.
Aims/objectives: Many areas of interest, such as the use of ChatGPT in medical libraries, remain unexplored; this review aims to synthesise what is currently known in order to identify gaps and stimulate further research.
Methods: Employing Cooper’s integrative review method, this study involves a comprehensive analysis of existing literature on ChatGPT and its potential implementations within library contexts.
Results: A systematic literature search across various databases yielded 166 papers, of which 30 were excluded as irrelevant. After abstract reviews and methodological assessments, 136 articles were selected. The Critical Appraisal Skills Programme qualitative checklist further narrowed the set to 29 papers, which form the basis of the present study. The literature analysis reveals diverse applications of ChatGPT in medical libraries, including aiding users in finding relevant medical information, answering queries, providing recommendations and facilitating access to resources. Potential challenges and ethical considerations associated with ChatGPT in this context are also highlighted.
Conclusion: Positioned as a review, our study elucidates the applications of ChatGPT in medical libraries and discusses relevant considerations. The integration of ChatGPT into medical library services holds promise for enhancing information retrieval and user experience, benefiting library users and the broader medical community.
https://pubmed.ncbi.nlm.nih.gov/38200693
https://doi.org/10.1111/hir.12518
BMI news