Scores of responses by doctors and ChatGPT on the Swedish family medicine specialist exam
SND-ID: 2023-311. Version: 1. DOI: https://doi.org/10.5878/j8jh-5128
Creator/Principal investigator(s)
Rasmus Arvidsson - University of Gothenburg, Institute of Medicine, School of Public Health and Community Medicine
Ronny Gunnarsson - University of Gothenburg, Institute of Medicine, School of Public Health and Community Medicine
Artin Entezarjou - University of Gothenburg, Institute of Medicine, School of Public Health and Community Medicine
David Sundemo - University of Gothenburg, Institute of Medicine, School of Public Health and Community Medicine
Carl Wikberg - University of Gothenburg, Institute of Medicine, School of Public Health and Community Medicine
Research principal
University of Gothenburg - Institute of Medicine
Description
Scores from zero to ten for each case from the exam years 2017-2022. For more details, see README.txt.
Data contains personal data
No
Language
Population
Anonymous responses from SFAM's specialist exam in general medicine 2017-2022 and responses from ChatGPT to the same cases.
Time Method
Study design
Observational study
Description of study design
ChatGPT’s scores were compared with those of real doctors using cases from the Swedish family medicine specialist exam.
Sampling procedure
2. Top-tier doctor responses - a response for each case chosen by the exam reviewers as an example of a very good response.
3. ChatGPT responses - responses provided by ChatGPT-4, August 3 Version 2023.
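As an illustration only, the minimal Python sketch below shows how per-case scores could be summarised by response group (doctors versus ChatGPT). The file name and column names used here are assumptions, not the dataset's actual layout; see README.txt for the real file structure.

import pandas as pd

# Hypothetical file and column names; the actual layout is described in README.txt.
# Assumed columns: year, case, group (e.g. "doctor", "top_tier", "chatgpt"), score (0-10).
scores = pd.read_csv("scores.csv")

# Mean, standard deviation and count of scores per response group across 2017-2022.
summary = scores.groupby("group")["score"].agg(["mean", "std", "count"])
print(summary)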
Time period(s) investigated
2017 – 2022
Data format / data structure
Geographic spread
Geographic location: Sweden
Responsible department/unit
Institute of Medicine
Research area
Other medical engineering (Standard för svensk indelning av forskningsämnen 2011)
General practice (Standard för svensk indelning av forskningsämnen 2011)
Other medical and health sciences not elsewhere specified (Standard för svensk indelning av forskningsämnen 2011)