Please use this identifier to cite or link to this item: http://dx.doi.org/10.25673/119385
Title: Large language models outperform humans in identifying neuromyths but show sycophantic behavior in applied contexts
Author(s): Richter, Eileen
Spitzer, Markus Wolfgang Hermann
Morgan, Annabelle
Frede, Luisa
Weidlich, Joshua
Möller, Korbinian
Publication date: 2025
Type: Article
Language: English
Abstract: Background: Neuromyths are widespread among educators, raising concerns about misconceptions regarding the (neural) principles underlying learning in the educator population. With the increasing use of large language models (LLMs) in education, educators increasingly rely on them for lesson planning and professional development. Therefore, if LLMs correctly identify neuromyths, they may help to dispute related misconceptions. Method: We evaluated whether LLMs correctly identify neuromyths and whether they alert educators to neuromyths in applied contexts, that is, when users ask questions that contain related misconceptions. Additionally, we examined whether explicitly prompting LLMs to base their answers on scientific evidence or to correct unsupported assumptions would decrease errors in identifying neuromyths. Results: LLMs outperformed humans in identifying neuromyth statements as used in previous studies. However, when presented with applied, user-like questions containing misconceptions, they struggled to highlight or dispute these. Interestingly, explicitly asking LLMs to correct unsupported assumptions considerably increased the likelihood that misconceptions were flagged, while prompting the models to rely on scientific evidence had little effect. Conclusion: While LLMs outperformed humans at identifying isolated neuromyth statements, they struggled to alert users to the same misconceptions when these were embedded in more applied, user-like questions, presumably due to LLMs' tendency toward sycophantic responses. This limitation suggests that, despite their potential, LLMs are not yet a reliable safeguard against the spread of neuromyths in educational settings. However, explicitly prompting LLMs to correct unsupported assumptions, an approach that may initially seem counterintuitive, effectively reduced sycophantic responses.
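To make the contrast between the prompting conditions described in the abstract concrete, the sketch below shows how such a comparison could be set up in Python. It is a minimal illustration only: the model name, the example neuromyth, the prompt wordings, and the query_llm helper are assumptions for illustration and do not reproduce the study's actual materials or models.

```python
# Minimal sketch of the prompting conditions described in the abstract.
# All prompt wordings, the example item, and the chosen model are
# illustrative assumptions, not the study's actual materials.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The same misconception once as an isolated statement and once embedded
# in an applied, user-like question (hypothetical wording).
ISOLATED_STATEMENT = (
    "Is the following statement correct or incorrect? "
    "'Individuals learn better when they receive information in their "
    "preferred learning style (e.g., visual, auditory, kinesthetic).'"
)
APPLIED_QUESTION = (
    "I want to support my visual learners better. "
    "Which visual materials should I prepare for tomorrow's lesson?"
)

# Additional instructions appended per condition (paraphrased).
CONDITIONS = {
    "baseline": "",
    "scientific_evidence": " Base your answer on scientific evidence.",
    "correct_assumptions": " Correct any unsupported assumptions in my question.",
}

def query_llm(prompt: str, model: str = "gpt-4o") -> str:
    """Send a single prompt and return the model's reply (illustrative helper)."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    items = [("isolated", ISOLATED_STATEMENT), ("applied", APPLIED_QUESTION)]
    for item_name, item in items:
        for cond_name, suffix in CONDITIONS.items():
            reply = query_llm(item + suffix)
            print(f"--- {item_name} / {cond_name} ---\n{reply}\n")
```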
URI: https://opendata.uni-halle.de//handle/1981185920/121343
http://dx.doi.org/10.25673/119385
Open access: Open access publication
License: Creative Commons Attribution - NonCommercial - NoDerivatives 4.0 International (CC BY-NC-ND 4.0)
Journal title: Trends in Neuroscience and Education
Publisher: Elsevier
Place of publication: Amsterdam [et al.]
Volume: 39
Original publication: 10.1016/j.tine.2025.100255
First page: 1
Last page: 11
Appears in collections: Open Access Publikationen der MLU

Files in this item:
File: 1-s2.0-S2211949325000092-main.pdf (830.12 kB, Adobe PDF)