Please use this identifier to cite or link to this item:
http://dx.doi.org/10.25673/119385
Title: Large language models outperform humans in identifying neuromyths but show sycophantic behavior in applied contexts
Author(s): Richter, Eileen; Spitzer, Markus Wolfgang Hermann; Morgan, Annabelle; Frede, Luisa; Weidlich, Joshua; Möller, Korbinian
Issue Date: 2025
Type: Article
Language: English
Abstract: Background: Neuromyths are widespread among educators, raising concerns about misconceptions regarding the (neural) principles underlying learning in the educator population. With the increasing use of large language models (LLMs) in education, educators increasingly rely on these models for lesson planning and professional development. Therefore, if LLMs correctly identify neuromyths, they may help dispel related misconceptions. Method: We evaluated whether LLMs can correctly identify neuromyths and whether they alert educators to neuromyths in applied contexts, that is, when users ask questions that contain related misconceptions. Additionally, we examined whether explicitly prompting LLMs to base their answers on scientific evidence, or to correct unsupported assumptions, would reduce errors in identifying neuromyths. Results: LLMs outperformed humans in identifying neuromyth statements as used in previous studies. However, when presented with applied, user-like questions containing misconceptions, they struggled to highlight or dispute them. Interestingly, explicitly asking LLMs to correct unsupported assumptions considerably increased the likelihood that misconceptions were flagged, while prompting the models to rely on scientific evidence had little effect. Conclusion: While LLMs outperformed humans at identifying isolated neuromyth statements, they struggled to alert users to the same misconceptions when these were embedded in more applied, user-like questions, presumably due to LLMs' tendency toward sycophantic responses. This limitation suggests that, despite their potential, LLMs are not yet a reliable safeguard against the spread of neuromyths in educational settings. However, explicitly prompting LLMs to correct unsupported assumptions, an approach that may initially seem counterintuitive, effectively reduced sycophantic responses.
URI: https://opendata.uni-halle.de//handle/1981185920/121343
Journal Title: Trends in Neuroscience and Education
Publisher: Elsevier
Publisher Place: Amsterdam [et al.]
Volume: 39
Original Publication: 10.1016/j.tine.2025.100255
Page Start: 1
Page End: 11
Appears in Collections: Open Access Publikationen der MLU
Files in This Item:
| File | Size | Format | |
|---|---|---|---|
| 1-s2.0-S2211949325000092-main.pdf | 830.12 kB | Adobe PDF | View/Open |