Keywords
self-assessment, self-assessment classification scale, Dunning-Kruger Effect, knowledge surveys, graphs, numeracy, random number simulation, noise, signal
Abstract
Despite nearly two decades of research, researchers have not resolved whether people generally perceive their skills accurately or inaccurately. In this paper, we trace this lack of resolution to numeracy, specifically to the frequently overlooked complications that arise from the noisy data produced by the paired measures that researchers employ to determine self-assessment accuracy. To illustrate the complications and ways to resolve them, we employ a large dataset (N = 1154) obtained from paired measures of documented reliability to study self-assessed proficiency in science literacy. We collected demographic information that allowed both criterion-referenced and normative-based analyses of self-assessment data. We used these analyses to propose a quantitatively based classification scale and show how its use informs the nature of self-assessment. Much of the current consensus about people's inability to self-assess accurately comes from interpreting normative data presented in the Kruger-Dunning type graphical format or closely related (y - x) vs. (x) graphical conventions. Our data show that people's self-assessments of competence, in general, reflect a genuine competence that they can demonstrate. That finding contradicts the current consensus about the nature of self-assessment. Our results further confirm that experts are more proficient in self-assessing their abilities than novices and that women, in general, self-assess more accurately than men. The validity of interpretations of data depends strongly upon how carefully the researchers consider the numeracy that underlies graphical presentations and conclusions. Our results indicate that carefully measured self-assessments provide valid, measurable, and valuable information about proficiency.
DOI
http://dx.doi.org/10.5038/1936-4660.10.1.4
Recommended Citation
Nuhfer, Edward, Steven Fleisher, Christopher Cogan, Karl Wirth, and Eric Gaze. "How Random Noise and a Graphical Convention Subverted Behavioral Scientists' Explanations of Self-Assessment Data: Numeracy Underlies Better Alternatives." Numeracy 10, Iss. 1 (2017): Article 4. DOI: http://dx.doi.org/10.5038/1936-4660.10.1.4
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Supplemental material: Explanations and examples
Numeracy 10(1), Nuhfer et al., Appendix B.xlsx (666 kB)
Dataset: xlsx file