Keywords

rubric, grading, assessment, quantitative literacy/reasoning, pedagogy, teaching

Abstract

Institutional assessments of quantitative literacy/reasoning (QL/QR) have been extensively tested and reported in the literature. While appropriate for measuring student learning at the programmatic or institutional level, such instruments were not designed for classroom grading. After modifying a widely accepted institutional rubric designed to assess QR in written arguments, the current mixed-methods study tested the reliability of two QR analytic grading rubrics for written arguments and explored students’ reactions to the grading tools. Undergraduate students enrolled in a business course (N = 59) participated. A total of 415 QR artifacts from 40 students were assessed; an additional 19 students provided feedback about the grading tools. A new QR writing rubric included three main criteria (numerical evidence, conclusions, and writing), while a second rubric added a fourth criterion for assignments with data visualization. After two coders rated students’ QR assignments, data analysis showed that both new QR rubrics had good reliability. Cohen’s kappa indicated substantial agreement between the raters on all rubric criteria (κ = 0.69 to 0.80). Both the QR writing (α = 0.861) and data visualization (α = 0.859) grading rubrics also had good internal consistency. When asked for feedback about the new grading tools, 89% of students shared positive comments, reporting that the rubrics clarified assignment expectations, improved their performance, and facilitated the writing process. This paper proposes slight modifications to the phrasing of the new rubrics’ writing criterion, discusses best practices for the use of rubrics in QR classrooms, and recommends future research.
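
For readers who want to compute reliability statistics of the kind reported above, the sketch below shows one way to obtain Cohen’s kappa and Cronbach’s alpha in Python, assuming NumPy and scikit-learn are available. The ratings and score matrix are hypothetical placeholders for illustration, not the study’s data.

    # Illustrative only: inter-rater agreement (Cohen's kappa) and internal
    # consistency (Cronbach's alpha). All scores below are hypothetical.
    import numpy as np
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical ratings by two coders on one rubric criterion (0-4 scale).
    rater_a = np.array([3, 4, 2, 4, 3, 1, 4, 2, 3, 4])
    rater_b = np.array([3, 4, 2, 3, 3, 1, 4, 2, 3, 4])

    # Cohen's kappa: agreement between the two raters, corrected for chance.
    print(f"Cohen's kappa: {cohen_kappa_score(rater_a, rater_b):.2f}")

    def cronbach_alpha(item_scores: np.ndarray) -> float:
        """Cronbach's alpha for an (n_students, n_criteria) score matrix."""
        k = item_scores.shape[1]
        item_variances = item_scores.var(axis=0, ddof=1)
        total_variance = item_scores.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    # Hypothetical scores on three rubric criteria (e.g., numerical evidence,
    # conclusions, writing) for five students.
    scores = np.array([
        [3, 3, 4],
        [4, 4, 4],
        [2, 3, 2],
        [4, 3, 4],
        [1, 2, 1],
    ])
    print(f"Cronbach's alpha: {cronbach_alpha(scores):.3f}")

Cohen’s kappa corrects raw percent agreement for agreement expected by chance, while Cronbach’s alpha increases as scores on the rubric criteria covary across students, which is why it is read as internal consistency.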

DOI

https://doi.org/10.5038/1936-4660.16.1.1431

Creative Commons License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
