Graduation Year

2022

Document Type

Dissertation

Degree

Ph.D.

Degree Name

Doctor of Philosophy (Ph.D.)

Degree Granting Department

Chemistry

Major Professor

Jeffrey R. Raker, Ph.D.

Committee Member

James W. Leahy, Ph.D.

Committee Member

Scott E. Lewis, Ph.D.

Committee Member

Eun Sook Kim, Ph.D.

Keywords

Faculty beliefs, Lewis acid–base, Machine learning, Multilevel modeling, Nucleophiles

Abstract

The work in this dissertation is presented in two parts. The first part (Chapters 3 and 4) details work relating to students’ explanations of reaction mechanisms in organic chemistry. The second part (Chapters 5 and 6) details work relating to evaluating the uptake of research-based instructional practices in introductory chemistry, mathematics, and physics courses.

To evaluate learning of organic chemistry reactions, instructors must ask students to construct written explanations of reaction mechanisms. Written assessments should focus on what is happening and why it is happening to promote deeper student understanding. However, for instructors to gain insight into students’ understanding, the time and effort required to evaluate the explanations is prohibitive. Further, such evaluation is often not standardized or grounded in the research literature. Lexical analysis and machine learning algorithms can be used to score students’ written responses to open-ended constructed-response items to alleviate grading burdens. Specifically, lexical analysis and machine learning techniques can be used to produce predictive models that aid in evaluating the quality of students’ explanations. Rubrics are an additional tool that can help students and instructors formatively assess written explanations. In this work, a generalized predictive model using machine learning techniques was developed to evaluate students’ understanding of acid–base reactions and properties of acids and bases through the correct use of the Lewis acid–base model (Chapter 3). Research on students’ understanding of nucleophiles grounded the development of a generalized rubric to evaluate the sophistication of students’ written explanations of nucleophiles (Chapter 4).
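The general pipeline described above, lexical features extracted from written responses feeding a predictive model, can be illustrated with a minimal sketch. The example responses, rubric labels, and the naive Bayes classifier below are illustrative assumptions for demonstration only; they are not the dissertation's data or models, which are not reproduced here.

```python
# Sketch: bag-of-words lexical features + a naive Bayes classifier that
# predicts a rubric category for short written explanations.
# All training responses and labels below are hypothetical.
import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().replace(",", " ").split()

def train(examples):
    """examples: list of (response_text, rubric_label) pairs."""
    word_counts = defaultdict(Counter)   # label -> word frequencies
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(tokenize(text))
    vocab = {w for counts in word_counts.values() for w in counts}
    return word_counts, label_counts, vocab

def predict(model, text):
    word_counts, label_counts, vocab = model
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        # log prior + Laplace-smoothed log likelihood of each token
        score = math.log(label_counts[label] / total)
        n = sum(word_counts[label].values())
        for w in tokenize(text):
            score += math.log((word_counts[label][w] + 1) / (n + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Hypothetical responses labeled by whether they invoke the Lewis model.
examples = [
    ("the lone pair on nitrogen donates electrons to the electrophile", "lewis"),
    ("electron pair donation from the base to the lewis acid", "lewis"),
    ("the acid gives a proton to the base", "not_lewis"),
    ("a proton transfers from the acid", "not_lewis"),
]
model = train(examples)
print(predict(model, "the nitrogen donates its electron pair"))  # lewis
```

In practice, such models are trained on large sets of expert-coded responses and validated against human raters before being used to give feedback at scale.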

Evaluating instructional practices in chemistry, mathematics, and physics necessitates multidisciplinary, large-scale studies of the factors that influence teaching pedagogies. Data from a multi-institution, large-scale survey of postsecondary introductory-level chemistry, mathematics, and physics instructors in the United States were used to model two outcomes: percent lecturing (Chapter 5) and stage of research-based instructional strategy adoption (Chapter 6). These outcomes were modeled using multilevel modeling with contextual, personal, and teacher-thinking factors as the explanatory variables. The multilevel models produced explanatory accounts of instructional practices, providing change agents with productive avenues for improving instructors’ pedagogies.
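Multilevel modeling is used here because instructors are nested within departments and institutions, so their outcomes are not independent. A standard first check is the intraclass correlation (ICC), the share of outcome variance lying between groups. The sketch below computes a one-way ANOVA ICC on hypothetical percent-lecturing data; the scores and department groupings are invented for illustration and are not the dissertation's survey data.

```python
# Sketch: one-way ANOVA intraclass correlation, ICC(1), for balanced
# groups. A large ICC indicates that group membership (e.g., department)
# explains much of the outcome variance, motivating a multilevel model.
from statistics import mean

def icc_one_way(groups):
    """ICC(1) for k groups of equal size n."""
    k = len(groups)
    n = len(groups[0])
    grand = mean(x for g in groups for x in g)
    # between-group and within-group mean squares
    msb = n * sum((mean(g) - grand) ** 2 for g in groups) / (k - 1)
    msw = sum((x - mean(g)) ** 2 for g in groups for x in g) / (k * (n - 1))
    return (msb - msw) / (msb + (n - 1) * msw)

# Hypothetical percent-lecturing reports from instructors in three departments.
departments = [
    [80, 85, 90],   # department A
    [40, 45, 50],   # department B
    [60, 65, 70],   # department C
]
print(round(icc_one_way(departments), 3))  # 0.94
```

When the ICC is non-negligible, a multilevel model with group-level random effects (and contextual, personal, and teacher-thinking predictors at the appropriate levels) gives correct standard errors where a single-level regression would understate them.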
