Graduation Year

2020

Document Type

Dissertation

Degree

Ph.D.

Degree Name

Doctor of Philosophy (Ph.D.)

Degree Granting Department

Educational Measurement and Research

Major Professor

John Ferron, Ph.D.

Committee Member

Robert Dedrick, Ph.D.

Committee Member

Yi-hsin Chen, Ph.D.

Committee Member

Tony Tan, Ed.D.

Keywords

DSEM, number of participants, time points, treatment effect

Abstract

Dynamic structural equation modeling (DSEM) has been proposed as a framework for analyzing intensive longitudinal data (ILD). This dissertation provided a tutorial on DSEM model specification in the context of intervention designs. Specifically, I illustrated, using equations, verbal descriptions, and figures, three two-level DSEM models based on three common intervention designs: a single-arm trial with repeated measurements before and during the intervention, a randomized controlled trial (RCT) with repeated measurements during the intervention, and an RCT with repeated measurements before and during the intervention. Mplus syntax and analysis results were also provided for each model, along with interpretations of the parameter estimates and the inferences they support. In addition, potential extensions were discussed, such as treatment effects on autoregressive relationships and residual terms, covariates at different levels, and random residual variances.
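As a rough illustration of what a two-level DSEM for the single-arm design might look like (a generic sketch, not the dissertation's exact specification; the phase dummy \(X_{it}\) and all symbols here are assumptions), the within- and between-level equations could be written as:

```latex
% Within level: the observed score is decomposed into a person-specific part
% and a within-person deviation; X_{it} = 0 before and 1 during intervention.
y_{it} = \mu_i + \beta_i X_{it} + y^{(w)}_{it}
% Within-person deviations follow a first-order autoregressive process.
y^{(w)}_{it} = \phi_i \, y^{(w)}_{i,t-1} + \varepsilon_{it},
\qquad \varepsilon_{it} \sim N(0, \sigma^2)
% Between level: random intercepts, treatment effects, and AR coefficients.
\mu_i   = \gamma_0 + u_{0i}, \qquad
\beta_i = \gamma_1 + u_{1i}, \qquad
\phi_i  = \gamma_2 + u_{2i}
```

In a sketch like this, \(\gamma_1\) would play the role of the average treatment effect whose estimation and power are at issue in the simulation study below.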

In the second part of this dissertation, a Monte Carlo simulation study was conducted to examine the sample size requirements (number of individuals, N, and number of measurement occasions per individual, T) for the three general DSEM models for intervention research. Sample size requirements were investigated for treatment effects ranging from zero to large effect sizes (i.e., 0, 0.2, 0.5, and 0.8) for each model. Overall, the relative bias, mean square error (MSE), ratio of the average standard error to the standard deviation of the estimates across replications (SE/SD), and 95% credible interval coverage performed well across conditions once N reached 50 in all three models. However, the power to detect the treatment effect depended strongly on its magnitude. With small effects, none of the considered sample sizes provided adequate power (power < 0.8) in any of the three models. When the effect size was medium, at least N = 50 with T = 25 measurement occasions per participant was required for Model 1 (single-arm trial with pre- and posttests); in general, an unbalanced design required a larger T than a balanced design. For Model 2 (RCT with posttests), at least 200 participants were required, with T = 10 for balanced designs and T = 25 for unbalanced designs. Similarly, 200 participants were required for Model 3 (RCT with pre- and posttests), with the unbalanced design (T ≥ 50) requiring more measurement occasions than the balanced design (T ≥ 50). When the effect size was large, at least N = 25 with T = 25 was required for Model 1; N = 75 with T = 25 and T = 50 was needed for balanced and unbalanced designs, respectively, for Model 2; and N = 75 with T = 50 and T = 75 was needed for balanced and unbalanced designs, respectively, for Model 3. Consistent with previous findings, increasing the higher-level sample size (N) was more beneficial to the quality of parameter estimation than increasing the lower-level sample size (T). The results indicated that when N is not large enough, increasing T does not necessarily improve estimation quality; likewise, once T is already large, adding more measurement occasions yields little further benefit.
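The evaluation criteria named above (relative bias, MSE, SE/SD, coverage, and power) can be computed from replication output in a few lines. The sketch below is illustrative only: the replication "results" are simulated from assumed distributions, not taken from the dissertation, and the true effect and SE values are arbitrary.

```python
# Hypothetical sketch of Monte Carlo evaluation criteria: relative bias,
# MSE, average-SE/SD ratio, 95% interval coverage, and power.
# Replication results are simulated here purely for illustration.
import random
import statistics

random.seed(42)
TRUE_EFFECT = 0.5   # assumed true treatment effect (medium effect size)
N_REPS = 500        # number of Monte Carlo replications

# Fake per-replication output: point estimate, SE, and 95% interval.
estimates, ses, intervals = [], [], []
for _ in range(N_REPS):
    est = random.gauss(TRUE_EFFECT, 0.1)
    se = abs(random.gauss(0.1, 0.01))
    estimates.append(est)
    ses.append(se)
    intervals.append((est - 1.96 * se, est + 1.96 * se))

# Relative bias: deviation of the mean estimate from the true value.
rel_bias = (statistics.mean(estimates) - TRUE_EFFECT) / TRUE_EFFECT
# MSE: mean squared deviation of estimates from the true value.
mse = statistics.mean((e - TRUE_EFFECT) ** 2 for e in estimates)
# SE/SD: average reported SE relative to the empirical SD of estimates.
se_sd = statistics.mean(ses) / statistics.stdev(estimates)
# Coverage: proportion of 95% intervals containing the true effect.
coverage = statistics.mean(lo <= TRUE_EFFECT <= hi for lo, hi in intervals)
# Power: proportion of intervals excluding zero.
power = statistics.mean(not (lo <= 0.0 <= hi) for lo, hi in intervals)

print(f"relative bias = {rel_bias:.3f}, MSE = {mse:.4f}")
print(f"SE/SD = {se_sd:.2f}, coverage = {coverage:.3f}, power = {power:.3f}")
```

In an actual study these summaries would be computed per condition (crossing N, T, design balance, and effect size), with coverage near .95 and SE/SD near 1 indicating well-calibrated estimation.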

Included in

Education Commons
