Abstract:
In multilevel multiple-indicator multiple-cause (MIMIC) models, covariates can interact at the within level, at the between level, or across levels. This study examines the performance of multilevel MIMIC models in estimating and detecting the interaction effect of two covariates through a simulation and provides an empirical demonstration of modeling the interaction in multilevel MIMIC models. The design factors include the location of the interaction effect (i.e., between, within, or across levels), cluster number, cluster size, intraclass correlation (ICC) level, magnitude of the interaction effect, and cross-level measurement invariance status. Type I error, power, relative bias, and root mean square error of the interaction effect estimates are examined. The results showed that multilevel MIMIC models performed well in detecting the interaction effect at the within level or across levels. However, when the interaction effect was at the between level, the performance of multilevel MIMIC models depended on the magnitude of the interaction effect, the ICC, and the sample size, especially the cluster number. Overall, cross-level measurement noninvariance did not have a notable impact on the estimation of the interaction in the structural part of multilevel MIMIC models when factor loadings were allowed to differ across levels.
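For reference, a generic two-level MIMIC structural model with interaction terms can be written as follows (a schematic formulation in illustrative notation; the exact population models used in the study may differ):

\[
\eta_{W,ij} = \gamma_1 x_{1ij} + \gamma_2 x_{2ij} + \gamma_3\, x_{1ij} x_{2ij} + \zeta_{W,ij}, \qquad
\eta_{B,j} = \gamma_4 w_{1j} + \gamma_5 w_{2j} + \gamma_6\, w_{1j} w_{2j} + \zeta_{B,j},
\]

where the x's are within-level covariates, the w's are between-level covariates, and a cross-level interaction is formed by a product such as \(x_{1ij} w_{1j}\); the measurement part decomposes each indicator as \(y_{ij} = \nu + \Lambda_W \eta_{W,ij} + \Lambda_B \eta_{B,j} + \varepsilon_{ij}\).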
Abstract:
This study examined the impact of omitting a covariate interaction effect on parameter estimates in multilevel multiple-indicator multiple-cause (MIMIC) models, as well as the sensitivity of fit indices to model misspecification when the between-level, within-level, or cross-level interaction effect was left out of the model. The parameter estimates produced by the correct and the misspecified models were compared under varying conditions of cluster number, cluster size, intraclass correlation, and magnitude of the interaction effect in the population model. Results showed that the two main effects were overestimated by approximately half of the size of the omitted interaction effect, and the between-level factor mean was underestimated. None of the comparative fit index, the Tucker-Lewis index, the root mean square error of approximation, and the standardized root mean square residual was sensitive to the omission of the interaction effect. The sensitivity of information criteria varied depending mainly on the magnitude of the omitted interaction, as well as its location (i.e., at the between level, within level, or cross level). Implications and recommendations based on the findings are discussed.
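The roughly half-sized inflation of the main effects can be reproduced in a single-level analogue (a minimal numpy sketch of omitted-variable bias, not the multilevel MIMIC model itself; all names and values are illustrative):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    x1, x2 = rng.binomial(1, 0.5, size=(2, n))  # two independent binary covariates
    g1, g2, g3 = 0.3, 0.3, 0.4                  # main effects and interaction
    y = g1 * x1 + g2 * x2 + g3 * x1 * x2 + rng.normal(0.0, 1.0, n)

    # Misspecified model: the x1*x2 interaction term is omitted
    X = np.column_stack([np.ones(n), x1, x2])
    intercept, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]
    print(intercept, b1, b2)   # approx. -0.10, 0.50, 0.50

With Bernoulli(0.5) covariates, each slope absorbs half of the omitted interaction (0.3 + 0.4/2 = 0.5) and the intercept is pushed downward, mirroring the overestimated main effects and underestimated between-level factor mean reported above; the exact split depends on the covariate distribution.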
Abstract:
Studies that use the same measure to compare groups located at different levels of multilevel data (namely, cross-level groups) are not unusual; examples include student and teacher agreement in education and congruence between patient and physician perceptions in health research. Although establishing measurement invariance (MI) between these groups is important, testing MI is methodologically challenging because the compared groups are at different levels, with one group nested within the other. We propose a multilevel confirmatory factor analysis (CFA) model that allows MI testing between cross-level groups at the between level and demonstrate MI testing between students and teachers using the Promoting Social Interaction scale. Along with the demonstration, some methodological issues in implementing the proposed model (e.g., cluster invariance and reliability) and in evaluating the model fit of multilevel CFA (e.g., Delta CFI and level-specific fit indices), as well as alternative approaches to the proposed model, are discussed.
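One way to formalize the proposed model (a schematic sketch in our own notation, not necessarily the authors' exact parameterization): student reports are decomposed into within- and between-level parts, while teacher reports exist only at the between level,

\[
y^{S}_{ij} = \nu^{S} + \Lambda_W \eta^{S}_{W,ij} + \Lambda^{S}_{B} \eta^{S}_{B,j} + \varepsilon^{S}_{ij}, \qquad
y^{T}_{j} = \nu^{T} + \Lambda^{T}_{B} \eta^{T}_{B,j} + \varepsilon^{T}_{j},
\]

so that cross-level MI amounts to between-level equality constraints such as \(\Lambda^{S}_{B} = \Lambda^{T}_{B}\) (metric invariance) and \(\nu^{S} = \nu^{T}\) (scalar invariance).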
Abstract:
Various tests to check the homogeneity of variance assumption have been proposed in the literature, yet there is no consensus as to their robustness when the assumption of normality does not hold. This simulation study evaluated the performance of 14 tests for the homogeneity of variance assumption in one-way ANOVA models in terms of Type I error control and statistical power. Seven factors were manipulated: number of groups, average number of observations per group, pattern of sample sizes in groups, pattern of population variances, maximum variance ratio, population distribution shape, and nominal alpha level for the test of variances. Overall, the Ramsey conditional, O'Brien, Brown-Forsythe, Bootstrap Brown-Forsythe, and Levene with squared deviations tests maintained adequate Type I error control, performing better than the others across all the conditions. The power for each of these five tests was acceptable and the power differences were subtle. Guidelines for selecting a valid test for assessing the tenability of this critical assumption are provided based on average cell size.
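Two of the better-performing tests are directly available in SciPy (a minimal sketch with simulated groups; note that scipy.stats.levene uses absolute deviations, so the squared-deviation Levene variant examined in this study is not the SciPy default):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    groups = [rng.normal(0.0, s, size=30) for s in (1.0, 1.0, 2.0)]  # unequal variances

    # Brown-Forsythe: Levene's statistic computed on deviations from group medians
    w_bf, p_bf = stats.levene(*groups, center='median')
    # Classic Levene: deviations from group means
    w_lev, p_lev = stats.levene(*groups, center='mean')
    print(f"Brown-Forsythe: W = {w_bf:.2f}, p = {p_bf:.4f}")
    print(f"Levene:         W = {w_lev:.2f}, p = {p_lev:.4f}")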
Abstract:
With the increasing use of international survey data, especially in cross-cultural and multinational studies, establishing measurement invariance (MI) across a large number of groups in a study is essential. Testing MI over many groups is methodologically challenging, however. We identified 5 methods for MI testing across many groups (multiple-group confirmatory factor analysis, multilevel confirmatory factor analysis, multilevel factor mixture modeling, Bayesian approximate MI testing, and alignment optimization) and explicated the similarities and differences of these approaches in terms of their conceptual models and statistical procedures. A Monte Carlo study was conducted to investigate the efficacy of the 5 methods in detecting measurement noninvariance across many groups using various fit criteria. Generally, the 5 methods showed reasonable performance in identifying the level of invariance if an appropriate fit criterion was used (e.g., the Bayesian information criterion with multilevel factor mixture modeling). Finally, general guidelines for selecting an appropriate method are provided.
Abstract:
Considering that group comparisons are common in social science, we examined two latent group mean testing methods for cases in which the groups of interest are at either the between or the within level of multilevel data: multiple-group multilevel confirmatory factor analysis (MG ML CFA) and multilevel multiple-indicator multiple-cause modeling (ML MIMIC). The performance of these methods was investigated through three Monte Carlo studies. In Studies 1 and 2, either factor variances or residual variances were manipulated to be heterogeneous between groups. In Study 3, which focused on within-level multiple-group analysis, six different model specifications were considered depending on how the intraclass group correlation (i.e., the correlation between random effect factors for groups within a cluster) was modeled. The simulation results generally supported the adequacy of MG ML CFA and ML MIMIC for multiple-group analysis with multilevel data. The two methods did not show any notable difference in latent group mean testing across the three studies. Finally, a demonstration with real data and guidelines for selecting an appropriate approach to multilevel multiple-group analysis are provided.
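For intuition, the latent group mean test in ML MIMIC reduces to a single structural coefficient (a schematic sketch in illustrative notation): with a between-level grouping dummy \(z_j\),

\[
\eta_{B,j} = \gamma z_j + \zeta_{B,j},
\]

where \(\gamma\) is the latent mean difference and testing \(\gamma = 0\) tests the group means; in MG ML CFA the same quantity appears as the estimated factor mean of one group, with the reference group's mean fixed at zero for identification.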
Abstract:
Wording effects associated with positively and negatively worded items have been found in many scales. Such effects may threaten construct validity and introduce systematic bias in the interpretation of results. A variety of models have been applied to address wording effects, such as the correlated uniqueness model and the correlated traits and correlated methods model. This study presents the multilevel bifactor approach to handling wording effects of mixed-format scales used in a multilevel context. The Students Confident in Mathematics scale is used to illustrate this approach. Results from comparing a series of models showed that positive and negative wording effects were present at both the within and the between levels. When the wording effects were ignored, the within-level predictive validity of the Students Confident in Mathematics scale was close to that under the multilevel bifactor model. However, at the between level, a lower validity coefficient was observed when ignoring the wording effects. Implications for applied researchers are discussed.
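The multilevel bifactor measurement model can be sketched as follows (illustrative notation; method factors are typically specified orthogonal to the general factor): each item p loads on the general construct and on the method factor for its wording, at both levels,

\[
y_{pij} = \nu_p + \lambda^{G}_{Wp} G_{W,ij} + \lambda^{M}_{Wp} M_{W,ij} + \lambda^{G}_{Bp} G_{B,j} + \lambda^{M}_{Bp} M_{B,j} + \varepsilon_{pij},
\]

where \(M\) denotes the positive- or negative-wording method factor corresponding to item p's format.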
Abstract:
This simulation study examines the efficacy of multilevel factor mixture modeling (ML FMM) for measurement invariance testing across unobserved groups when the groups are at the between level of multilevel data. To this end, latent classes are generated with class-specific item parameters (i.e., factor loadings and intercepts) across the between-level classes. The efficacy of ML FMM is evaluated in terms of class enumeration, class assignment, and the detection of noninvariance. Various classification criteria, such as Akaike's information criterion, the Bayesian information criterion, and the bootstrap likelihood ratio test, are examined for the correct enumeration of between-level latent classes. For the detection of measurement noninvariance, free and constrained baseline approaches are compared with respect to true positive and false positive rates. This study provides evidence for the adequacy of ML FMM; however, its performance depends heavily on simulation factors such as the classification criterion, sample size, and the magnitude of noninvariance. Practical guidelines for applied researchers are provided.
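The information criteria used for class enumeration are the usual penalized log-likelihoods,

\[
\mathrm{AIC} = -2 \ln L + 2p, \qquad \mathrm{BIC} = -2 \ln L + p \ln N,
\]

where p is the number of free parameters and N the sample size; the number of between-level classes is chosen to minimize the criterion, whereas the bootstrap likelihood ratio test compares a K-class model against a (K-1)-class model.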
Abstract:
We provide reporting guidelines for multilevel factor analysis (MFA) and use these guidelines to systematically review 72 MFA applications in journals across a range of disciplines (e.g., education, health/nursing, management, and psychology) published between 1994 and 2014. Results are organized in terms of the (a) characteristics of the MFA application (e.g., construct measured), (b) purpose (e.g., measurement validation), (c) data source (e.g., number of cases at Level 1 and Level 2), (d) statistical approach (e.g., maximum likelihood), and (e) results reported (e.g., intraclass correlations for indicators and latent variables, standardized factor loadings, fit indices). Results from this review have implications for applied researchers interested in expanding their approaches to psychometric analyses and construct validation within a multilevel framework and for methodologists using Monte Carlo methods to explore technical and methodological issues grounded in realistic research design conditions.
Abstract:
We propose three-step multilevel factor mixture modeling (ML FMM) to test measurement invariance (MI) across many groups and, furthermore, to model predictors of latent class membership that may induce measurement noninvariance. This Monte Carlo simulation study found that information criteria such as the Bayesian information criterion tended to select an overly complex model when the sample size was very large. Thus, the adequacy of three-step ML FMM in correctly detecting MI was demonstrated with an empirically derived information criterion for large data. However, the number of latent classes was overestimated when the intraclass correlation was large. For the test of covariate effects, Type I error was well controlled and power was generally adequate when a correct model was identified at Step 1. Using background variables selected from the Trends in International Mathematics and Science Study (TIMSS) 2011, the application of three-step ML FMM to a cross-national MI test is demonstrated.
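In the third step, the covariate effects tested here take the form of a multinomial logistic model for class membership (a schematic form; standard three-step estimation additionally corrects for the classification error introduced in Step 2):

\[
P(C_j = c \mid z_j) = \frac{\exp(\alpha_c + \beta_c z_j)}{\sum_{k=1}^{K} \exp(\alpha_k + \beta_k z_j)},
\]

so the Type I error and power results above refer to tests of \(\beta_c = 0\).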