Abstract
Orientation: The rigid application of conventional confirmatory factor analysis (CFA) techniques, the overreliance on global model fit indices and the dismissal of the chi-square statistic appear to have an adverse impact on research into psychological ownership measures.
Research purpose: The purpose of this study was to explicate the South African Psychological Ownership Questionnaire’s (SAPOS’s) CFA model fit using the Bayesian structural equation modelling (BSEM) technique.
Motivation for the study: The need to conduct this study derived from a renewed awareness of the incorrect use of the chi-square statistic and global fit indices of CFA in social sciences research.
Research approach/design and method: The SAPOS measurement model fit was explicated on two study samples consisting, respectively, of 712 and 254 respondents who worked in various organisations in South Africa. A Bayesian approach to CFA was used to evaluate if local model misspecifications were substantive and justified the rejection of the SAPOS model.
Main findings: The findings suggested that a rejection of the SAPOS measurement model based on the results of the chi-square statistic and global fit indices would be unrealistic and unfounded in terms of substantive test theory.
Practical/managerial implications: BSEM appeared to be a valuable diagnostic tool to pinpoint and evaluate local CFA model misspecifications and their effect on a measurement model.
Contribution/value-add: This study showed the importance of considering local misspecifications rather than relying solely on the chi-square statistic and global fit indices when evaluating model fit.
Keywords: psychological ownership; Bayesian structural equation modelling; confirmatory factor analysis; CFA model fit indices; CFA model misspecifications; small variance priors.
Introduction
Orientation
The motivation for conducting this study was the realisation that many latent variable measurement models of theoretical constructs published in social sciences journals might be flawed because of deficient model testing (Greiff & Heene, 2017; Hayduk, 2014; Hayduk, Cummings, Boadu, Pazderka-Robinson, & Boulianne, 2007; Ropovik, 2015). Of importance is that deficient model testing has a direct impact on the reproducibility and replicability crises in psychology (Ropovik, 2015). Ropovik (2015) reports that 80% of accepted models may be statistically flawed and that only 3% of researchers inspected the residual matrix for local misspecification. Greiff and Heene (2017) reported that in many studies (more than 60%) the chi-square and global goodness of fit (GoF) indices (e.g. comparative fit index [CFI], Tucker–Lewis index [TLI], root mean square error of approximation [RMSEA] and standardised root mean squared residual [SRMR]) are applied mindlessly without considering the effect of local misspecifications on model fit. Hayduk (2014, p. 1) further suggests that ‘many SEM-based theories and measurement scales will require reassessment if we were to clear the backlogged consequences of previous deficient model testing’. Although Hayduk’s (2014) statement may be considered radical, ignoring a statistically significant chi-square and accepting GoF index results in confirmatory factor analysis (CFA) without investigating local indicator misfit (e.g. item cross-loadings and correlated residuals) is contrary to the spirit of rigorous scientific investigation and can lead to questionable theoretical deductions (Greiff & Heene, 2017; Hayduk, 2014; Ropovik, 2015). Rules of thumb associated with GoF indices are at best preliminary interpretations of model fit (Marsh, Hau, & Wen, 2004). ‘Ultimately the interpretations derived from the local estimated parameters and their defences is [sic] a subjective undertaking that requires researchers to immerse themselves in the data’ (Marsh et al., 2004, p. 321). This much-needed meaningful engagement with the data, which goes beyond considering global fit indices and factor loadings, may be a challenge for many researchers, and I trust that this study will inspire researchers to place more emphasis on the analysis of local misspecifications.
In line with recent publications (Guay, Morin, Litalien, Valois, & Vallerand, 2014; Howard, Gagné, Morin, & Forest, 2018; Marsh et al., 2010; Sánchez-Oliva et al., 2017), I followed a ‘methodological-substantive synergy’ approach (Marsh & Hau, 2007, p. 152) in this study. This approach entails applying methodological developments and innovative statistical tools with enhanced precision to complex substantive issues that result in advancing theory. Instead of merely confirming or disconfirming approximate model fit using GoF indices, this approach allows for an ‘enhanced capability of digging up empirical evidence of problems in models and to look for eventual flaws of the postulated theory in order to improve it’ (Ropovik, 2015, p. 7). As Hayduk et al. (2007, p. 843) put it, our theoretical understandings of models are best enhanced by ‘diagnostic evidence accompanying a model’s failure to fit’.
The domain of psychological ownership (PO) provided the ideal platform for the current study as secondary data from a previously published research measurement model as well as an independent sample for replication purposes were available. Psychological ownership, a subdiscipline of positive psychology, has received much attention in scholarly literature on psychology and management over the past decade (Olckers & Van Zyl, 2017). Empirical research shows that PO relates positively to favourable individual outcomes (e.g. engagement and job satisfaction) and organisational outcomes (e.g. company performance). Consequently, PO is considered an important job resource (Olckers, 2013).
Psychological ownership is a unique concept that can be differentiated from other concepts in positive psychology (Pierce, Kostova, & Dirks, 2001), and it is grounded in the social psychology of material possessions (Belk, 1988; Dittmar, 1992). People who possess PO feel positive about tangible and intangible targets of perceived ownership. Feelings of PO are linked to individuals’ self-concept; therefore, PO elicits a sense of responsibility towards the target of ownership (e.g. space, object, job, project and entity) (Pierce et al., 2001). Psychological ownership can be defined as a self-derived state where individuals feel as though the targets (or objects or parts thereof) of ownership, such as their job or organisation, are theirs (‘that is mine’). The core of PO is the concept of ‘possessiveness’: an individual becomes psychologically tied to a target, and the target becomes an important part of the individual’s identity (‘I am what I have’) (Pierce et al., 2001). The feeling of ownership has important behavioural, emotional and psychological consequences: the individual needs, for example, to protect, care for and nurture the target. By implication, this requires the individual to invest time, resources and abilities in the target (the job, the organisation and its objectives, structures and processes), leading to an increase in productivity levels.
Dawkins, Tian, Newman and Martin (2017) and Avey, Avolio, Crossley and Luthans (2009) argue that although the delineation of the construct of PO has progressed substantially, debates about its theoretical foundation, measurement and factors that influence the construct have by no means been finalised and are ongoing in the literature. More specifically, the measurement models of PO may also be a product of deficient model testing and could inhibit theory development. Van Dyne and Pierce’s (2004) PO measure consists of a single scale of 4–7 items (depending on the version), and Avey et al.’s (2009) PO questionnaire (POQ) consists of 16 items and five subscales, with each scale having 3–4 items. Both these measures are often used in research and produce high GoF indices (CFI > 0.96), and Van Dyne and Pierce’s (2004) measure showed a non-significant chi-square statistic, implying an excellent model fit. According to Marsh et al. (2004), it is counter-productive to shorten measures of psychological constructs for the sole purpose of complying with Hu and Bentler’s (1999) GoF indices criteria (e.g. CFI = 0.95). Hu and Bentler’s GoF criteria for CFA are based on simulation studies involving 15 items and three correlated factors. Hu and Bentler (1999) acknowledge that the GoF indices’ cut-off values have limited generalisability and should be used with caution for lengthier measures, different conditions and sample sizes. Even the old cut-off values (e.g. CFI = 0.90) for CFA model fit indices are over-demanding for lengthy psychological measures (Marsh et al., 2004). A scale with only a few indicators (e.g. two to three) per factor poses the danger of representing ‘bloated specifics’ of a much broader construct, which would result in less generalisable findings because of sample fluctuations (Hsu, Skidmore, Li, & Thompson, 2014). This proved to be the case when the POQ was tested in three independent studies in South Africa and produced inconsistent findings (Olckers & Van Zyl, 2017). Sound factor analysis solutions require at least 4–6 items per factor so that the item pool truly reflects the dimension being measured (Hair, Black, Babin, & Anderson, 2010).
Given the likely construct validity limitations of Avey et al.’s (2009) POQ and Van Dyne and Pierce’s (2004) PO measure, the South African Psychological Ownership Questionnaire (SAPOS) was developed for use in organisations to measure employees’ PO. Development of the SAPOS followed the principles of Hsu et al. (2014) and Marsh, Lüdtke, Nagengast, Morin, and Von Davier (2013):
‘[M]ore is never too much’ when considering factor indicators (Marsh et al., 2013, p. 258) and ‘balancing the desire for a parsimonious model with a less parsimonious one in order to adequately reflect the reality the researcher purports to represent’. (Hsu et al., 2014, p. 150)
The SAPOS consists of 35 items and four subscales, with 5–16 items per scale. Olckers (2013), in developing the SAPOS, relied predominantly on GoF indices and significant target factor loadings to conclude that the measurement model represented the data reasonably well (CFI = 0.90, RMSEA = 0.05). However, the CFI fit index was below the ‘golden standard’ of good model fit (CFI ≥ 0.95), and the chi-square statistic was significant (χ2[554] = 951.772, p < 0.0001), indicating an unacceptable model fit. Olckers (2013) acknowledged that the chi-square statistic showed a poor model fit but argued that the statistic was too sensitive to sample size and therefore relied on the GoF indices to determine model fit. Unfortunately, the CFI, RMSEA and SRMR cut-off values can be notoriously inaccurate and inconsistent when analysing lengthier and more complex measures such as the SAPOS, as the cumulative effects of small cross-loadings and correlated residuals are inclined to adversely affect the model fit statistics (Heene, Hilbert, Harald Freudenthaler, & Bühner, 2012; Lai & Green, 2016; Perry, Nicholls, Clough, & Crust, 2015). No post hoc model adjustments (e.g. correlated residuals) were made in the CFA analyses to improve the model fit (Marsh et al., 2009). In her study, Olckers (2013) did not report investigating the parameter misspecifications at the local indicator level as signified by a statistically significant chi-square. Therefore, any deductions about the plausibility of the CFA measurement model based solely on GoF indices and the factor loadings may be considered preliminary and inconclusive (Marsh et al., 2004).
In summary, problematic issues in the assessment of the plausibility of measurement models in the social sciences, and more specifically PO, include rigidly applying CFA techniques (Asparouhov, Muthén, & Morin, 2015), over-relying on GoF indices (Greiff & Heene, 2017; Ropovik, 2015; Saris, Satorra, & Van der Veld, 2009), ignoring the chi-square statistic (Barrett, 2007; Hayduk, 2014; Saris et al., 2009), not considering the effect of inexact factor indicators from questionnaire data on model fit (Asparouhov et al., 2015) and not sufficiently examining parameter misspecifications at the local indicator level (Greiff & Heene, 2017; Hayduk, 2014; Ropovik, 2015).
Research objectives
This study sought to point out the challenges that come with the rigid application of CFA techniques, the problem of overly generalised CFA model fit indices and the impact of imperfect latent factor indicators on measurement models. More specifically, its aim was to gain new insights into the plausibility of a multidimensional SAPOS measurement model through more flexible and meaningful engagement with the data and the factoring in of substantive test theory principles. In this study, I went beyond evaluating global fit indices and target factor loadings to explicate CFA model misfit. I demonstrated the value of Bayesian structural equation modelling (BSEM) as a diagnostic tool for CFA model misfit challenges in the assessment of the lengthier and multidimensional SAPOS measurement model (Asparouhov et al., 2015).
Bayesian structural equation modelling allows for the analysis of the simultaneous and cumulative effects of small cross-loadings and correlated residuals in the CFA measurement model for the SAPOS, which could lead to a better understanding of the reasons for model misfit and help to evaluate whether misfit could be considered substantive or not. According to Asparouhov and Muthén (2017, p. 2):
BSEM can be used to parse out meaningful model misspecifications from small model misspecifications that can also be the cause of model rejections when such small misspecifications are in great number, or sample size is so large that even small misspecifications are enough to reject the model.
Research question
The research question for this study is whether item cross-loadings and correlated residuals for the SAPOS measurement model signify significant and substantive parameter misspecifications and whether these justify the rejection of the CFA model. The use of BSEM as a diagnostic tool for studying the significance and substantiveness of model misspecification at a local parameter level is demonstrated.
In the following sections, a critical review of CFA model misspecification and misfit is given and the role of BSEM in diagnosing CFA misfit is clarified. Brief overviews of BSEM as a small-sample factor analysis technique and of the SAPOS measurement model are also presented.
Literature review
Confirmatory factor analysis model misspecification and misfit
Simulation studies have shown that the GoF indices’ cut-off values suggested by Hu and Bentler (1999) cannot be generalised to different CFAs and hypothesised latent structures and that these values should be used with caution (Greiff & Heene, 2017; Perry et al., 2015; Ropovik, 2015). The sensitivity of the universal cut-off values of the chi-square test statistic and the GoF indices (e.g. RMSEA = 0.05, CFI = 0.95) in detecting model misspecifications, as well as their Type I error rates, depends on multiple factors, including model type, the size of the covariance matrix, violations of multivariate normality, sample size, factor-loading magnitude and reliability. For example, it has been shown that the behaviour of GoF indices is highly unpredictable in the presence of severe misspecifications and that the probability of correctly rejecting misspecified models systematically decreases with increasing sample size (Marsh et al., 2004) and decreasing indicator reliability (McNeish, An, & Hancock, 2018).
Contrary to the behaviour of GoF indices, the maximum likelihood (ML) ratio chi-square value has been shown to be overly sensitive in detecting minor misspecifications with increases in sample size, indicator reliability, communalities, deviations from multivariate normality, the size of the covariance matrix and model complexity (Heene, Hilbert, Draxler, Ziegler, & Bühner, 2011), and therefore the chi-square statistic is likely to be ignored by researchers. However, a significant ML chi-square could also point to large misspecifications, and ignoring this would be to erroneously accept an ill-informed theoretical model (Greiff & Heene, 2017; Ropovik, 2015). Irrespective of the ML chi-square’s limitations, it can be considered the only valid statistical test in structural equation modelling (SEM), as it tests the null hypothesis that there is no difference between the model-implied covariance matrix and the observed covariance matrix (Ropovik, 2015). Therefore, a significant ML chi-square should always lead to the investigation of local model parameter estimates, such as item cross-loadings and residual correlations. Misspecified correlated residuals in measurement models are of special concern in the social sciences; they are likely to occur because of similar item wording, logical item dependencies, overlapping of item content, omitted factors, over-factoring (two factors instead of one), nuisance factors and unstable factors (Asparouhov et al., 2015; Heene et al., 2012). Misspecified correlated residuals can falsify theoretical assumptions, affect reliability estimates and bias criterion-related studies (Heene et al., 2012). Other indicators of model misspecification may include convergence problems, unlikely values for estimated parameters, inflated standard errors, collinearity, non-significant or negative residual variances, standardised factor loadings exceeding the -1 to 1 interval and model under-identification because of near-zero item inter-correlations (Ropovik, 2015). Researchers must be cautioned against accepting CFA models solely based on GoF indices showing adequate fit without a careful inspection of the substantiveness of misspecifications signified by a statistically significant ML chi-square value (Greiff & Heene, 2017; Ropovik, 2015). Just one significant misspecification may be needed to distort the parameter estimates of the entire model without the GoF indices being sensitive to the distortion (Ropovik, 2015).
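Formally, the null hypothesis tested by the ML chi-square can be stated as follows (a standard formulation, added here for clarity rather than drawn from the sources cited above):

\[ H_0:\ \Sigma = \Sigma(\theta), \qquad T = (N-1)\,F_{\mathrm{ML}}(\hat{\theta}) \sim \chi^2_{df}, \qquad df = \tfrac{1}{2}p(p+1) - q, \]

where \(\Sigma\) is the population covariance matrix of the p observed indicators, \(\Sigma(\theta)\) is the covariance matrix implied by the model parameters, \(F_{\mathrm{ML}}\) is the ML fit function evaluated at the estimates \(\hat{\theta}\), N is the sample size and q is the number of free parameters.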
Bayesian structural equation modelling for inspecting confirmatory factor analysis model misspecifications
Bayesian structural equation modelling is a viable option for inspecting the substantiveness of misspecifications signified by a statistically significant ML chi-square value. It has rightfully been argued that the ML chi-square statistic used in CFA modelling of substantive theory applies unnecessarily strict criteria for model fit. Confirmatory factor analysis models that assume zero item cross-loadings and zero residual correlations in factor analysis are considered unrealistic when testing the theory underlying multidimensional behavioural measurement models (Asparouhov et al., 2015). Well-defined and useful factor structures do not necessarily contain pure items that only load on single factors (Marsh et al., 2009; Sass & Schmitt, 2010). Unidimensionality and pure factor indicators are a noble ideal in theory, but this ideal is rarely achieved with real data in the social sciences (Marsh et al., 2013). The BSEM approach is intended to reflect substantive test theory by replacing exact zeros on item cross-loadings and correlated residuals with approximate zeros through specifying small variance priors (Asparouhov et al., 2015; Asparouhov & Muthén, 2017). Freeing all these parameters simultaneously in a conventional CFA would lead to a non-identifiable model (Muthén & Asparouhov, 2012). Consequently, it is not possible to improve a CFA model fit without resorting to the strongly criticised data-driven approach of freeing parameters sequentially. In contrast, BSEM informs model modification when all parameters (item cross-loadings and correlated residuals) with small informative priors are freed simultaneously in a single step. With BSEM analyses and small variance priors it is possible to evaluate the substantiveness of parameter misspecifications and their effect on a measurement model. Misspecifications of around 0.20 for cross-loadings and correlated residuals can be considered noticeable and of some importance, and a value of around 0.30 is considered important in terms of substantive classical test theory (Muthén & Asparouhov, 2012). The BSEM technique allows for the identification of large, isolated correlated residuals that can be considered for inclusion in the CFA model to improve model fit, and for the identification of missing or additional factors (multiple substantive correlated residuals are visible) (Heene et al., 2012), over-factoring (the correlation between two factors is high [r > 0.90] with the inclusion of substantive correlated residuals for two or more indicators), unstable factors (with the inclusion of correlated residuals the factor disappears), item redundancy (repetitive or parallel item wording in factors showing substantive correlated residuals), nuisance factors and irrelevant noise because of the imperfect nature of factor indicators (multiple non-substantive correlated residuals) (Muthén & Asparouhov, 2012, pp. 7–9).
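To make the approach concrete, the following minimal Mplus-style sketch (a hypothetical two-factor, ten-item toy model, not the SAPOS itself) shows how exact-zero cross-loadings are replaced with approximate zeros through small variance priors:

    DATA:      FILE = toy.dat;             ! hypothetical data file
    VARIABLE:  NAMES = y1-y10;
    ANALYSIS:  ESTIMATOR = BAYES;  CHAINS = 4;
    MODEL:
      f1 BY y1-y5* y6-y10 (x1-x5);         ! target loadings freed; cross-loadings labelled
      f2 BY y6-y10* y1-y5 (x6-x10);
      f1-f2@1;                             ! factor variances fixed at 1 for identification
    MODEL PRIORS:
      x1-x10 ~ N(0, 0.01);                 ! 'approximate zero': about 95% of the prior mass
                                           ! lies within +-0.20 (i.e. 1.96 x sqrt(0.01))

In a conventional ML CFA the labelled cross-loadings would have to be fixed exactly at zero or freed one at a time; here all of them are estimated simultaneously while being shrunk towards zero by the prior.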
The BSEM technique has, however, been criticised by Stromeyer, Miller, Sriramachandramurthy and DeMartino (2015). Their view is conventional, and they have strong reservations about the principle of allowing minor cross-loadings and correlated residuals in measurement model testing. In reaction to this criticism, Asparouhov et al. (2015) have made a strong case, supported by simulations and case studies, that BSEM is well entrenched in the principles of classical test theory and provides the applied researcher with a unique and valuable technique for studying measurement models. Bayesian structural equation modelling is not solely intended to confirm or reject the CFA model, but should serve as a diagnostic tool to pinpoint and evaluate model misspecifications and generate ideas about modifications that can provide better CFA model fit (Asparouhov et al., 2015). Although reflective techniques for model misspecifications and modifications are plentiful in the frequentist statistical paradigm (Ropovik, 2015; Saris et al., 2009), and could even imitate BSEM (Rindskopf, 2012), BSEM’s strength lies in estimating all parameters simultaneously, with small variance priors, to evaluate the effect of the substantiveness of parameter misspecifications on a measurement model. In addition, the simulation methods used in BSEM allow for empirical percentiles (95% probability levels) of the posterior distributions of factor loadings and correlated residuals, whereas commonly implemented frequentist SEM techniques are not based on simulations (Rindskopf, 2012).
Bayesian structural equation modelling and sample size
According to Muthén and Asparouhov (2012, p. 323), ‘BSEM avoids small-sample-size inflation of ML chi-square statistics and the statistics’ sensitivity to rejecting models with an ignorable degree of misspecification’. Through specifying informative small variance priors, model testing is carried out using posterior predictive (PP) checking, which is less sensitive than the ML chi-square statistic to ignorable degrees of misspecification (Muthén & Asparouhov, 2012). With informative priors, the BSEM analysis is based on more information than is strictly provided in the data, which, in the case of smaller sample sizes, has a more pronounced impact on the accuracy of estimates and statistical power (McNeish, 2016). In contrast, non-informative or diffuse priors allow ‘the data to do the talking’, and BSEM and CFA provide similar results with large samples (e.g. N = 1000) (McNeish, 2016).
Moreover, using BSEM with diffuse priors can be highly problematic when applied to small samples, leading to biased parameter estimates and insufficient statistical power (McNeish, 2016). Information about the most suitable priors can be deduced from substantive theory and empirical studies that have been conducted on a measurement model (Muthén & Asparouhov, 2012). It is, however, important that informative priors that contribute to posterior parameter estimates should be accurate with small samples; if not, the resulting estimates could be biased. However, research has shown that with the Markov chain Monte Carlo (MCMC) estimation and the Gibbs sampler, which form part of the BSEM analysis intended for this study, priors with a fairly large variance could still produce accurate posterior parameter estimates for smaller samples (McNeish, 2016). Using simulation studies, Muthén and Asparouhov (2012) have shown that BSEM with non-informative priors has sufficient power to produce accurate factor loadings exceeding 0.30 with samples over n = 200.
The South African Psychological Ownership Questionnaire measurement model
The theory and the details of the constructs are described by Olckers (2013). However, for the purposes of this article and the convenience of the readers, the process followed in developing the SAPOS and the model’s key constructs are described briefly.
The first version of the SAPOS (Olckers, 2013) comprises 69 items covering seven constructs: self-efficacy (a person’s beliefs about his or her own ability to accomplish tasks, the person’s control over outcomes and his or her sense of ownership); self-identity (a personal cognitive connection between an individual and an object or target, reflecting a perception of oneness with the target of possession); autonomy (self-regulated influence and control over objects, possession and ownership); responsibility (feeling of responsibility for the target of ownership and the implicit right to control, protect and maintain it); accountability (perceived right to hold others and oneself accountable for influences on one’s target of ownership); belongingness (extent to which individuals feel attached to the place of work or feel ‘at home’); and territoriality (extent to which individuals are preoccupied with a territory or target as their own and do not want to readily share it with others). The measure was content validated by nine subject matter experts in the field of positive psychology and measurement. Factor analysis was conducted on a diverse sample of 713 highly skilled and skilled employees from the private and public sectors. The sample was randomly split in two for the exploratory factor analysis (EFA) and the CFA. The EFA, using half of the sample (n = 354), suggested a four-factor model (with 35 items) consisting of identity (with self-identity and belongingness items), responsibility (with self-efficacy and accountability items), autonomy and territoriality. A significant number of items that cross-loaded or did not load significantly onto the target variable were excluded from the final model in the EFA. The CFA model using the second half of the sample (n = 356) is presented in Figure 1. According to Olckers (2013), the four-factor model captures the essence of PO as reflected in the literature. Unfortunately, the big increase in model parsimony when going from EFA to CFA often leads to an ill-fitting CFA model. Bayesian structural equation modelling is confirmatory in nature and has the flexibility to avoid this big increase in parsimony (Muthén & Asparouhov, 2012, p. 332).
FIGURE 1: The South African Psychological Ownership Questionnaire measurement model: Standardised parameter estimates for latent construct correlates, indicator (Q) and error (E) values.
The present study employed BSEM to explicate the SAPOS CFA measurement model fit using the original total sample from Olckers’ study and to test and explicate the CFA model fit on a new independent sample. More specifically, this study used the BSEM analysis to inspect the substantiveness of the local misspecifications signified by a statistically significant chi-square on the measurement model of the SAPOS for two independent study samples.
Research method
Research participants
A cross-sectional survey research design was followed in this study. Participants were recruited through non-probability purposive samples of mostly professional-level employees from various organisations in both the private and public sectors in South Africa. Sample 1 was the data set that Olckers (2013) used in her study. I used the data to assess the substantiveness of the misspecifications in the local parameter estimates of the CFA model. Sample 1 had no missing values. Sample 2 was newly collected and not taken from Olckers’ (2013) study; its missing values were randomly dispersed, and all cases were included in the data set. Sample 2 was used to replicate the enquiry processes followed in Sample 1. Influential outliers were identified using Mahalanobis distance, loglikelihood, influence and Cook’s distance diagnostics. Excluding the most influential outliers from the data sets had a negligible effect on the robust ML model fit statistics and parameter estimates for the CFA models. Both data sets were consequently retained with the outlier cases included.
Sample 1 consisted of the data used for the initial development of the SAPOS (Olckers, 2013). For the purpose of this study, the two data sets originally used for the EFA and the CFA, respectively, were combined. The total sample consisted of 712 respondents, of whom 41% were men and 49% were women. Of the sample, 60% were white respondents and 32% were Africans. The average age of the respondents was 40 years. Approximately 91% of the sample had obtained a tertiary education. Of the respondents, 68% functioned at a managerial level. In terms of employment tenure, 44% had been working in their current organisation for a period of less than 5 years and the remainder (56%) had been employed for more than 5 years.
Sample 2 was newly acquired and consisted of 254 respondents, of whom 66% were men and 34% were women. Of the sample, 54% were white respondents and 36% were Africans. The average age of the respondents was 39 years. Approximately 53% of the sample had obtained a tertiary education. Of the respondents, 82% functioned at a managerial level. In terms of employment tenure, 24% had been working in their current organisation for a period of less than 5 years and the remainder (76%) had been employed for more than 5 years.
Measure
The SAPOS consists of 35 items and represents four factors: self-identity, responsibility, autonomy and territoriality. The original item numbering reported in the study conducted by Olckers (2013, p. 10) was retained for the purposes of this study. Each item was scored on a six-point Likert-type rating scale (1 = strongly disagree; 6 = strongly agree). Satisfactory reliability coefficients of 0.94 for the identity subscale, 0.87 for both the responsibility and autonomy subscales and 0.78 for the territoriality subscale were reported (Olckers, 2013).
Procedure
Participants from several organisations completed the questionnaire in their personal capacity and they gave their informed consent. The purpose of the research was explained to the respondents, and participation in the survey was voluntary. Data were collected by means of an electronic self-administered questionnaire or hard copies of the questionnaire that were distributed. The confidentiality and anonymity of the respondents were respected at all times.
Analytical approach
A CFA was used to test the SAPOS measurement model for each of the samples, followed by BSEM to investigate the parameters that might have been misspecified, contributing to a significant chi-square value and consequently model rejection (Asparouhov et al., 2015). The MPlus Statistical Software Version 8.3 and the ML estimation method with robust standard errors (MLR) were used to conduct the CFA (Muthén & Muthén, 2017). The MLR estimator can be applied effectively when the assumption of multivariate normality is violated, as is typical of self-report Likert measurement scales, which are essentially categorical and ordinal in nature (Schmitt, 2011). I relied on the highly recommended full-information ML and Bayesian estimators (a default option in MPlus 8 that, in accordance with missing data theory, estimates models using all available data) to effectively account for missing data (Muthén & Muthén, 2017). Overall, 1.5% of data points out of the total data set were missing at random for Sample 2, and no data were missing for Sample 1.
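As an illustration, the baseline four-factor CFA could be specified in MPlus roughly as follows (the file name, missing-value flag and item-to-factor assignment are hypothetical and are shown only to fix ideas):

    DATA:      FILE = sapos.dat;           ! hypothetical file name
    VARIABLE:  NAMES = q1-q35;
               MISSING = ALL (-999);       ! full-information estimation uses all available data
    ANALYSIS:  ESTIMATOR = MLR;            ! robust ML for non-normal Likert-type items
    MODEL:
      identity BY q1-q16;                  ! illustrative item-to-factor assignment
      respons  BY q17-q25;
      autonomy BY q26-q30;
      territ   BY q31-q35;
    OUTPUT:    STANDARDIZED;  RESIDUAL;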
The BSEM analysis allows for models to be modified progressively from a CFA model using a Bayes estimator to a full BSEM model that includes parameters with cross-loadings and correlated residuals (Asparouhov et al., 2015). More details on the BSEM analysis process followed in this study can be found in Asparouhov et al. (2015, pp. 14–15). With BSEM analysis, parameters fixed to zero are converted to approximately zero parameters as the analysis progresses. Bayesian structural equation modelling thus preserves the CFA model while allowing the evidence in the data to drive parameters away from zero where such evidence exists. These results may then be used to evaluate the discrepancies between a hypothesised CFA model and the data (Asparouhov et al., 2015).
The first model (Model 1) consisted of a CFA model that specified no cross-loadings or correlated residuals: by implication, all these parameters were fixed to zero. The second model (Model 2) consisted of a CFA model specifying small variance priors, resulting in normally distributed non-zero cross-loadings that could range from near zero to noticeable (λ < 0.30). The third model (Model 3) consisted of the cross-loading priors specified in Model 2 and additional, potentially misspecified correlated residual parameters with small variance priors around zero.
Model 1 was obtained by running a CFA with a Bayes estimator and a diffuse (non-restricted) variance prior setting, which yields results equivalent to those of an ML CFA model.
Model 2 was obtained by using sensitivity analyses of at least five runs, starting with normally distributed priors with a mean of zero and an extremely small variance (0.001), depicted in MPlus as N (0, 0.001). The prior variances were systematically increased with each run as follows: N (0, 0.005), N (0, 0.01), N (0, 0.015), N (0, 0.02) and N (0, 0.025) (see the sketch below). The effect of the varying small variance priors for the factor cross-loadings on the measurement model fit was tested using posterior predictive p values (PPP values). Stable or diminishing returns in the difference of the chi-square values for the observed and the replicated data at a 95% confidence interval (lower 2.5% PP limit and upper 97.5% PP limit) signify that much of the model fit improvement has already been gained at these prior levels and no further gains can be expected (Asparouhov & Muthén, 2017, p. 14). The prior posterior predictive p-value (PPPP) was used to evaluate the plausibility of the small variance priors specified for the factor cross-loadings. A PPPP value exceeding 0.05 would mean that the small variance priors specified for the model were supported by the data (Asparouhov & Muthén, 2017). The small variance priors for the factor cross-loadings selected for Model 2 were included in Model 3.
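Continuing the hypothetical input above, Model 2 labels every non-target loading and attaches the small variance prior to the labels; only the MODEL PRIORS line changes between sensitivity runs:

    ANALYSIS:  ESTIMATOR = BAYES;  CHAINS = 4;  BITERATIONS = (50000);
    MODEL:
      identity BY q1-q16*  q17-q35 (x1-x19);      ! all 105 non-target loadings labelled
      respons  BY q17-q25* q1-q16  (x20-x35)  q26-q35 (x36-x45);
      autonomy BY q26-q30* q1-q25  (x46-x70)  q31-q35 (x71-x75);
      territ   BY q31-q35* q1-q30  (x76-x105);
      identity@1;  respons@1;  autonomy@1;  territ@1;   ! identification
    MODEL PRIORS:
      x1-x105 ~ N(0, 0.001);                      ! run 1; later runs use 0.005, 0.01,
                                                  ! 0.015, 0.02 and 0.025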
Model 3 was obtained using the diagonal residual covariance matrix (θ) of the CFA model. The prior for the θ matrix is set as an inverse Wishart prior, θ ~ IW(D·d, d), for each parameter, where d is the degrees of freedom and D is the residual variance from the CFA in Model 1 (see detailed formulas in Asparouhov et al., 2015, p. 5). As d increases, the prior variances for all parameters converge to zero and are equivalent to those of a CFA model with correlated residual parameters fixed to zero. Thus, with a large d the estimated BSEM model will be equivalent to the CFA model and will produce a PPP of zero (PPP = 0). Therefore, the model will be rejected, as will be the case with a CFA model that has a significant ML chi-square value. By reducing d, small variance priors can be added to the CFA model, resulting in a more flexible model. The data will determine whether small correlated residual parameters are needed to obtain model fit. While keeping Model 2’s selected cross-loading priors intact, the inverse Wishart priors are varied systematically in a sensitivity analysis to produce a PPP that marginally exceeds the value of 0.05 while sustaining fast convergence. An ad hoc iterative process was followed in the sensitivity analysis, with at least five iterations, to obtain the required model. Based on Asparouhov et al.’s (2015, p. 6) recommendations, the starting value of d was varied according to the sample that was analysed (e.g. d = 150 for N ≈ 254, d = 350 for N ≈ 700). According to Asparouhov and Muthén (2017, p. 10), the PPPP value as implemented in MPlus only tests for minor parameters on factor loadings (λ) and is not applicable to the small correlated residual parameters tested in Model 3. Nevertheless, the inclusion of small variance correlated residuals in the model will be inclined to shrink the cross-loadings in the model and could affect the PPPP value.
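In MPlus terms, Model 3 keeps Model 2’s cross-loading priors and additionally labels all residual variances and covariances so that inverse Wishart priors can be attached to them. The sketch below uses the same hypothetical layout as before; the diagonal scale value is purely illustrative (in the analysis it is D·d, computed from each item’s Model 1 residual variance), while d = 500 corresponds to the value eventually selected for Sample 1:

    ANALYSIS:  ESTIMATOR = BAYES;  CHAINS = 4;  BITERATIONS = (50000);
    MODEL:
      identity BY q1-q16*  q17-q35 (x1-x19);
      respons  BY q17-q25* q1-q16  (x20-x35)  q26-q35 (x36-x45);
      autonomy BY q26-q30* q1-q25  (x46-x70)  q31-q35 (x71-x75);
      territ   BY q31-q35* q1-q30  (x76-x105);
      identity@1;  respons@1;  autonomy@1;  territ@1;
      q1-q35 (v1-v35);                      ! label the 35 residual variances
      q1-q35 WITH q1-q35 (c1-c595);         ! label all 595 residual covariances
    MODEL PRIORS:
      x1-x105 ~ N(0, 0.015);                ! cross-loading prior carried over from Model 2
      c1-c595 ~ IW(0, 500);                 ! off-diagonal elements: IW(0, d), d = 500
      v1-v35  ~ IW(250, 500);               ! diagonal elements: scale = D x d; D = 0.5 is
                                            ! an illustrative common value only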
The BSEM estimations were done with four independent MCMC chains using the Gibbs sampler. Model convergence was assessed using the potential scale reduction (PSR) factor diagnostic as well as the Kolmogorov–Smirnov test (K–S test) and a visual inspection of the parameter trace and density plots. Convergence was assumed when the PSR value was below or close to 1.05 and the trace and density plots suggested sufficient coverage and mixing of the chains. All model tests were started with 50 000 iterations, and, if satisfactory convergence was not obtained, the number of iterations was doubled until satisfactory convergence was obtained. Model fit was evaluated using the PPP. The PPP value is defined as the proportion of the chi-square values of the simulated or replicated data that exceed that of the observed data. A low PPP (< 0.05) indicates poor model fit, whereas PPP values of around 0.50 indicate very good fit (Muthén & Asparouhov, 2012). However, in this analysis, the objective was to achieve a PPP value slightly exceeding 0.05. A model that shows such a PPP value would be the model of interest, as it is considered the BSEM model closest to the CFA model that fits well (chi-square of p > 0.05) and resolves all the CFA model’s misfits (Asparouhov et al., 2015, p. 6).
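Expressed as a formula (a standard definition, restated here for clarity): at each retained MCMC draw s, a replicated data set is generated from the model, and

\[ \mathrm{PPP} = \frac{1}{S} \sum_{s=1}^{S} \mathbb{1}\left[ \chi^{2}_{\mathrm{rep},\,s} \ge \chi^{2}_{\mathrm{obs},\,s} \right], \]

where \(\chi^{2}_{\mathrm{obs},\,s}\) and \(\chi^{2}_{\mathrm{rep},\,s}\) are the discrepancy (chi-square) values of the observed and replicated data at draw s, and S is the number of draws. A value near 0.50 means that the observed data are essentially indistinguishable from data generated by the model.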
In accordance with the recommendations of Dunn, Baguley and Brunsden (2014), McDonald’s ω (omega) was used to estimate the scales’ internal consistency (reliability) in order to overcome the limitations associated with the alpha reliability coefficient.
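For a unidimensional scale with k items, uncorrelated residuals and a factor variance fixed at 1, McDonald’s omega takes the familiar form (shown here for reference; this is the general definition rather than a formula given by Dunn et al.):

\[ \omega = \frac{\left( \sum_{i=1}^{k} \lambda_{i} \right)^{2}}{\left( \sum_{i=1}^{k} \lambda_{i} \right)^{2} + \sum_{i=1}^{k} \theta_{ii}}, \]

where \(\lambda_{i}\) are the factor loadings and \(\theta_{ii}\) the residual variances. Unlike alpha, omega does not assume tau-equivalence (equal loadings across items).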
The SAPOS variables were standardised to form a uniform metric with a mean of 0 and a standard deviation of 1 to ensure that the scale of the items did not interfere with the prior settings.
Ethical considerations
Ethical clearance for the research was obtained from the Ethics Committee of the Faculty of Economics and Management Sciences at the relevant university.
Results
Descriptive statistics for Sample 1 showed a mean item skewness of -0.96 (range: -1.59 to 0.35) and a mean item kurtosis of 1.49 (range: -1.19 to 6.24). Sample 2 showed a mean item skewness of -1.34 (range: -2.58 to 0.16) and a mean item kurtosis of 2.37 (range: -1.22 to 10.68). The robust ML and Bayesian estimators used in this study are known to be effective for non-normal distributions.
The CFA fit statistics for Sample 1, using a robust chi-square statistic, were as follows: χ2 (554) = 1453.844*, RMSEA = 0.048 (90% confidence interval [CI]: 0.048–0.051), CFI = 0.903, TLI = 0.896 and SRMR = 0.050. In terms of the chi-square result, the model should be rejected, but in terms of the GoF indices the fit could be interpreted as marginal. There were no convergence problems, and all estimated parameters, standard errors, collinearity and residual variances that could have influenced the model fit statistics were checked for signs of abnormality.
The results of the BSEM analysis conducted on Sample 1 are presented in Table 1. With respect to Samples 1 and 2, model convergence was tested after 50 000 iterations, and the PSR criterion (≤ 1.05) and trace plots suggested acceptable convergence levels for Models 1, 2 and 3 (see Figure 2 and Figure 3 for examples of trace and density plots for question 22 loading onto the latent variable of territoriality in Model 3). The trace plot of the four chains should show clear mixing, and the density plot should show a smooth, approximately normal distribution, suggesting acceptable convergence.
FIGURE 2: Trace plot for question 22 parameter loading on the latent variable of territoriality (Sample 1).
FIGURE 3: Density function for question 22 parameter loading on territoriality (Sample 1).
TABLE 1: Bayesian structural equation modelling model fit statistics for Sample 1.
With respect to Sample 1, the BSEM model fit indices for Model 1 showed inadequate fit (PPP < 0.05) in a way that agrees with the ML CFA model fit. Model 2 also showed inadequate fit (PPP < 0.05); however, the lower (PP limit = 2.5%) and upper (PP limit = 97.5%) confidence levels showed that, at a 95% confidence interval, the difference between the observed and the replicated chi-square values had improved compared to Model 1. The relatively stable PPP difference values across the models tested suggested that the varying small cross-loading priors alone were insufficient to obtain an overall acceptable model fit. The PPPP value of 0.49 that was obtained in the sensitivity analysis for the small variance priors N (0, 0.015) represented a good fit (PPPP > 0.05), and these priors were consequently selected for inclusion in Model 3. According to Muthén and Asparouhov (2012, p. 317), a prior with a mean of zero and a variance of 0.015 should provide a maximum cross-loading at a 95% interval of approximately 0.24 (see the calculation below). I assumed that the cross-loadings would not exceed 0.30, which, in accordance with classical test theory, would represent a significant loading (Sass & Schmitt, 2010; Thurstone, 1947). Misspecifications of around 0.20 can be considered noticeable and of some importance, and 0.30 is considered important in terms of substantive classical test theory (Muthén & Asparouhov, 2012). Model 3 was identified as having a sufficient PPP value of 0.09 (PPP > 0.05) with an inverse Wishart prior of d = 500; therefore, the model can be interpreted as a CFA model that fits the data sufficiently well (Asparouhov et al., 2015, pp. 7, 12). The data showed that the model misfit could be ascribed to random noise (white noise) caused by minor correlated residuals, with 98% of the values less than 0.10 (see Figure 4) (Asparouhov & Muthén, 2017).
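The 0.24 bound follows directly from the prior’s standard deviation; the same calculation gives the 0.31 bound quoted for the N (0, 0.025) prior used for Sample 2 later on:

\[ 1.96 \times \sqrt{0.015} \approx 0.24, \qquad 1.96 \times \sqrt{0.025} \approx 0.31 . \]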
Only approximately 2% of the correlated residuals fell outside the range of -0.10 to 0.10, with the largest value being -0.13. In total, 50 of the 595 correlated residuals were significant (i.e. the 95% credibility interval did not contain zero), none of which can be considered substantive. Values of around 0.20 can be considered substantive when statistically significant, that is, when the 95% credibility interval does not contain zero (Asparouhov et al., 2015, p. 7). The BSEM factor loadings and the cross-loadings for Model 3 and Sample 1 are presented in Table 2. The results suggested a well-defined factor model with strong target loadings (average = 0.69, minimum = 0.48, maximum = 0.95) and small, non-substantive cross-loadings (average = 0.006, range = -0.16 to 0.16). The results further suggested that the significant chi-square statistic, on the basis of which the ML CFA model was rejected, was due to the accumulated effect of small residual correlations or white noise, and that this model should actually be considered a good approximation of the data (Asparouhov et al., 2015).
TABLE 2: Bayesian structural equation modelling factor loadings for Samples 1 and 2 of Model 3.
The CFA fit statistics for Sample 2, using a robust chi-square statistic, were as follows: χ2 (554) = 1075.83*, RMSEA = 0.06 (90% CI: 0.055–0.066), CFI = 0.846, TLI = 0.835 and SRMR = 0.066. In terms of the chi-square result, the model should be rejected, and in terms of the GoF indices it could also be interpreted as a poor fit. Confirmatory factor analysis model fit indices are inclined to penalise complex models when fit is sought for smaller samples, and the likelihood of biased parameter estimates increases significantly (Wolf, Harrington, Clark, & Miller, 2013). There were no convergence problems, and all estimated parameters, standard errors, collinearity and residual variances that could have influenced the model fit statistics were checked for signs of abnormality.
With respect to Sample 2, the BSEM model fit indices (see Table 3) for Model 1 showed inadequate fit (PPP < 0.05) in a way that agrees with the ML CFA model. Model 2 also showed inadequate fit (PPP < 0.05), and the relatively stable PPP difference values across the models tested suggested that the varying small cross-loading priors alone were insufficient to obtain an overall acceptable model fit. However, the PPPP values of 0.08, 0.32 and 0.62 obtained from the sensitivity analysis supported small cross-loading priors of N (0, 0.02), N (0, 0.025) and N (0, 0.03), respectively. In line with the BSEM framework of Asparouhov et al. (2015), the model with cross-loadings that closely approximated zero and showed a convincing PPPP value was selected for inclusion in Model 3. A prior with a mean of zero and a variance of 0.025 should provide a maximum cross-loading at a 95% interval of approximately 0.31 (Muthén & Asparouhov, 2012, p. 317). Model 3 was identified as having a sufficient PPP value of 0.13 (PPP > 0.05); therefore, this model can be interpreted as having a sufficiently good fit. The data showed that the CFA model misfit could be ascribed mostly to white noise caused by minor correlated residuals. The results showed that 96.7% of the values were within the range of -0.10 to 0.10 (see Figure 5), that only approximately 3.3% of the correlated residuals were between the values of 0.10 and 0.162 and that only one value exceeded 0.162, at 0.21. In total, 18 of the 595 correlated residuals were significant (i.e. the 95% credibility interval did not contain zero), of which one could be considered substantive (r = 0.21). This value represented the correlated residuals for questions 55 and 56 (Q55 and Q56) of the questionnaire.
TABLE 3: Bayesian structural equation modelling model fit statistics for Sample 2.
Table 2 also presents the BSEM factor loadings and cross-loadings for Model 3 and Sample 2. The results suggested a well-defined factor model with strong target loadings (average = 0.67, minimum = 0.38, maximum = 0.93) and 98% of the cross-loadings non-substantive (< 0.20; average = 0.011, range = -0.14 to 0.23). The significant and substantive cross-loadings of 0.23 and 0.22 on the latent variable of responsibility involved Q9 and Q61, respectively. Considering that these items formed part of a lengthier subscale and showed small cross-loadings in terms of generally accepted criteria for saliency or importance (λ < 0.30), the effect on the structure of the overall measurement model should be minor.
The pattern of inter-correlations between the four latent variables of the SAPOS (see the BSEM model’s values below the diagonal in Table 4) suggests well-delineated constructs with sufficient discriminative power for both Samples 1 and 2. There was no evidence of over-factoring, which is associated with high correlations (e.g. r > 0.90) between latent variables after the inclusion of the small variance correlated residuals in the model (Asparouhov et al., 2015). Item cross-loadings as low as 0.13 could lead to substantially inflated (biased) target factor loadings and factor inter-correlations when such cross-loadings are constrained to zero in the CFA model (Hsu et al., 2014). When comparing the CFA model’s factor inter-correlations (see Table 4, above the diagonal) with the BSEM model’s factor inter-correlations (see Table 4, below the diagonal), the differences are negligibly small for both samples. Therefore, it can be concluded that model misspecification attributed to the item cross-loadings in the CFA model did not substantially bias the factor inter-correlations for either sample. The relatively higher inter-correlations obtained for the latent variables of identity, responsibility and autonomy and the low correlations obtained for the latent variable of territoriality in both samples are supported by PO theory (Olckers, 2013).
TABLE 4: Inter-correlation (covariance) matrix for the latent variables as produced by the confirmatory factor analysis and Bayesian structural equation modelling models.
The omega reliability coefficients reported in Table 4 (see greyscale cell values on the diagonal line) for the SAPOS appear to be of an acceptable magnitude for both Samples 1 and 2 (Dunn et al., 2014). Reise, Bonifay and Haviland (2013) proposed a value close to 0.75 or higher as the preferred value.
In summary, the results suggested that the rejection of the ML CFA model might have been partly because of small cross-loadings but particularly because of the accumulated effect of small and random residual correlations, and that the model should, therefore, be considered a good approximation of the data.
Discussion
Outline of results
The purpose of this study was to explicate the SAPOS’s CFA measurement model fit using BSEM, a methodology that has only recently been adopted by researchers in the field (De Beer & Bianchi, 2017; Dombrowski, Golay, McGill, & Canivez, 2018; Reis, 2017). The current study is highly relevant given the increased awareness of the incorrect use of CFA model fit indices and of the negative implications that ill-defined measurement models can have on research findings in the social sciences (Greiff & Heene, 2017; Perry et al., 2015). It has been common practice to dismiss the important ML chi-square result out of hand on the basis that it is an oversensitive indicator of model misspecification (Barrett, 2007; Ropovik, 2015). Considering that the GoF indices, on which researchers have strongly relied, have been shown to be unreliable, the implications of ignoring the only reliable statistic (i.e. the chi-square statistic) may have had dire consequences for the validity of PO research findings (Barrett, 2007). Based on the aforementioned, there have been renewed calls for more in-depth analyses of local parameter misspecifications of measurement models (Greiff & Heene, 2017; Hayduk, 2014; Heene et al., 2012; Ropovik, 2015). In the current study, BSEM proved to be a valuable diagnostic tool for studying the significance and substantiveness of model misspecification at a local parameter level.
The results of the current study show that the SAPOS measurement model is supported by the data obtained and that the significant ML chi-square obtained for both samples can to a large extent be ascribed to the effect of random noise in the correlated residuals and small cross-loadings. The fact that the powerful new PPPP statistic for small variance priors did not reject the assumption that the parameter cross-loadings are non-substantive and near zero further supports the notion that model misfit can be ascribed to the effect of random noise (Asparouhov & Muthén, 2017). In addition, the non-rejected PPP values in Model 3 provide strong support for the notion that most of the misfit in the ML CFA model can be attributed to the accumulated effect of a large number of minor residual correlation misspecifications (Asparouhov et al., 2015). There would therefore be no reason to shorten the scales or to resort to the strongly criticised data-driven approach of freeing correlated residuals sequentially for the sole purpose of improving the CFA model fit. This study has shown that the overgeneralised cut-off values of the CFA’s GoF indices would have led to the erroneous rejection of the SAPOS measurement model, especially with respect to the smaller sample, while the model actually represented the data reasonably well. More specifically, Sample 2 showed two small but substantive cross-loadings. Moreover, there appears to be a logical explanation for these ambivalences, as the overlap between the variables of responsibility and identity is strong overall (see Table 4). The overlap makes theoretical sense as the constructs are related (Olckers & Van Zyl, 2017). The shared variances of these constructs lead to some level of construct-relevant association with the items that cross-load. Morin, Katrin Arens and Marsh (2015, p. 20) state that ‘factors are specified to influence the indicators rather than the reverse’. Therefore, a small cross-loading that is in line with theoretical expectations is seen as a shared construct-relevant variance that supports the nature of the construct and does not taint it. Furthermore, the CFA model for the SAPOS showed negligible bias in the factor inter-correlations because of misspecified factor loadings (cross-loadings) for both samples.
The results show that many minor correlated residuals account for most of the model misfit in the ML CFA model of the SAPOS. One noticeable correlated residual on the SAPOS is that for Q55 and Q56 (Q55: ‘I feel secure in this organisation’ and Q56: ‘I feel that I have common interests with my organisation that are stronger than our differences’). The existence of isolated substantive correlated residuals can be ascribed to methodological artefacts resulting in variance unrelated to the construct, such as negative wording, adjacency effects, question order, parallel wording and similar contexts (De Beer & Bianchi, 2017; Reis, 2017). While multiple substantive correlated residuals may be a sign of additional factors not modelled (Heene et al., 2012), the exact reason for the correlated residual variance in Q55 and Q56 is not clear; however, adjacency effects or question order may to some extent explain this variance, as the questions follow each other directly. The two residuals also showed a significant correlation in Sample 1, although the effect was much less pronounced and considered minor and non-substantive (r = 0.12). Overall, this slight correlated residual outlier appears to be an isolated incident that has little effect on the overall model fit and can be monitored in future studies with new samples.
The evidence from this study suggested that substantive misspecifications of the factor indicators or item cross-loadings, inflated factor inter-correlations, isolated substantive correlated residuals, the possibility of missing factors, over-factoring, item redundancy in factors and other substantive nuisance factors in the SAPOS CFA measurement model should be of little concern.
Contribution and practical implications of the study
With the aim of improving the existing PO theory, this study demonstrated an important and relevant shift from the loosely applied CFA global model fit dogma to the detailed examination of local parameter misspecifications or flaws (Greiff & Heene, 2017; Hayduk, 2014; Hayduk et al., 2007; Heene et al., 2012; Ropovik, 2015; Saris et al., 2009). Instead of focusing on the global statistics of ‘model fit’, this study focused on understanding the details of ‘the model providing the (mis)fit’, which may have substantive theoretical implications supported by plausible evidence (Hayduk et al., 2007). The findings of the study are relevant to previous and future research on PO. In reviewing the existing literature, the author identified an apparent drive to obtain sufficient model fit using shorter PO measures while risking construct representativeness and construct generalisability. Apart from taking account of target factor loadings, applied science researchers rarely recognise that CFA model misspecifications involve a large number of parameter estimates (i.e. correlated residuals and cross-loadings) that need rigorous investigation before substantive conclusions about model fit and the underlying theory can be made (Greiff & Heene, 2017; Ropovik, 2015). When researchers rigidly apply CFA assumptions in the face of imperfect factor indicators, the development of plausible measurement models that sufficiently represent theory and the realities of social phenomena is strewn with difficulties (Asparouhov et al., 2015; Marsh et al., 2010; Ropovik, 2015). Where lengthy and multidimensional measures such as the SAPOS need testing, the indiscriminate application of CFA assumptions and global fit indices further escalates the problem of questionable theoretical deductions (Marsh et al., 2004).
In this study, the extent of, and reasons for, parameter misspecifications in the SAPOS measurement model were investigated for the first time, using two independent study samples. The study provided important evidence concerning the plausibility of the theoretical deductions that can be made once substantive and non-substantive misspecifications have been distinguished within the framework of substantive classical test theory (Asparouhov et al., 2015; Muthén & Asparouhov, 2012). Plausible deductions about the structural validity of the SAPOS measurement model could be made with more confidence after accounting for the effect of non-substantive misspecifications, or noise, on the significance of the chi-square statistic and on the relatively weak and unreliable global model fit index values. Using BSEM, the study supplied new, important and detailed evidence that supports the SAPOS CFA measurement model as a plausible representation of the postulated theory. The findings suggest that the SAPOS can be used in related studies without shortening the measure for the sole purpose of improving CFA GoF index values, a practice that sacrifices construct coverage and may affect the SAPOS’s validity.
Conclusion
As demonstrated by the findings of this study, a parallel may be drawn between the problems associated with the ML CFA chi-square and fit indices and the ideas Cohen (1994) advanced in his article ‘The earth is round (p < 0.05)’. In both cases, strictly applying the p < 0.05 cut-off value for statistical significance to an exact null hypothesis may lead to rejection on the basis of small and irrelevant deviations from that hypothesis (Hoijtink & Van de Schoot, 2018, p. 1). BSEM addresses this problem by allowing small deviations from the exact-zero parameter values of the traditional null hypothesis, deviations that more realistically represent the phenomenon being studied, thereby providing some ‘wiggle room’ (Hoijtink & Van de Schoot, 2018, p. 1). The findings of this study, as well as those of recently published related studies, provide strong support for the effectiveness of the BSEM technique as a diagnostic tool for detecting model misspecification in ML CFA studies (De Beer & Bianchi, 2017; Dombrowski et al., 2018; Reis, 2017).
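The ‘wiggle room’ can be stated formally. Where ML CFA fixes every nuisance parameter, for example a cross-loading \(\lambda_{ij}\), exactly at zero, BSEM replaces the exact-zero constraint with a small-variance prior (Muthén & Asparouhov, 2012):

\[ \lambda_{ij} = 0 \quad \text{(exact-zero constraint in ML CFA)} \]

\[ \lambda_{ij} \sim N(0,\ 0.01) \quad \text{(approximate-zero prior in BSEM)} \]

so that approximately 95% of the prior mass falls in the interval \((-1.96 \times 0.1,\ 1.96 \times 0.1) = (-0.196,\ 0.196)\). Nuisance parameters may therefore deviate trivially from zero without forcing rejection of the model, while posteriors that escape this narrow band expose genuinely substantive misspecifications.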
In this study, the SAPOS measurement model fit was explicated using two independent samples, and the findings suggested that a rejection of the CFA model based on a significant chi-square statistic and the unreliable GoF indices would be unrealistic and unfounded in terms of substantive classical test theory.
Study limitations and future research
Plausible alternative models (e.g. second-order and bifactor models) were not investigated in this study, and it is recommended that future studies focus on such alternatives. Furthermore, the sample was limited to a South African population group, and the SAPOS measurement model may therefore not generalise to other population groups.
Acknowledgements
The author wishes to thank Prof. Chantal Olckers, Department of Human Resources Management, University of Pretoria, South Africa, for making the SAPOS data available for the purposes of this study.
Competing interests
The author declares that he has no financial or personal relationship(s) which may have inappropriately influenced him in writing this article.
Author’s contributions
I declare that I am the sole author of this research article.
Funding information
This manuscript is based on research partly supported by the National Research Foundation of South Africa (Grant Number 103796).
Data availability statement
The Mplus syntax used in the BSEM analysis is available from the corresponding author on reasonable request. The data are the intellectual property of the University of Pretoria and are not available for sharing.
Disclaimer
The views and opinions expressed in this article are the author’s own and do not reflect an official position of the University of Pretoria or the National Research Foundation of South Africa.
References
Asparouhov, T., & Muthén, B. (2017). Prior-posterior predictive p-values. Retrieved from https://www.statmodel.com/download/PPPP.pdf
Asparouhov, T., Muthén, B., & Morin, A.J.S. (2015). Bayesian structural equation modeling with cross-loadings and correlated residuals. Journal of Management, 41(6), 1561–1577. https://doi.org/10.1177/0149206315591075
Avey, J.B., Avolio, B.J., Crossley, C.D., & Luthans, F. (2009). Psychological ownership: Theoretical extensions, measurement, and relation to work outcomes. Journal of Organizational Behavior, 30(2), 173–191. https://doi.org/10.1002/job.583
Barrett, P. (2007). Structural equation modeling: Adjudging model fit. Personality and Individual Differences, 42(5), 815–824. https://doi.org/10.1016/j.paid.2006.09.018
Belk, R. (1988). Possessions and the extended self. Journal of Consumer Research, 15(2), 139–168. https://doi.org/10.1086/209154
Cohen, J. (1994). The earth is round (p < 0.05). American Psychologist, 49(12), 997–1003. https://doi.org/10.1037/0003-066X.49.12.997
Dawkins, S., Tian, A.W., Newman, A., & Martin, A. (2017). Psychological ownership: A review and research agenda. Journal of Organizational Behavior, 38(2), 163–183. https://doi.org/10.1002/job.2057
De Beer, L.T., & Bianchi, R. (2017). Confirmatory factor analysis of the Maslach Burnout inventory. European Journal of Psychological Assessment, 35(2), 217–224. https://doi.org/10.1027/1015-5759/a000392
Dittmar, H. (1992). The social psychology of material possessions: To have is to be. New York: Harvester Wheatsheaf. Retrieved from https://searchworks.stanford.edu/view/2289447
Dombrowski, S.C., Golay, P., McGill, R.J., & Canivez, G.L. (2018). Investigating the theoretical structure of the DAS-II core battery at school age using Bayesian structural equation modeling. Psychology in the Schools, 55(2), 190–207. https://doi.org/10.1002/pits.22096
Dunn, T.J., Baguley, T., & Brunsden, V. (2014). From alpha to omega: A practical solution to the pervasive problem of internal consistency estimation. British Journal of Psychology, 105(3), 399–412. https://doi.org/10.1111/bjop.12046
Greiff, S., & Heene, M. (2017). Why psychological assessment needs to start worrying about model fit. European Journal of Psychological Assessment, 33(5), 313–317. https://doi.org/10.1027/1015-5759/a000450
Guay, F., Morin, A.J.S., Litalien, D., Valois, P., & Vallerand, R.J. (2014). Application of exploratory structural equation modeling to evaluate the academic motivation scale. The Journal of Experimental Education, 83(1), 51–82. https://doi.org/10.1080/00220973.2013.876231
Hair, J., Black, W., Babin, B., & Anderson, R. (2010). Multivariate data analysis: A global perspective. London: Pearson Education.
Hayduk, L.A. (2014). Shame for disrespecting evidence: The personal consequences of insufficient respect for structural equation model testing. BMC Medical Research Methodology, 14, 124. https://doi.org/10.1186/1471-2288-14-124
Hayduk, L.A., Cummings, G., Boadu, K., Pazderka-Robinson, H., & Boulianne, S. (2007). Testing! testing! one, two, three: Testing the theory in structural equation models! Personality and Individual Differences, 42(5), 841–850. https://doi.org/10.1016/j.paid.2006.10.001
Heene, M., Hilbert, S., Draxler, C., Ziegler, M., & Bühner, M. (2011). Masking misfit in confirmatory factor analysis by increasing unique variances: A cautionary note on the usefulness of cut-off values of fit indices. Psychological Methods, 16(3), 319–336. https://doi.org/10.1037/a0024917
Heene, M., Hilbert, S., Harald Freudenthaler, H., & Bühner, M. (2012). Sensitivity of SEM fit indexes with respect to violations of uncorrelated errors. Structural Equation Modeling, 19(1), 36–50. https://doi.org/10.1080/10705511.2012.634710
Hoijtink, H., & van de Schoot, R. (2018). Testing small variance priors using prior-posterior predictive p values. Psychological Methods, 23(3), 561–569. https://doi.org/10.1037/met0000131
Howard, J.L., Gagné, M., Morin, A.J.S., & Forest, J. (2018). Using bifactor exploratory structural equation modeling to test for a continuum structure of motivation. Journal of Management, 44(7), 2638–2664. https://doi.org/10.1177/0149206316645653
Hsu, H.Y., Skidmore, S.T., Li, Y., & Thompson, B. (2014). Forced zero cross-loading misspecifications in measurement component of structural equation models: Beware of even ‘small’ misspecifications. Methodology, 10(4), 138–152. https://doi.org/10.1027/1614-2241/a000084
Hu, L., & Bentler, P.M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6(1), 1–55. https://doi.org/10.1080/10705519909540118
Lai, K., & Green, S.B. (2016). The problem with having two watches: Assessment of fit when RMSEA and CFI disagree. Multivariate Behavioral Research, 51(2–3), 220–239. https://doi.org/10.1080/00273171.2015.1134306
Marsh, H.W., & Hau, K.-T. (2007). Applications of latent-variable models in educational psychology: The need for methodological-substantive synergies. Contemporary Educational Psychology, 32(1), 151–170. https://doi.org/10.1016/j.cedpsych.2006.10.008
Marsh, H.W., Hau, K.-T., & Wen, Z. (2004). In search of golden rules: Comment on hypothesis-testing approaches to setting cutoff values for fit indexes and dangers in overgeneralizing Hu and Bentler’s (1999) findings. Structural Equation Modeling: A Multidisciplinary Journal, 11(3), 320–341. https://doi.org/10.1207/s15328007sem1103_2
Marsh, H.W., Lüdtke, O., Muthén, B., Asparouhov, T., Morin, A.J.S., Trautwein, U., & Nagengast, B. (2010). A new look at the big five factor structure through exploratory structural equation modeling. Psychological Assessment, 22(3), 471–491. https://doi.org/10.1037/a0019227
Marsh, H.W., Lüdtke, O., Nagengast, B., Morin, A.J.S., & Von Davier, M. (2013). Why item parcels are (almost) never appropriate: Two wrongs do not make a right-camouflaging misspecification with item parcels in CFA models. Psychological Methods, 18(3), 257–284. https://doi.org/10.1037/a0032773
Marsh, H.W., Muthén, B., Asparouhov, T., Lüdtke, O., Robitzsch, A., Morin, A.J.S., & Trautwein, U. (2009). Exploratory structural equation modeling, integrating CFA and EFA: Application to students’ evaluations of university teaching. Structural Equation Modeling: A Multidisciplinary Journal, 16(3), 439–476. https://doi.org/10.1080/10705510903008220
McNeish, D. (2016). On using Bayesian methods to address small sample problems. Structural Equation Modeling, 23(5), 750–773. https://doi.org/10.1080/10705511.2016.1186549
McNeish, D., An, J., & Hancock, G.R. (2018). The thorny relation between measurement quality and fit index cut-offs in latent variable models. Journal of Personality Assessment, 100(1), 43–52. https://doi.org/10.1080/00223891.2017.1281286
Morin, A.J.S., Arens, A.K., & Marsh, H.W. (2015). A bifactor exploratory structural equation modeling framework for the identification of distinct sources of construct-relevant psychometric multidimensionality. Structural Equation Modeling, 23(1), 116–139. https://doi.org/10.1080/10705511.2014.961800
Muthén, B., & Asparouhov, T. (2012). Bayesian structural equation modeling: A more flexible representation of substantive theory. Psychological Methods, 17(3), 313–335. https://doi.org/10.1037/a0026802
Muthén, L.K., & Muthén, B. (2017). Mplus user’s guide (8th edn.). Los Angeles, CA: Muthén & Muthén.
Olckers, C. (2013). Psychological ownership: Development of an instrument. SA Journal of Industrial Psychology, 39(2), 1–13. https://doi.org/10.4102/sajip.v39i2.1105
Olckers, C., & Van Zyl, L. (2017). Measuring psychological ownership: A critical review. In C. Olckers, L. Van Zyl, & L. Van der Vaart (Eds.), Theoretical orientations and practical applications of psychological ownership (pp. 61–78). Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-70247-6_4
Perry, J.L., Nicholls, A.R., Clough, P.J., & Crust, L. (2015). Assessing model fit: Caveats and recommendations for confirmatory factor analysis and exploratory structural equation modeling. Measurement in Physical Education and Exercise Science, 19(1), 12–21. https://doi.org/10.1080/1091367X.2014.952370
Pierce, J.L., Kostova, T., & Dirks, K.T. (2001). Toward a theory of psychological ownership in organizations. Academy of Management Review, 26(2), 298–310. https://doi.org/10.5465/AMR.2001.4378028
Reis, D. (2017). Further insights into the German version of the multidimensional assessment of interoceptive awareness (MAIA). European Journal of Psychological Assessment, 35(3), 317–325. https://doi.org/10.1027/1015-5759/a000404
Reise, S.P., Bonifay, W.E., & Haviland, M.G. (2013). Scoring and modeling psychological measures in the presence of multidimensionality. Journal of Personality Assessment, 95(2), 129–140. https://doi.org/10.1080/00223891.2012.725437
Rindskopf, D. (2012). Next steps in Bayesian structural equation models: Comments on, variations of, and extensions to Muthén and Asparouhov (2012). Psychological Methods, 17(3), 336–339. https://doi.org/10.1037/a0027130
Ropovik, I. (2015). A cautionary note on testing latent variable models. Frontiers in Psychology, 6, 1–8. https://doi.org/10.3389/fpsyg.2015.01715
Sánchez-Oliva, D., Morin, A.J.S., Teixeira, P.J., Carraça, E.V., Palmeira, A.L., & Silva, M.N. (2017). A bifactor exploratory structural equation modeling representation of the structure of the basic psychological needs at work scale. Journal of Vocational Behavior, 98, 173–187. https://doi.org/10.1016/j.jvb.2016.12.001
Saris, W.E., Satorra, A., & Van der Veld, W.M. (2009). Testing structural equation models or detection of misspecifications? Structural Equation Modeling: A Multidisciplinary Journal, 16(4), 561–582. https://doi.org/10.1080/10705510903203433
Sass, D.A., & Schmitt, T.A. (2010). A comparative investigation of rotation criteria within exploratory factor analysis. Multivariate Behavioral Research, 45(1), 73–103. https://doi.org/10.1080/00273170903504810
Schmitt, T.A. (2011). Current methodological considerations in exploratory and confirmatory factor analysis. Journal of Psychoeducational Assessment, 29(4), 304–321. https://doi.org/10.1177/0734282911406653
Stromeyer, W.R., Miller, J.W., Sriramachandramurthy, R., & DeMartino, R. (2015). The prowess and pitfalls of Bayesian structural equation modeling: Important considerations for management research. Journal of Management, 41(2), 491–520. https://doi.org/10.1177/0149206314551962
Thurstone, L.L. (1947). Multiple-factor analysis. Chicago, IL: University of Chicago Press.
Van Dyne, L., & Pierce, J.L. (2004). Psychological ownership and feelings of possession: Three field studies predicting employee attitudes and organizational citizenship behavior. Journal of Organizational Behavior, 25(4), 439–459. https://doi.org/10.1002/job.249
Wolf, E.J., Harrington, K.M., Clark, S.L., & Miller, M.W. (2013). Sample size requirements for structural equation models: An evaluation of power, bias, and solution propriety. Educational and Psychological Measurement, 73(6), 913–934. https://doi.org/10.1177/0013164413495237