Conference Schedule

Thursday, September 9th

8:00am - 8:30am    Registration
8:30am - 8:45am    Opening Remarks
8:45am - 9:30am    Shapiro
9:30am - 10:15am   Rodgers
10:15am - 10:30am  Morning Break
10:30am - 11:00am  Cai
11:00am - 11:30am  Zhang
11:30am - 12:00pm  Nicewander
12:00pm - 1:30pm   Lunch Break
1:30pm - 2:15pm    Millsap
2:15pm - 3:00pm    Moustaki
3:00pm - 3:15pm    Afternoon Break
3:15pm - 4:00pm    MacCallum
4:00pm - 4:45pm    du Toit
4:45pm - 5:30pm    Boker
6:30pm - 9:00pm    Banquet

Friday, September 10th

8:30am - 9:15am    Cudeck
9:15am - 10:00am   MacEachern
10:00am - 10:15am  Morning Break
10:15am - 11:00am  Gonzalez
11:00am - 11:45am  Tateneni
11:45am - 12:30pm  Craigmile
12:30pm - 12:45pm  Closing Remarks

Titles and Abstracts

Steven Boker, University of Virginia, Department of Psychology
Michael Martin, University of Zurich

Title: Towards a Factor Algebra of Sufficiency

Abstract: Factor analysis involves an algebra of linear combinations. The covariance between observed indicators is accounted for by the existence of one or more latent variables. The covariance of a linear combination is at the heart of this theory. This factor algebra of crossproducts has proved to be remarkably successful for a wide class of problems. Abstractly, this algebra includes a set (indicators and latent variables), a weighting operation (multiplication), and a combination operation (addition). In modern algebraic terms, this is an example of a "ring". But some psychological constructs do not fit neatly into a factor algebra of crossproducts where the expected change in an indicator is proportional to the expected change in a latent construct. For instance, consider the behavior of indicators for the latent construct "life satisfaction". Life satisfaction might be indicated by four or five domains for one person and four or five very different domains for another person. In fact, it is virtually certain that the indicators for life satisfaction change within individuals as we age: life satisfaction involves very different indicators for a child, for a young adult, and for an older adult. And yet, theoretically, the meaning of the latent construct "life satisfaction" should not change as a person ages. In this chapter we explore algebraic rings that could be used for a latent variable analysis of sufficient sets of indicators: weighting operations and combination operations that could imply latent variables with desirable statistical properties, but without relying solely on an algebra of linear combinations.
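For reference, the "factor algebra of crossproducts" the abstract starts from is the familiar common factor decomposition (standard notation, not taken from the talk itself):

\[
\mathbf{x} = \Lambda \boldsymbol{\xi} + \boldsymbol{\delta}, \qquad \mathrm{Cov}(\mathbf{x}) = \Lambda \Phi \Lambda' + \Theta,
\]

in which each indicator in x is a weighted (multiplied) and combined (added) function of the latent variables, exactly the two ring operations the abstract proposes to generalize.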


Li Cai, UCLA, Graduate School of Education and Information Studies

Title: Three Cheers for the Asymptotically Distribution Free Theory of Estimation and Inference: Some Recent Applications in Linear and Nonlinear Latent Variable Modeling

Abstract: Though initially proposed in the context of (linear) covariance structure analysis, the logic behind the Asymptotically Distribution Free (ADF) method, so clearly explicated by Browne (1984), continues to be a major source of inspiration for researchers working in other parts of linear and nonlinear latent variable modeling. A single review would probably fail to do justice to the profound influence of Browne's original paper on ADF, as evidenced by the over 600 citations in a broad array of disciplines. Instead, the chapter is about several recent applications of the ADF method. I will first review the connections of two recently developed goodness-of-fit testing procedures with the ADF method. The first is the limited-information overall goodness-of-fit test for item response theory models. The second is a goodness-of-fit test for a normal theory two-stage estimator for mean and covariance structure analysis under missing data when the first stage relies on the Expectation Maximization algorithm. The third application is new, in which I extend the above two-stage estimator so that the first stage involves multiple imputation to fill in missing data. A chi-square distributed goodness-of-fit test statistic is also derived.
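For readers unfamiliar with the method, the ADF estimator of Browne (1984) minimizes a quadratic-form discrepancy of the general shape

\[
F_{\mathrm{ADF}}(\theta) = \{\mathbf{s} - \sigma(\theta)\}' \, \hat{W}^{-1} \, \{\mathbf{s} - \sigma(\theta)\},
\]

where s collects the sample variances and covariances, σ(θ) is the model-implied counterpart, and Ŵ is a consistent estimate of the asymptotic covariance matrix of s that requires no normality assumption.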



Peter Craigmile, Ohio State University, Department of Statistics
Mario Peruggia, Ohio State University, Department of Statistics
Trisha Van Zandt, Ohio State University, Department of Psychology

Title: Latent Variable Models for Discriminating Trend, Dependence, and Tail Structure in Response Time Data

Abstract: Analysis of response time (RT) data collected in psychology and human performance experiments usually fails to consider that responses across trials are correlated. Furthermore, the overall speed of responding may naturally fluctuate over time, even when the experimental conditions do not change. Despite these two facts (sequential dependencies and nonlinear trends across trials), standard data analysis practices such as analysis of variance or maximum likelihood estimation proceed as if response time series are independent random samples within subjects and experimental conditions. In previous work (Craigmile, Peruggia, and Van Zandt, Psychometrika, to appear) we constructed hierarchical Bayesian models that realistically capture the trend, dependence, and tail structure observed in simple RT data. In this work we extend our modeling strategies to incorporate the class of sequential sampling models discussed in Ratcliff and Smith (Psychological Review, 2004). We discuss how latent processes can be built based on these sequential sampling models and investigate, using RT experiments conducted at The Ohio State University, how imposed experimental changes relate to specific latent parameters of these models.
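As a point of reference, here is a minimal simulation of the kind of sequential sampling (diffusion) process reviewed by Ratcliff and Smith: evidence accumulates with drift until a boundary is reached, and the hitting time is the RT. The parameter values are illustrative only, not taken from the talk.

    import numpy as np

    def diffusion_rt(drift=0.3, boundary=1.0, sigma=1.0, dt=0.001, rng=None):
        # Accumulate noisy evidence until either boundary (+/- boundary) is
        # crossed; the elapsed time is the simulated response time.
        rng = rng if rng is not None else np.random.default_rng()
        x, t = 0.0, 0.0
        while abs(x) < boundary:
            x += drift * dt + sigma * np.sqrt(dt) * rng.normal()
            t += dt
        return t, x > 0  # (response time, which response was made)

    rng = np.random.default_rng(1)
    rts = np.array([diffusion_rt(rng=rng)[0] for _ in range(200)])
    print(rts.mean(), np.quantile(rts, 0.9))  # mean RT and a tail quantile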



Robert Cudeck, Ohio State University, Department of Psychology

Title: Estimating the Correlation between Two Variables when Individuals are Measured Repeatedly

Abstract: In many research projects individuals are measured repeatedly, and it sometimes happens that the number of repeated measurements varies between persons. Varying numbers of measurements can arise because some subjects are followed up for a longer period of time than others, or because of attrition, missed appointments, or experimenter errors. In the research that motivated this project, individuals were measured on two variables under several different conditions that were assigned randomly. The number of measurements varied between two and fifteen. Although the number of measurements is unequal, this is not a missing data problem. The main interest is the correlation between X and Y, taking the hierarchical structure of the data into account. A class of appropriate models, a direct method of estimation, and a connection to other latent variable models are based on a classic article by Browne (1977) concerning patterned covariance matrices.
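One natural model of the kind the abstract suggests (our illustration, not necessarily the authors' specification) stacks person i's n_i bivariate measurements and patterns the covariance as

\[
\Sigma_i = I_{n_i} \otimes \Sigma_W + J_{n_i} \otimes \Sigma_B,
\]

where Σ_W and Σ_B are 2x2 within- and between-person covariance components and J_{n_i} is an n_i x n_i matrix of ones. The pattern depends on i only through n_i, which is how unequal numbers of measurements enter without creating a missing data problem, and the correlation of interest can be read off a 2x2 component, e.g. ρ = σ_{B,xy} / √(σ_{B,xx} σ_{B,yy}).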



Stephen du Toit, Scientific Software International, Inc.

Title: Structural Equation Modeling Using a Mixture of Continuous and Ordinal Variables

Abstract: There has been growing interest in recent years in fitting models to data collected from cross-sectional and longitudinal surveys that use complex sample designs. This interest reflects an expansion in requirements by policy makers and researchers for in-depth studies of social processes. An important feature of software for the analysis of structural equation models (SEMs) is its facility to deal with a wide class of models for the analysis of latent variables (LVs). In the social sciences, and increasingly in biomedical and public health research, LV models have become an indispensable statistical tool. There are three major reasons for the utility of LV models. First, this kind of model can summarize the information contained in many response variables with a few latent variables. Second, when properly specified, an LV model can minimize the biasing effects of errors of measurement in estimating treatment effects. Third, LV models investigate effects between primary conceptual variables, rather than between any particular set of ordinary response variables. This means that an LV model is often viewed as more appropriate theoretically than a simpler analysis with response variables only. The present paper deals with the use of design weights to fit SEMs to a mixture of continuous and ordinal manifest variables, with or without missing values, and with optional specification of stratum and/or cluster variables. It also deals with the issue of robust standard error estimation and the adjustment of the chi-square goodness-of-fit statistic. Results based on a simulation study and on real data are presented. (Research was supported by SBIR grant 5R44AA014999-03 from NIAAA to Scientific Software International.)
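Although the abstract does not spell out the estimator, design-weighted estimation in this setting is typically of the pseudo-maximum-likelihood form (a general sketch, not necessarily the exact implementation in the paper):

\[
\hat{\theta} = \arg\max_{\theta} \sum_{i} w_i \log L_i(\theta),
\]

with w_i the design weights. Because the weighted score equations no longer match the information matrix, standard errors are taken from a robust sandwich covariance, and the chi-square goodness-of-fit statistic must be rescaled accordingly.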



Richard Gonzalez, University of Michigan, Department of Psychology

Title: Dyadic Data Analysis


Robert MacCallum, University of North Carolina, Department of Psychology
Taehun Lee, UCLA, Graduate School of Education and Information Studies

Title: Fungible Parameter Estimates in Latent Growth Curve Models

Abstract: Waller (Psychometrika, 2008) described the phenomenon of fungible weights in linear multiple regression. Given a least squares regression analysis involving p ≥ 3 independent variables and yielding a vector of regression coefficients, b, and a squared multiple correlation, R², suppose we then choose a value of the squared multiple correlation, R*², that is slightly smaller than R². Waller showed that it is possible to obtain an infinite number of sets of alternative weights, b*, for the independent variables, each of which yields this slightly suboptimal R*². Waller also demonstrated that some of these sets of alternative weights may be considerably different from the least squares weights and from each other. These alternative vectors of weights are “fungible” in that they all yield the same (suboptimal) R*². MacCallum, Lee, and Browne (2010) examined the same phenomenon in the context of structural equation modeling (SEM). Suppose a specified structural equation model is fit to a sample covariance matrix, S, by maximum likelihood and yields a vector of parameter estimates, θ̂, and a minimized value of the ML discrepancy function, F̂. And suppose we then choose a value of the discrepancy function, F*, that is just slightly larger than F̂. MacCallum et al. showed that if there are two or more parameters being estimated, then there exists an infinite number of different parameter vectors, θ*, each of which yields the discrepancy function value F*. They described and illustrated a computational procedure for producing fungible parameter vectors in SEM. The present work considers fungible parameter estimates in linear latent growth curve models as a special case of SEM. The phenomenon is demonstrated using empirical data, focusing on fungible estimates of fixed and random effects representing aspects of change over time. Implications for theory and practice are discussed.
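In the SEM case the objects involved are standard: with p observed variables, the maximum likelihood discrepancy function is

\[
F_{\mathrm{ML}}(\theta) = \ln\lvert \Sigma(\theta) \rvert - \ln\lvert S \rvert + \mathrm{tr}\{ S\,\Sigma(\theta)^{-1} \} - p,
\]

and the fungible parameter vectors θ* are solutions of F_ML(θ*) = F* for a chosen F* slightly greater than the minimized value F̂, i.e., points on a level surface of the discrepancy function surrounding θ̂.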



Steven MacEachern, Ohio State University, Department of Statistics

Title: Nonparametric Bayesian Modeling of Item Response Curves

Abstract: Item response theory is widely used in educational testing, and it is seeing growing use in many other areas, most notably as a diagnostic tool for health care.  In this work, we describe a nonparametric Bayesian approach to item response theory, extending existing models, developing parallels of core concepts in item response theory, and illustrating the techniques.
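The abstract does not specify the construction, but a typical nonparametric Bayesian extension of the two-parameter item response curve

\[
P_j(\theta) = F\{ a_j(\theta - b_j) \}
\]

replaces one of its parametric ingredients with a flexible prior, for example a Dirichlet process on the latent trait distribution, θ_i ~ G with G ~ DP(α, G₀), or a nonparametric prior on the monotone curve P_j(·) itself. We offer this only as an indication of the model class, not as the specific approach of the talk.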



Roger Millsap, Arizona State University, Department of Psychology

Title: A Simulation Paradigm for Evaluating “Approximate Fit” in Latent Variable Modeling

Abstract: A common practice in latent variable modeling is to evaluate whether a model offers a good approximation to the available data based on global indices of approximate fit, such as the root mean square error of approximation (RMSEA; Steiger & Lind, 1980; Browne & Cudeck, 1993). The tacit premise of this approach is that for some range of values of such an index, it can be concluded that the model, while not perfect, is misspecified to an acceptable degree. One problem with this premise is that the range of values corresponding to an acceptable misspecification is unclear. Recent studies suggest that no single range of values or cutpoint will serve this purpose in all circumstances (e.g., Saris, Satorra, & van der Veld, 2009). As an alternative, a simulation method is described that evaluates the distribution of the fit index under alternative models. The alternative models are chosen to represent “acceptable” misspecifications. The method forces the investigator (1) to be explicit about the definitions of “acceptable” misspecifications, and (2) to base fit decisions on simulation results rather than conventional fit index cutpoints. Some examples of the use of the method are described, and its advantages are discussed.
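A minimal sketch of the simulation logic (our toy illustration, not the author's software): data are generated from an alternative covariance structure representing an "acceptable" misspecification, the hypothesized model is evaluated against each simulated sample, and the resulting RMSEA values form the reference distribution. To keep the sketch self-contained, the hypothesized model here is fully specified, so no estimation step is needed and df equals the number of sample moments.

    import numpy as np

    def f_ml(S, Sigma0):
        # Normal-theory ML discrepancy between a sample covariance matrix S
        # and a (here fully specified) model covariance matrix Sigma0.
        p = S.shape[0]
        _, logdet0 = np.linalg.slogdet(Sigma0)
        _, logdetS = np.linalg.slogdet(S)
        return logdet0 - logdetS + np.trace(S @ np.linalg.inv(Sigma0)) - p

    def rmsea(chisq, df, n):
        # Steiger-Lind root mean square error of approximation.
        return np.sqrt(max(chisq - df, 0.0) / (df * (n - 1)))

    rng = np.random.default_rng(0)
    p, n, reps = 6, 200, 2000
    Sigma0 = np.eye(p)                        # hypothesized model
    Sigma_alt = Sigma0.copy()
    Sigma_alt[0, 1] = Sigma_alt[1, 0] = 0.10  # an "acceptable" misspecification
    df = p * (p + 1) // 2                     # no free parameters in this toy

    ref = []
    for _ in range(reps):
        X = rng.multivariate_normal(np.zeros(p), Sigma_alt, size=n)
        chisq = (n - 1) * f_ml(np.cov(X, rowvar=False), Sigma0)
        ref.append(rmsea(chisq, df, n))

    # Reference distribution of RMSEA under the acceptable misspecification;
    # an observed RMSEA is judged against these quantiles, not a fixed cutpoint.
    print(np.percentile(ref, [5, 50, 95]))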



Irini Moustaki, London School of Economics

Title: Latent variable models: a review of modern estimation methods

Abstract: Full maximum likelihood, limited information, and Bayesian estimation methods have been proposed in the literature for estimating the parameters of latent variable models with categorical and continuous responses. In this talk, we will review the different estimation methods and focus on their pros and cons. The estimation methods will be compared in terms of their computational complexity, the properties of their estimators, and the standard errors and goodness-of-fit tests available. Complex models, such as those for ordinal and longitudinal data, will also be examined. Simulated data will be used to assess the bias and efficiency of the estimators obtained under the different methods for different sample sizes, numbers of items, and numbers of factors.



Alan Nicewander, Pacific Metrics Corporation

Title: Associations Among Observed Variables and IRT Latent Traits

Abstract: Using factor analytic versions of IRT models, and the assumption that the latent variables underlying test items are distributed N(0,1), exact solutions for the correlations between observed and latent variables can be derived. These latent correlations are functions of the IRT item-slope parameters, a_i, and the biserial correlations between external variables and the test items measuring the latent variables. Using a variant of these assumptions, it is further shown that the correlations between pairs of IRT latent traits have an exact solution that involves the IRT item-slope parameters and the tetrachoric correlations between items that measure the latent variables. Applications of these correlations, and comparisons to approximate solutions, are discussed.
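Although the abstract does not give the formulas, under the normal-ogive parameterization with slopes in the normal metric the standard identities consistent with this description are

\[
\lambda_i = \frac{a_i}{\sqrt{1 + a_i^{2}}}, \qquad
\rho(X, \theta) = \frac{\rho_{\mathrm{bis}}(X, y_i)}{\lambda_i}, \qquad
\rho(\theta_1, \theta_2) = \frac{\rho_{\mathrm{tet}}(y_i, y_j)}{\lambda_i\,\lambda_j},
\]

where λ_i is the factor loading of item y_i, X is an external observed variable, and items y_i and y_j measure θ₁ and θ₂ respectively. We present these as a plausible reading of the result, not a quotation of it.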



Joseph Lee Rodgers, University of Oklahoma, Department of Psychology
William H. Beasley, University of Oklahoma, Department of Psychology

Title: Extending the Bootstrap Using a Univariate Sampling Model to Multivariate Settings

Abstract: When the bootstrap is applied to test correlation coefficients, the typical and classic method involves a bivariate sampling frame (e.g., see Diaconis and Efron, 1983, Scientific American). Lee and Rodgers (1998, Psych Methods) proposed a univariate sampling alternative, which produced a distribution around the hypothesized null correlation (a so-called "hypothesis-imposed" test). They demonstrated advantages of robustness, power, and Type I error control in certain circumstances. Beasley et al. (2007, Psych Methods) extended the bootstrap using univariate sampling in two ways. The first involved a transformation that diagonalized the sampling frame to an a priori specified correlation. The second developed the "observed-imposed" version to test correlation hypotheses, in which the target correlation was the observed sample correlation (in contrast to the "hypothesis-imposed" version, in which the target correlation is the hypothesized population correlation). Using Monte Carlo simulation, this method was shown to improve on all other bootstrap methods (and frequently on the parametric test), to be more straightforward conceptually, and to require no bias-correction or acceleration. Beasley (2010, doctoral dissertation) developed and tested a Bayesian version of the Beasley et al. approach.

         The use of a univariate sampling model can be extended to multivariate settings, and will ultimately be broadly useful to the extent that it can be applied in factor analysis, SEM, and other multivariate methods.  To quote Zu and Yuan (2009, MBR, p. 35), "we expect univariate bootstrap to be valuable for mediation analysis after it is extended to path models." 

         In the current presentation, we use the methods developed previously to extend the univariate-sampling approach from testing bivariate correlations to settings involving multivariate correlations and covariances. We begin with a simple conceptual demonstration of the extension to multiple regression settings. We then show how bootstraps based on univariate sampling methods can be used to evaluate exploratory factor analysis results. The method has even stronger conceptual appeal for testing confirmatory factor analysis results, however. The advantages of the method in factor analysis settings are discussed, and a research agenda is defined to evaluate its effectiveness.
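A minimal sketch of the hypothesis-imposed univariate bootstrap for a single correlation, as we understand it from Lee and Rodgers (1998): resampling x and y independently imposes the null of zero correlation, and the observed r is referred to that null distribution. The data and names here are illustrative, not from the talk.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 50
    x = rng.normal(size=n)
    y = 0.4 * x + rng.normal(size=n)      # toy data with a true correlation
    r_obs = np.corrcoef(x, y)[0, 1]

    reps = 10_000
    r_null = np.empty(reps)
    for b in range(reps):
        xb = rng.choice(x, size=n, replace=True)  # resample each variable
        yb = rng.choice(y, size=n, replace=True)  # separately: univariate frame
        r_null[b] = np.corrcoef(xb, yb)[0, 1]

    p_value = np.mean(np.abs(r_null) >= abs(r_obs))  # two-sided bootstrap p
    print(r_obs, p_value)

A bivariate frame would resample (x_i, y_i) pairs and so preserve the observed correlation; resampling each variable separately is what builds the distribution around the null instead.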



Alexander Shapiro, Georgia Institute of Technology, School of Industrial and Systems Engineering

Title: Statistical Inference of Moment/Covariance Structures

Abstract: In this talk we give a general overview of the asymptotic theory of testing moment/covariance structural models. In particular, we discuss the difference between first- and second-order approaches to approximating test statistics, and the implications of boundary conditions.



Krishna Tateneni, Tateneni Consulting LLC
Megan Schiller, Tateneni Consulting LLC

Title:  An Application of Data-Based Components Analysis:  Segmenting Consumers on the Basis of Nutrition Attitudes

Abstract: The confusion between principal components analysis (PC) and factor analysis (FA) is pervasive. In the market research industry, we have observed not only a misunderstanding of the purposes of the two methods, but also that default options in standard statistical software packages frequently lead methodologists to carry out PC and assume they are looking at output from FA. Browne and Tateneni (2009) compare and contrast PC with FA; they also present a new program for Data-Based Components Analysis (DBCA), which is based on a singular value decomposition of the data matrix, with no computation of eigenvalues and eigenvectors of a covariance/correlation matrix. Our goal is to illustrate the use of DBCA in the context of consumer segmentation on the basis of attitudes towards nutrition. This application also allows us to highlight an often-missed aspect of the method: the simultaneous rotation of component loadings and component scores. We hypothesize two components, one consisting of attitudes towards organic foods and the other of attitudes towards meal preparation. To make the demonstration more practical, we also measure various behaviors and profile the segments on these behaviors as well as on basic demographic variables.
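The core computation the abstract attributes to DBCA, components taken directly from the singular value decomposition of the centered data matrix, can be sketched in a few lines (an illustration of the idea, not the Browne and Tateneni program itself):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 8))       # respondents by attitude items (toy data)
    n = X.shape[0]
    Z = X - X.mean(axis=0)              # center the columns of the data matrix

    # The SVD is taken of the data matrix itself; no covariance/correlation
    # matrix is formed and no eigendecomposition is performed.
    U, d, Vt = np.linalg.svd(Z, full_matrices=False)

    k = 2                               # retain two components
    loadings = Vt[:k].T * d[:k] / np.sqrt(n - 1)
    scores = U[:, :k] * np.sqrt(n - 1)  # standardized component scores

    # A rotation T of the loadings can be applied simultaneously to the scores,
    # preserving the fit Z ~ scores @ loadings.T:
    #   loadings_rot = loadings @ T;  scores_rot = scores @ np.linalg.inv(T).T

The last comment is the "often-missed aspect" the abstract highlights: because scores and loadings come from the same decomposition, any rotation of the loadings has a matching counter-rotation of the scores.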



Guangjian Zhang, University of Notre Dame, Department of Psychology

Title: Use of a Sandwich-type Standard Error Estimator in Latent Variable Models

Abstract: Browne (1984) presented a sandwich-type standard error estimator for generalized least squares estimation of latent variable models. The standard error estimator is appropriate for non-normal data and misspecified models. In this talk, we describe how to modify the sandwich-type standard error estimator when manifest variables are ordinal and when manifest variables are time series data. Its implications for model misspecification and the normality assumption will also be discussed.
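For reference, when θ̂ minimizes a generalized least squares discrepancy {s − σ(θ)}′ W {s − σ(θ)}, the sandwich-type asymptotic covariance of the kind Browne (1984) derived has the general form

\[
n\,\mathrm{acov}(\hat{\theta}) = (\Delta' W \Delta)^{-1}\, \Delta' W\, \Gamma\, W \Delta\, (\Delta' W \Delta)^{-1},
\]

where Δ = ∂σ(θ)/∂θ′ and Γ is the asymptotic covariance matrix of √n {s − σ(θ)}. The expression reduces to (Δ′ Γ⁻¹ Δ)⁻¹ when W = Γ⁻¹, which is why the sandwich form remains valid for non-normal data and misspecified models while the simpler form does not.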