
SOCIAL SCIENCES

Examining the S factor in US states

EMIL O. W. KIRKEGAARD

CORRESPONDENCE: [email protected]

DATE RECEIVED: June 10, 2015

© Kirkegaard. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and redistribution in any medium, provided that the original author and source are credited.

Abstract

I analyzed the S factor in US states by compiling a dataset of 25 diverse socioeconomic indicators. Results show that Washington DC is a strong outlier, but if it is excluded, then the S factor correlated strongly with state IQ at .75. The ethnoracial demographics of the states are related to the states' IQ and S in the expected order (White > Hispanic > Black).

Introduction and data sources

In my previous two posts, I analyzed the S factor in 33 Indian states and 31 Chinese regions. In both samples I found strongish S factors, and both correlated positively with cognitive estimates (IQ or G).

In this post I used cognitive data from McDaniel (2006). He gives two sets of estimated IQs, one based on SAT-ACT and one on NAEP. Unfortunately, they only correlate .58, so at least one of them is not a very accurate estimate of general intelligence. His article also reports some correlations between these IQs and socioeconomic variables: gross state product per capita, median income, and percent in poverty. However, the data for these variables are not given in the article, so I did not use them; I am not quite sure where his data came from. Still, with cognitive data like this and the relatively large number of datapoints (50 or 51, depending on whether the District of Columbia is included), it is possible to do a rather good study of the S factor and its correlates. High quality data for US states are readily available, so results should be strong.

Factor analysis requires a case-to-variable ratio of at least 2:1 to deliver reliable results (Zhao, 2009). This means that one can do an S factor analysis with about 25 variables, so I set out to find about 25 diverse socioeconomic variables. There are two reasons to gather a very diverse sample of variables. First, for the method of correlated vectors to work (Jensen, 1998), there must be variation in the indicators' loadings on the factor; lack of variation causes restriction of range problems. Second, lack of diversity in the indicators of a latent variable leads to psychometric sampling error (Jensen, 1994; see my review post here for general intelligence measures).

My primary source was The 2012 Statistical Abstract website. I simply searched for "state" and picked various measures. I tried to pick things that were not too dependent on geography. For example, kilometers of coastline per capita would be a very bad choice, since it is not socioeconomic and is almost entirely determined by geography. To increase reliability, I generally used all data for the last 10 years and averaged them. Curious readers should see the datafile for details. I ended up with the following variables:

1. Murder rate per 100k, 10 years
2. Proportion with high school or more education, 4 years
3. Proportion with bachelor's or more education, 4 years
4. Proportion with advanced degree or more, 4 years
5. Voter turnout, presidential elections, 3 years
6. Voter turnout, House of Representatives, 6 years
7. Percent below poverty, 10 years
8. Personal income per capita, 1 year
9. Percent unemployed, 11 years
10. Internet usage, 1 year
11. Percent smokers, male, 1 year
12. Percent smokers, female, 1 year
13. Physicians per capita, 1 year
14. Nurses per capita, 1 year
15. Percent with health care insurance, 1 year
16. Percent in 'Medicaid Managed Care Enrollment', 1 year
17. Proportion of population urban, 1 year
18. Abortion rate, 5 years
19. Marriage rate, 6 years
20. Divorce rate, 6 years
21. Incarceration rate, 2 years
22. Gini coefficient, 10 years
23. Top 1%, proportion of total income, 10 years
24. Obesity rate, 1 year

Most of these are self-explanatory. For economic inequality, I found 6 different measures (here). Since I wanted diversity, I chose the Gini coefficient and the top 1% share because these correlated the least with each other and are well known.

Aside from the above, I also fetched the racial proportions for each state, to see how they relate to the S factor (and to the various measures above, but to get those, run the analysis yourself).

I used R with RStudio for all analyses. Source code and data are available in the supplementary material.

Missing data

In large analyses like this there are nearly always some missing data. The matrixplot() looks like this:

(It does not seem possible to change the font size, so I have cut off the names at the 8th character.) We see that there are not many missing values. I imputed all the missing values with the VIM package (deterministic imputation using multiple regression).

Extreme values

A useful feature of matrixplot() is that it shows, in greytone, the relative outliers for each variable. We can see that some variables have some hefty outliers, which may be data errors. Therefore, I examined them.

The outlier in the two university degree variables is DC, surely because the government is based there and there is a huge lobbyist center. For the marriage rate, the outlier is Nevada, where many people go to get married. The outliers in the physician and nurse rates are also DC, for the same reason (maybe one could make up some story about how politics causes health problems!). After imputation, the matrixplot() looks like this:
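For readers following along in R, here is a minimal sketch of the missing-data visualization and imputation step. The data frame d and the column name "Murder" are hypothetical; the actual code is in the supplementary material.

```r
library(VIM)  # matrixplot() and model-based imputation functions

# Visualize missingness and relative outliers (greytone) across the indicators,
# sorting by a hypothetical "Murder" column
matrixplot(d, sortby = "Murder")

# irmi() with the noise term disabled gives a deterministic, regression-based
# imputation, approximating the procedure described above
d_imp <- irmi(d, noise = FALSE)
d_imp <- d_imp[, names(d)]  # drop any imputation-flag columns irmi() may append

# Re-plot after imputation to check that the data did not change drastically
matrixplot(d_imp)
```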

It is pretty much the same as before, which means that the imputation did not substantially change the data (good!).

Factor analyzing the data

Then we factor analyze the data (socioeconomic data only). We plot the sorted loadings with a dotplot:
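A minimal sketch of this step with the psych package, assuming the imputed indicators are in d_imp as above (the minres extraction method is mentioned further below; object names are hypothetical):

```r
library(psych)  # fa()

# Extract a single factor (S) by minimum residuals
fa_s <- fa(d_imp, nfactors = 1, fm = "minres")

# Dot plot of the sorted loadings
loads <- sort(fa_s$loadings[, 1])
dotchart(loads, labels = names(loads), xlab = "Loading on S", pch = 19)
```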

We see a wide spread of variable loadings. All but two of them load in the expected direction (socially valued outcomes positive, the opposite negative), showing the existence of the S factor. The 'exceptions' are: the abortion rate, which loads +.60 despite often being seen as a negative thing; this is however open to discussion, since higher abortion rates could be interpreted as less backward religiousness or more freedom for women (both good in my view). The other is the marriage rate at -.19 (a weak loading), which I am not sure how to interpret. In any case, for both of these it is debatable which direction is the desirable one.

Correlations with cognitive measures

And now comes the big question: does state S correlate with our IQ estimates? It does; the correlations are .14 (SAT-ACT) and .43 (NAEP). These are fairly low given our expectations. Perhaps we can work out what is happening if we plot them:
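A sketch of the correlation and plotting step, assuming a hypothetical data frame iq holding the two IQ estimates in the same state order as d_imp, with state names as row names:

```r
# State S scores from the factor analysis above
S <- fa_s$scores[, 1]

# Correlations with the two IQ estimates (column names are hypothetical)
cor(S, iq$SAT_ACT)
cor(S, iq$NAEP)

# Scatter plot of NAEP IQ against S, with state labels to spot outliers such as DC
plot(iq$NAEP, S, xlab = "NAEP IQ", ylab = "S score")
text(iq$NAEP, S, labels = rownames(d_imp), pos = 3, cex = 0.6)
```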

Now we can see what is going on. First, the SAT-ACT estimates are pretty strange for three states:
California, Arizona and Nevada. I note that these are three adjacent states, so it is quite possibly some kind of regional testing practice that is throwing off the estimates. If someone knows, let me know. Second, DC is a huge outlier in S, as we might have expected from the short discussion of extreme values above. It is basically a city state, half composed of low-S (SES) African Americans and half of an upper class connected to the government.

Dealing with outliers: Spearman's correlation, aka rank-order correlation

There are various ways to deal with outliers. One simple way is to convert the data into ranks and correlate those as normal. Pearson's correlation assumes that the data are normally distributed, which is often not the case with higher-level data (states, countries). Using rank-order correlations gets us these:
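A sketch of the rank-order version, using the same hypothetical objects as above:

```r
# Spearman (rank-order) correlations are less sensitive to outliers such as DC
cor(S, iq$SAT_ACT, method = "spearman")
cor(S, iq$NAEP, method = "spearman")
```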

So the correlations improved a lot for the SAT-ACT IQs and a bit for the NAEP ones.

Results without DC

Another idea is simply to exclude the strange DC case and re-run the factor analysis. This procedure gives us these loadings:
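A sketch of this step, assuming (as above) that state names are row names of d_imp and that DC is labeled "District of Columbia":

```r
# Drop DC and re-extract the S factor
d_noDC  <- d_imp[rownames(d_imp) != "District of Columbia", ]
iq_noDC <- iq[rownames(d_imp) != "District of Columbia", ]

fa_noDC <- fa(d_noDC, nfactors = 1, fm = "minres")
S_noDC  <- fa_noDC$scores[, 1]
# Note: the factor may come out sign-flipped (e.g. education loading negatively);
# if so, multiply the loadings and scores by -1 before interpreting them

cor(S_noDC, iq_noDC$SAT_ACT)
cor(S_noDC, iq_noDC$NAEP)
```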

(I have reversed them, because they came out reversed, e.g. with education loading negatively.) These are very similar to before; excluding DC did not substantially change the results (good). Actually, the factor is a bit stronger without DC throwing off the results (using minres, proportion of variance = 36% vs. 30%). The reason is that DC is an odd case, scoring very high on some indicators (e.g. education) and very poorly on others (e.g. murder rate). The correlations are:

So, not surprisingly, we see an increase in the effect sizes from before: .14 to .31 and .43 to .69.

Without DC and rank-order

Still, one may wonder what the results would be with rank-order and DC removed. Like this:

So compared to before, the effect size increased for the SAT-ACT IQ and decreased slightly for the NAEP IQ. Now, one could also do regression with weights based on some metric of state population, and this might further change the results, but I think it is safe to say that the cognitive measures correlate in the expected direction and that, with the removal of one strange case, the better measure performs at about the expected level with or without rank-order correlations.

Method of correlated vectors

The method of correlated vectors (MCV; Jensen, 1998) can be used to test whether a specific latent variable underlying some data is responsible for the observed correlation between the factor score (or a factor score approximation such as IQ, an unweighted sum) and some criterion variable. Altho originally invented for use on cognitive test data and the general intelligence factor, I have previously used it in other areas (e.g. Kirkegaard, 2014). I also used it in the previous post on the S factor in India (but not China, because there was a lack of variation in the loadings of the socioeconomic variables on the S factor). Using the dataset without DC, the MCV result for the NAEP dataset is:
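A sketch of MCV here, using the hypothetical no-DC objects from above: each indicator's loading on S is correlated with that indicator's correlation with NAEP IQ.

```r
# Vector 1: each indicator's loading on the S factor (no-DC analysis)
s_loadings <- fa_noDC$loadings[, 1]

# Vector 2: each indicator's correlation with NAEP IQ across states
r_with_iq <- sapply(d_noDC, function(x) cor(x, iq_noDC$NAEP))

# MCV result: correlation between the two vectors
cor(s_loadings, r_with_iq)
```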

So, again we see that MCV can reach high r's when there is a large number of diverse variables. Note, however, that the value can be considered inflated because of the negative loadings of some variables; it is debatable whether one should reverse them.

Racial proportions of states, S and IQ

A last question is whether the states' racial proportions predict their S score and their IQ estimate. There are several problems with this. First, the actual genomic proportions within these racial groups vary by state (Bryc et al., 2015). Second, even within 'pure-bred' groups, general intelligence varies by state (this was shown in the testing of draftees in the US in WW1). Third, there is an 'other' group that also varies from state to state, presumably comprising different kinds of Asians (Japanese, Chinese, Indians, other Southeast Asians). Fourth, it is unclear how one should combine these proportions into an estimate for correlation analysis, or how to model them. Standard multiple regression is unsuited for handling this kind of data with a perfect linear dependency, i.e. where the proportions must sum to 1 (100%); MR assumes that the 'independent' variables are, in fact, independent of each other. Surely some method exists that can handle this problem, but I am not familiar with it. Given the four problems above, one should not expect near-perfect results, but one would probably expect most relationships to go in the right direction and to be of non-trivial size.

Perhaps the simplest way of analyzing it is with correlations. These are susceptible to confounding when e.g. white% correlates differentially with the other racial proportions. However, they should get the basic directions right, if not the ordering of the effect sizes.

Racial proportions, NAEP IQ and S

For this analysis I use only the NAEP IQs and exclude DC, as I believe this is the best subdataset to rely on. I correlate each racial proportion with the NAEP IQ and with the S factor. The results are:

Racial group   White   Black   Hispanic   Other
NAEP IQ         0.69   -0.50    -0.38     -0.26
S               0.18   -0.42    -0.08      0.20

For NAEP IQ, depending on what one thinks of the 'other' category, these have either exactly or roughly the order one expects: W>O>H>B. If one thinks 'other' is mostly East Asian (Japanese, Chinese, Korean) with higher cognitive ability than Europeans, one would expect O>W>H>B. For S, however, the order is O>W>H>B, and the effect sizes are much weaker. In general, given the limitations above, these are perhaps reasonable, if somewhat on the weak side for S.

Estimating state IQ from racial proportions using racial IQs

One way to utilize all four variables (white, black, hispanic and other) without having MR assign them weights is to weight them by known group IQs and then calculate a mean estimated IQ for each state. Depending on which estimates for the group IQs one accepts, one might use something like the following:

State IQ est. = White*100 + Other*100 + Black*85 + Hispanic*90

Or, if one thinks 'other' is somewhat higher than whites (this is not entirely unreasonable, but recall that the NAEP includes reading tests, which foreigners and Asians perform less well on), one might want to use 105 for the other group (#2). Or one might want to raise the black and hispanic IQs a bit, perhaps to 88 and 93 (#3). Or do both (#4). I did all of these variations (a sketch of the computation follows the table below), and the results are:

Variable   Race.IQ   Race.IQ2   Race.IQ3   Race.IQ4
Race.IQ      1         0.96       1          0.93
Race.IQ2     0.96      1          0.96       0.99
Race.IQ3     1         0.96       1          0.94
Race.IQ4     0.93      0.99       0.94       1
NAEP IQ      0.67      0.56       0.67       0.51
S            0.41      0.44       0.42       0.45
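A sketch of the computation for variant #1, assuming a hypothetical data frame races holding each state's racial proportions (summing to 1), DC excluded and in the same order as the objects above; the other variants only change the weights:

```r
# Weighted racial-IQ estimate per state (variant #1: White 100, Other 100,
# Black 85, Hispanic 90)
race_iq1 <- races$White * 100 + races$Other * 100 +
            races$Black * 85  + races$Hispanic * 90

# Relation to the NAEP IQ estimate and to S
cor(race_iq1, iq_noDC$NAEP)
cor(race_iq1, S_noDC)
```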

As far as I can tell, there is no strong reason to pick any one of these over the others. However, what we learn is that the correlation between the racial IQ estimate and the NAEP IQ estimate is somewhere between .51 and .67, and that between the racial IQ estimate and S is somewhere between .41 and .45. I think these are reasonable results given the problems of this analysis described above.

Added March 11: New NAEP data

I came across a series of posts by science blogger The Audacious Epigone, who has also estimated IQs based on NAEP data. He has done this three times (for 2013, 2009 and 2005 data), so along with McDaniel's estimates, this gives us 4 non-identical estimates. First, we check their intercorrelations,
which should be very high, r > .9, for this kind of data. Second, we extract the general factor and use it as the best estimate of NAEP IQ for the states (I deleted DC again). Third, we see how all 5 variables relate to S from before. Results:

            NAEP.IQ.13   NAEP.IQ.09   NAEP.IQ.05   NAEP M.   NAEP.1
NAEP.IQ.09     0.96
NAEP.IQ.05     0.83         0.89
NAEP M.        0.88         0.93         0.96
NAEP.1         0.95         0.99         0.95        0.97
S              0.81         0.76         0.64        0.69      0.75

(NAEP.1 in the table is the general NAEP factor extracted in the second step.)
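A sketch of that extraction step, assuming a hypothetical data frame naep_est holding the four NAEP-based IQ estimates as columns, DC removed:

```r
library(psych)

# One common factor across the four NAEP-based IQ estimates
naep_fa <- fa(naep_est, nfactors = 1, fm = "minres")

naep_fa$loadings              # how strongly each estimate loads on the factor
naep_g <- naep_fa$scores[, 1] # general NAEP factor scores per state

cor(naep_g, S_noDC)           # relation of the general NAEP factor to S
```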

We see that the intercorrelations between the NAEP estimates are not that high; they average only .86. Their loadings on the common factor are very high tho, .95 to .99. Still, this should result in improved results due to reduced measurement error. And it does: NAEP IQ x S is now .75, up from .69.

Scatter plot

Supplementary material

Data files and R source code are available on the Open Science Framework repository.

References

Bryc, K., Durand, E. Y., Macpherson, J. M., Reich, D., & Mountain, J. L. (2015). The genetic ancestry of African Americans, Latinos, and European Americans across the United States. The American Journal of Human Genetics, 96(1), 37-53.

Jensen, A. R., & Weng, L. J. (1994). What is a good g? Intelligence, 18(3), 231-258.

Jensen, A. R. (1998). The g factor: The science of mental ability. Westport, CT: Praeger.

Kirkegaard, E. O. W. (2014). The international general socioeconomic factor: Factor analyzing international rankings. Open Differential Psychology.

McDaniel, M. A. (2006). State preferences for the ACT versus SAT complicates inferences about SAT-derived state IQ estimates: A comment on Kanazawa (2006). Intelligence, 34(6), 601-606.

Zhao, N. (2009). The minimum sample size in factor analysis. Encorewiki.org.
