Social Science Standards


I've been looking at some empirical work that's been done gauging people's attitudes about race. I've been taking these studies to be empirically sound and up to social science standards. As I'm going through the comments on my chapter dealing with this, I'm seeing some worry that my arguments rely on surveys that aren't up to social science standards because they're not representative enough. I have no idea how to evaluate such a charge.

What exactly does a study need to do to meet social science standards? One study I'm looking at was an internet survey of 449 people. The initial contacts responded to ads on a university campus and were asked to invite other people they knew to participate, and the authors say these invitations allowed the sample to mushroom into a more diverse group. Ages ranged from 18 to 82, with a mean of 35 (SD = 13.38). Respondents self-identified racially as follows: 64% European American, 14% African American, 9% Latino/a, 5% Asian American, 3% Biracial/Multiracial, 0.2% Native American/Alaskan, and 4% Other or None of the Above. For educational background, 3% had just high school, 23% some college, 23% were college grads, 15% had some graduate school, and 36% had completed graduate school. Geographically, 29% were from the Midwest, 24% the West, 17% the Midatlantic, 18% the South, 7% New England, and 5% the Southwest.

The full text of the study is here.

Can anyone with a background in social science give me a sense of whether it's fair to charge studies like this one with not being representative enough, and therefore not up to social science standards? The only problems I can detect are some geographical skewing and a heavy emphasis on people with higher education backgrounds. The authors acknowledge the latter, checked whether responses differed significantly from one education level to another, and found no problems there. Is the sample size large enough? I don't have a sense of how these things are supposed to be done.


I work in marketing research, and that sample size would definitely be considered adequate for our purposes. One project I work on for the federal government has a sample size in that vicinity, and they have no problem using the results in their decision-making process.

Was the skewing corrected for by weighting? The data are heavily skewed on education, so (in my opinion) you would want them weighted if you want to claim they're representative of the overall American population. If they were weighted, you would also want to know the effective sample size: weighting increases the variance in your data, which means you need to account for it when doing statistical tests. I've seen weighting cut the effective sample size down to 1/6 of the original in some cases. If the data are weighted and the effective sample size falls under 100, I'd treat the results with real caution.
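To make the effective-sample-size point concrete, here's a minimal sketch of Kish's approximation, using the study's reported education percentages and a made-up set of population shares (the population figures below are purely illustrative, not census numbers):

```python
import numpy as np

# Education shares reported in the study (HS only, some college,
# college grad, some grad school, completed grad school).
sample_shares = np.array([0.03, 0.23, 0.23, 0.15, 0.36])

# Illustrative population shares -- assumed for this example, NOT real data.
population_shares = np.array([0.30, 0.25, 0.20, 0.13, 0.12])

n = 449
counts = np.round(sample_shares * n).astype(int)

# Post-stratification weight for each respondent: population share over
# sample share, so over-represented groups get down-weighted.
weights = np.repeat(population_shares / sample_shares, counts)

# Kish's approximation: n_eff = (sum of weights)^2 / (sum of squared weights)
n_eff = weights.sum() ** 2 / (weights ** 2).sum()
print(f"nominal n = {len(weights)}, effective n = {n_eff:.0f}")
```

With these invented population shares, the nominal sample of roughly 449 shrinks to an effective sample of around 125, which is exactly why the variance inflation from weighting matters for statistical tests.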

I don't know if they did any weighting. They did indicate that they were aware of the educational skewing and looked at the answers from different educational categories, and they found no statistically significant differences worth worrying about.

As I think about it more, I understand a bit where they're coming from. Having people recruit their friends to take a survey isn't a very random method of selection, so it's not surprising that there's a bias in the education responses. And if the sample is biased there, you wonder what other biases there could be across the board (what if some particularly open-minded or closed-minded individual recruits tons of friends?).

This type of thing isn't so uncommon in social science research, because better methods are often cost-prohibitive and may require hiring a firm like the one I work for.

Now, with that said, while I would utilize caution, I certainly wouldn't say that the data shouldn't be used.

Hi Jeremy,
I'm a doctoral student in Communication Studies, specializing in interpersonal communication with a strong interest in interracial communication, so I'm familiar with a bit of the race literature in our field. I'd echo the sentiment that by social science standards, the sample size is quite good.

Given the relatively high level of educational attainment (especially the 36% who completed graduate school!), this sample is certainly a bit skewed, but that's often the case with snowball sampling. Nevertheless, snowball sampling is often a good way to recruit for this type of study. The challenge with this kind of research (common in the social sciences) is that while random assignment is often held up as the gold standard, there's simply no way to randomly assign participants to racial or educational categories. The ideal method would probably involve some form of stratified random sampling, but that requires massive resources that very few researchers have access to.
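As a rough sketch of what proportional stratified sampling would look like on the education variable alone (the population shares here are invented for illustration, not actual figures):

```python
# Illustrative population shares by education level -- assumed values,
# not real census data.
strata = {
    "high school only": 0.30,
    "some college": 0.25,
    "college grad": 0.20,
    "some grad school": 0.13,
    "completed grad school": 0.12,
}

n = 449  # same target size as the study

# Proportional allocation: each stratum's quota mirrors its population
# share, making the sample representative on this variable by design.
quotas = {level: round(share * n) for level, share in strata.items()}

print(quotas)
print("total:", sum(quotas.values()))  # 449
```

The hard part isn't this arithmetic; it's filling each quota with randomly selected respondents, which is what demands the sampling frame and budget that few researchers have.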

Also, though the authors reported a significant MANOVA test of the race-conception scales as a function of education level (p. 27), MANOVA as a statistical test tends to be oversensitive: it is prone to find significant results too easily, even when they don't make a substantive difference. The fact that the univariate tests (apparently ANOVAs, which are more stringent in this case) for the individual race-conception scales as a function of education were all non-significant supports the contention that education did not unduly influence results on this variable.
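For what it's worth, here's a toy version of those univariate follow-up tests run on simulated data (the scale names, score distributions, and group sizes below are assumptions for illustration, not the study's actual data):

```python
import numpy as np
from scipy.stats import f_oneway  # one-way ANOVA

rng = np.random.default_rng(42)

# Education-group sizes implied by the study's percentages (n = 449).
group_sizes = [13, 103, 103, 67, 162]

# Hypothetical race-conception scales -- names invented for the example.
for scale in ["biological", "cultural", "experiential"]:
    # Simulate scores that do NOT depend on education (a true null effect).
    groups = [rng.normal(loc=4.0, scale=1.0, size=size) for size in group_sizes]
    f_stat, p_val = f_oneway(*groups)
    print(f"{scale}: F = {f_stat:.2f}, p = {p_val:.3f}")
```

Running each scale through its own one-way ANOVA like this is the kind of check the authors describe: if the per-scale tests come back non-significant, the omnibus MANOVA result is less worrying.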

In short, I'd say that while a better sample could potentially be collected (which is the case with *any* study), this one appears to be nothing to sneeze at and would certainly pass muster in my field.

Hope this helps!
