At the Public Insight Lab, we practice what we preach. So before releasing our website, we decided to ask the “crowd” for feedback. Specifically, we paid workers on Amazon’s Mechanical Turk (MTurk) to review our site and give us feedback.
Not surprisingly, we landed some important insights. For one, we needed to provide more detailed contact information. “Whenever I go to a site, I always want the contact info at the top or bottom of the home page. I assume most people do. It just makes it easier,” one Turker told us. (As you’ll see, we took their advice.)
Another Turker argued that we needed to mention some of our clients. “Possibly some other organizations or projects they have worked on. I don't want to feel they are using their clients to prove themselves, but an example or two might be nice,” someone noted. (Again, you’ll notice that we took their advice.)
Overall, though, people were positive. “I like that there isn't a huge amount of text and that it clearly explains what they do. The blog is a nice touch as well,” one Turker said. “I like the fact that the potential price for services is upfront, mentioned on the very first page,” said another Turker.
This review was cheap: just $13.98. It took less than a day to get the feedback, and so we released the site knowing that we’re delivering quality. After all, the crowd had already weighed in.
Several studies have shown that MTurk produces high-quality research results, and in many cases the platform produces data of higher quality than more traditional methods.
MTurk has been shown, for instance, to be a powerful tool for producing nationally representative results. A Northwestern study published in 2016 compared the results of 20 large political and sociological surveys, each of which was administered to both an MTurk sample and a nationally representative, population-based sample.
After reviewing the data, the Northwestern researchers found that the results from MTurk and the other data sources were extremely similar for nearly all surveys, including overall demographics, the direction of results, and statistical significance.
In some fields, MTurk is actually a better source of research data. A University of Oxford study from 2011 argued that MTurk was a better source of data than many other currently used sources for research. Among other things, the researchers argued that MTurk participants were far more diverse than the American college samples typically used in psychological research.
According to researchers, substantial psychological testing of MTurk participants has shown that MTurkers respond similarly in experiments to other participants. More precisely, Turkers exhibit classic psychological effects much as in-lab participants do, including the heuristics, reasoning errors, and decision biases common in everyday life. This research also shows that they attend to detail and follow directions at rates similar to participants from traditional recruitment sources.
Even the highest-ranking social science journals regularly publish studies based on MTurk data. Over the past few years, for instance, dozens, if not hundreds, of studies relying on MTurk data have been published in high-ranking journals like the American Sociological Review, Law & Society Review, and Psychological Science.
To be sure, MTurk has its issues. In the geeky parlance of researchers, it provides a “convenience sample,” and to extrapolate findings, researchers should weight results to make them more representative. There are also some biases in the population: people using the site tend to be whiter and younger than the nation as a whole, according to researchers.
But in the end, it’s clear that sites like MTurk are the future of insight research because they’re fast, inexpensive, and high-quality. Or as one recent paper concluded after a long review of the evidence, “MTurk is a fast and cost-effective way to collect nonprobability samples that are more diverse than those typically used by psychologists.”
By Joe McFall and Ulrich Boser
When it comes to surveys, people often ask me: How big of a sample do I need?
It's a hard question to answer, and there are several key things to keep in mind. For one, the larger the sample, the more likely it is to be representative of the target population, and representativeness is what gives a study "external validity," the ability to generalize its findings beyond the sample. However, it's not that easy.
Just because a sample is large doesn't make it representative. Therefore, size alone will not answer the question of whether your 20 or 20,000 respondents are nationally representative. There's nothing magical about 20 or 20,000 except that they're prettier numbers than 19,787 (at least to most people).
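To see what size does and doesn't buy you, here's a quick sketch. The standard margin-of-error formula below assumes a simple random sample, which a convenience sample is not, so it captures only sampling error, not bias:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion, assuming a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (20, 400, 20000):
    print(f"n={n}: +/- {margin_of_error(n):.1%}")
```

Going from 20 to 20,000 respondents shrinks the margin of error from roughly 22% to under 1%, but the formula says nothing about the systematic bias of a non-random sample, which no number of extra respondents removes.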
The only way a sample is truly representative is if it contains the same proportions and combinations of all the millions of measured and unmeasured variables in the target population. The best way to ensure that is to have a large, random probability sample, which MTurk and other crowd-sourced platforms do not directly provide.
The issue is that crowdsourcing yields a convenience sample. The people on crowdsourcing platforms will not include people who don't like to do research tasks, people with lots of money, people with poor tech skills, people who don't use Amazon products, and so on.
However, that doesn't mean that crowdsourced data cannot be nationally representative. You have to compare the sample characteristics to the national population parameters to see whether the sample appears representative and, if not, weight the sample accordingly. In technical terms, this is building a "sampling model." It can't guarantee representativeness (because of the convenience sampling), but it gets you close enough that, hopefully, the remaining differences are insignificant.
Now the trick is to determine which sample characteristics to check against the population, or a "proximal similarity model." The population is unfortunately unknown, but the U.S. census provides good-enough data (for most research questions; certainly not for questions about undocumented residents) to make comparisons. So, get the race, gender, and age distributions from the census, compare them to your 20 or 20,000 respondents, and then weight your sample to match the census data.
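As a sketch of that weighting step, suppose we compare a sample's age distribution to census shares. Each respondent's weight is the ratio of the target share to the sample share for their group, a simple post-stratification scheme. The figures below are illustrative, not actual census numbers:

```python
# Target (census-like) and observed sample proportions by age group.
# All numbers here are made up for illustration.
census = {"18-29": 0.21, "30-49": 0.33, "50-64": 0.25, "65+": 0.21}
sample = {"18-29": 0.40, "30-49": 0.35, "50-64": 0.18, "65+": 0.07}

# Weight = target share / sample share: underrepresented groups count
# for more than 1, overrepresented groups for less than 1.
weights = {group: census[group] / sample[group] for group in census}

for group, w in weights.items():
    print(f"{group}: weight {w:.2f}")
```

Here the youngest group, overrepresented in the sample, gets a weight around 0.5, while the 65+ group, badly underrepresented, gets a weight of 3, so each older respondent stands in for three.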
In the end, I recommend that you get as many respondents as you can, then try to supplement by oversampling people who are less represented in your current sample. So, if you have no one from West Virginia, restrict some of your recruitment to that state. Or, if you don't have many Asians, try to recruit more. That way, you modify the "gradient of similarity" to improve your external validity.
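One way to decide which groups to oversample is to compare each group's share of the sample against its target share and flag big shortfalls. A rough sketch with made-up regional data (the 20% threshold is arbitrary):

```python
from collections import Counter

# Illustrative target shares and a fake batch of 100 responses.
target = {"Northeast": 0.17, "Midwest": 0.21, "South": 0.38, "West": 0.24}
responses = ["South"] * 50 + ["West"] * 30 + ["Northeast"] * 15 + ["Midwest"] * 5

counts = Counter(responses)
n = len(responses)
for region, share in target.items():
    observed = counts[region] / n
    if observed < 0.8 * share:  # more than 20% short of target
        print(f"Oversample {region}: have {observed:.2f}, want {share:.2f}")
```

Run on this fake batch, only the Midwest falls more than 20% short of its target, so that's where the next round of recruitment would be focused.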
Like in so many things, whether it's beer or life experiences, quality matters more than quantity.
By Joe McFall
In recent years, a growing number of academics have been using crowdsourced platforms to do market and other forms of research. Indeed, over 1,500 papers have been published using services like MTurk, Click Worker, CrowdFlower, e-Rewards, and more.
But to our knowledge, there's no company that's leveraged crowdsourced platforms exclusively for marketing research, and so Public Insight Labs was born. We aim to take advantage of this gap in the marketplace to provide custom insight solutions that are representative of real consumers throughout the U.S. and abroad.
Some firms may criticize the "convenience sample" nature of crowdsourced data. However, we apply the same analytical approaches used by top researchers to ensure that the data we obtain is representative of the target group. For example, we can use sample weighting to match crowdsourced data to the demographic profiles of your market segment (e.g., race, ethnicity, age, sex, socioeconomic status, education level, or any other characteristic).
In fact, recent research across a variety of disciplines (health care, business, psychology, sociology, criminal justice, etc.) shows that services like MTurk provide high-quality public opinion data comparable to traditional survey and polling approaches, especially when proper methodological and analytical procedures are used.
We have expertise in survey and poll design, methodological training to improve measurement quality and response rate, and advanced statistical and analytical skills to provide the most accurate results. Add some clear graphs and figures, and professional insights become yours.
By Joe McFall