Deadwood Detector

Problem
Crowdsourcing participants for studies is a great way to collect large amounts of data quickly and economically (Heer and Bostock 2010, Kosara and Ziemkiewicz 2010). However, many participants are "deadwood": they are interested only in the monetary compensation and do not pay sufficient attention to the experiment tasks.

Solution
Various approaches have been proposed to motivate crowdsourced workers (often called Turkers) and filter out those who do not pay proper attention (Callison-Burch 2009, Downs et al. 2010, Ipeirotis 2010, Rogstadius et al. 2011, Shaw et al. 2011). However, many of these approaches require additional steps (e.g., adding dummy tasks) or threaten the validity of the study (e.g., removing outliers based on task performance).

An effective and universally applicable approach is to measure the randomness of a crowdsourced worker's responses while they complete the tasks. This approach is based on the assumption that deadwood Turkers select responses at random in order to get through the whole experiment quickly, so their responses tend to follow a uniform distribution. Thus, filtering out participants whose responses are effectively random rather than consistent over time (i.e., a goodness-of-fit test against the uniform distribution yields p > threshold) effectively removes deadwood from the collected data.
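
As a concrete illustration, the sketch below applies this idea to categorical responses using a chi-squared goodness-of-fit test against the uniform distribution. The SciPy-based test, the 0.05 threshold, and the helper name looks_like_deadwood are illustrative assumptions rather than part of the original pattern.

    # Minimal sketch: flag a participant whose categorical responses
    # cannot be distinguished from uniform random selection.
    # Assumptions: responses are discrete choices (e.g., multiple-choice
    # answers); the test, threshold, and names are illustrative only.
    from collections import Counter
    from scipy.stats import chisquare

    def looks_like_deadwood(responses, options, threshold=0.05):
        """Return True if the responses look like uniform random picks."""
        counts = [Counter(responses).get(opt, 0) for opt in options]
        # Null hypothesis: the participant picks each option with equal
        # probability. A large p-value means we cannot reject that
        # hypothesis, i.e., the answers are indistinguishable from random.
        _, p_value = chisquare(counts)
        return p_value > threshold

    # A participant who spread 40 answers almost evenly over four options
    # is flagged, while a participant with a clear response pattern is kept.
    print(looks_like_deadwood(["A"] * 11 + ["B"] * 9 + ["C"] * 10 + ["D"] * 10,
                              options=["A", "B", "C", "D"]))  # True  -> filter out
    print(looks_like_deadwood(["A"] * 35 + ["B"] * 3 + ["C"] * 2,
                              options=["A", "B", "C", "D"]))  # False -> keep

In practice the same test could be run per time block to check whether a participant's behavior degrades into random clicking partway through the experiment; the per-participant version above is the simplest form of the check.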

Consequences
By identifying deadwood Turkers and removing their data from the experiment, crowdsourcing-based approaches become a viable option for collecting data from a large number of experimental participants.

Examples
This approach was used in a recent crowdsourced study to eliminate deadwood from collected data (Kim et al. 2012).