Complementary Participants

Problem
Many visualization systems are designed for a particular expert user population, but getting access to this population for evaluation purposes is often very difficult. For example, a visual analytics system such as Jigsaw (Stasko et al. 2008) is intended for expert analysts, yet recruiting a sufficient number of actual analysts who are willing to invest the time to help evaluate the system is a significant challenge.

Solution
Run two versions of the evaluation: a smaller version with a small number of expert analysts, and a larger version with non-expert participants drawn from the general population. The tasks and datasets for the two versions can be radically different. Similar to Complementary Studies, the few expert participants retain ecological validity and may offer deep insights into the visualization, whereas the larger pool of general participants provides internal validity and information on human motor, perceptual, and cognitive abilities that are not specific to experts.

Consequences
The Complementary Participants pattern does not entirely remove the need to engage expert participants for an evaluation, but it does ease the burden by radically reducing the number of such participants needed.

Examples
The visual analytics tool Jigsaw (Stasko et al. 2008) has primarily been evaluated with general non-expert participants (often university students), such as in the qualitative evaluation performed by Kang et al. (2009). However, Jigsaw has also been used by professional analysts (although this was not reported in the same paper). Andrews et al. (2010) applied Complementary Participants and Complementary Studies in evaluating their Analyst's Workstation tool, engaging four professional analysts in one study and eight students in another.