InfoVis Evaluation Patterns Wiki

Welcome to the Visualization Evaluation Patterns Wiki
The Visualization Evaluation Patterns Wiki is a shared repository for disseminating, discussing, and deriving patterns for effectively evaluating visualization systems. What is a pattern? Christopher Alexander's seminal book "A Pattern Language" (1977) offers the following definition:


 * Each pattern describes a problem which occurs over and over again in our environment, and then describes the core of the solution to that problem, in such a way that you can use this solution a million times over, without ever doing it the same way twice. (Alexander et al. 1977, page x)

A visualization evaluation pattern, then, is a reusable solution to a commonly occurring problem in InfoVis evaluation. This Wiki contains a collection of such patterns organized by their abstraction level: study (high-level), method (mid-level), or trial (low-level).

''' This Wiki is a work in progress! Please watch this space for updates as we add the evaluation patterns we have derived so far! '''

Purpose
The purpose of the Visualization Evaluation Patterns Wiki is to provide a shared repository of evaluation patterns for visualization. This will achieve the following:
 * Disseminate existing knowledge and experience on visualization evaluation to the broader community;
 * Standardize the naming and vocabulary of established visualization evaluation methods in the field; and
 * Provide a forum for members of the visualization community to contribute new patterns or improve existing ones.

Evaluation Patterns
The evaluation patterns presented on this Wiki are listed below:

Study-Level Patterns

 * Pair Analytics
 * Factor Mining
 * Complementary Studies
 * Pilot Study

Method-Level Patterns

 * Coding Calibration
 * Complementary Participants
 * Deadwood Detector
 * Human Blackbox
 * Paper Baseline

Trial-Level Patterns

 * Luck Control
 * Time-Accuracy Elimination
 * Trial Mining