Micro Perceptual Human Computation for Visual Tasks

Yotam Gingold, Ariel Shamir, Daniel Cohen-Or
ACM Transactions on Graphics (TOG). Presented at SIGGRAPH 2012.

Paper: PDF (11M) | PDF (4M)
SIGGRAPH presentation: keynote (14M) | PDF (12M) | PDF with notes (12M)
Longer presentation: keynote (13M) | PDF (11M) | PDF with notes (11M)
Input data: images and segmentations (1M, zipped)

The user interacts with an application whose code runs on both electronic processors and human processors.


Human computation (HC) utilizes humans to solve problems or carry out tasks that are hard for pure computational algorithms. Many graphics and vision problems involve such tasks. Previous HC approaches have mainly focused on generating data in batch, either to gather benchmarks or to conduct surveys demanding non-trivial interaction. We advocate a tighter integration of human computation into online, interactive algorithms. We aim to distill the differences between humans and computers and to maximize the advantages of both in one algorithm. Our key idea is to decompose such a problem into a massive number of very simple, carefully designed, human micro-tasks that are based on perception, and whose answers can be combined algorithmically to solve the original problem. Our approach is inspired by previous work on micro-tasks and perception experiments. We present three specific examples of the design of Micro Perceptual Human Computation algorithms: extracting depth layers and image normals from a single photograph, and augmenting an image with high-level semantic information such as symmetry.
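To illustrate the key idea of combining micro-task answers algorithmically, here is a hedged sketch in Python. It is not the paper's actual pipeline; it assumes a hypothetical depth-layering setting in which each micro-task asks a worker "which of two image regions is closer?", then majority-votes each pair and topologically sorts the resulting closer-than relation into a global ordering.

```python
# Hypothetical sketch, not the authors' exact method: aggregate answers to
# pairwise "which region is closer?" micro-tasks into a global depth order.
from collections import Counter, defaultdict

def depth_order(votes):
    """votes: list of (a, b, winner) triples, where winner is a or b.

    Returns region names ordered from closest to farthest, assuming the
    majority answers form an acyclic closer-than relation.
    """
    # Majority vote per unordered pair of regions.
    tallies = defaultdict(Counter)
    for a, b, winner in votes:
        tallies[frozenset((a, b))][winner] += 1

    # Build directed "closer -> farther" edges from the majority winners.
    edges = defaultdict(set)
    nodes = set()
    for pair, counts in tallies.items():
        a, b = tuple(pair)
        nodes |= {a, b}
        closer = counts.most_common(1)[0][0]
        edges[closer].add(b if closer == a else a)

    # Depth-first topological sort; reversed post-order puts closest first.
    order, visited = [], set()
    def visit(n):
        if n in visited:
            return
        visited.add(n)
        for m in edges[n]:
            visit(m)
        order.append(n)
    for n in sorted(nodes):
        visit(n)
    return order[::-1]
```

For example, three workers' answers over regions "sky", "tree", and "ground" would be combined as `depth_order([("sky", "tree", "tree"), ("tree", "ground", "ground"), ("sky", "ground", "ground")])`, yielding the layering `["ground", "tree", "sky"]`. The design point the abstract makes is visible here: each micro-task is trivially simple and perceptual, while the global structure emerges only from the algorithmic combination step.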


@article{Gingold:2012:MPHC,
 author    = {Yotam Gingold and Ariel Shamir and Daniel Cohen-Or},
 title     = {Micro Perceptual Human Computation for Visual Tasks},
 journal   = {ACM Transactions on Graphics (TOG)},
 volume    = {31},
 number    = {5},
 pages     = {119:1--119:12},
 articleno = {119},
 numpages  = {12},
 doi       = {10.1145/2231816.2231817},
 year      = {2012},
 month     = aug,
 publisher = {ACM Press},
 address   = {New York, NY, USA}
}