The impact of screen size on crowdsourced image classification

Vinod Ahuja, Andrea Wiggins, Shivani Mudhelli

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Screen size has the potential to affect the results of crowdsourced image classification and impact data quality, even for simple tasks like determining whether an animal is present in an image. We evaluated the rates at which volunteers judged images to be blank for six Snapshot Safari citizen science projects on the Zooniverse platform, and found that in all but one project, people using mobile devices were more likely to say an image was blank than those using a desktop computer. Qualitative evaluation further demonstrated that screen brightness can also play a role in reliability for volunteers using mobile devices. These findings can be taken into consideration in the design and testing of apps and platforms for crowdsourced image classification to support reliability and data quality.

Original language: English (US)
Title of host publication: CSCW 2019 Companion - Conference Companion Publication of the 2019 Computer Supported Cooperative Work and Social Computing
Publisher: Association for Computing Machinery
Pages: 127-131
Number of pages: 5
ISBN (Electronic): 9781450366922
DOIs
State: Published - Nov 9 2019
Event: 22nd ACM Conference on Computer-Supported Cooperative Work and Social Computing, CSCW 2019 - Austin, United States
Duration: Nov 9 2019 to Nov 13 2019

Publication series

Name: Proceedings of the ACM Conference on Computer Supported Cooperative Work, CSCW

Conference

Conference: 22nd ACM Conference on Computer-Supported Cooperative Work and Social Computing, CSCW 2019
Country/Territory: United States
City: Austin
Period: 11/9/19 to 11/13/19

ASJC Scopus subject areas

  • Software
  • Computer Networks and Communications
  • Human-Computer Interaction
