Call for Papers

Crowdsourcing has received tremendous attention from industry and academia in recent years, yet many of its behaviors, applications, and issues remain open research questions. Such issues include quality management; crowdsourcing under resource constraints (e.g., cost/budget, time, and energy, especially on mobile devices); contextual adaptation to the crowd; flexible management; privacy-utility and exploration-exploitation trade-offs; sensitivity to human behavior; and the game-theoretic design and engineering of multiagent crowdsourcing mechanisms that achieve particular high-level goals.

The primary goal of this workshop is to synthesize research on crowdsourcing and multiagent systems. We will pursue this goal by bringing together researchers from a broad community to discuss and disseminate ideas for agent-based modeling, analysis, and evaluation of crowdsourcing systems. A variety of views on crowdsourcing has emerged across research communities, but so far there has been little effort to analyze crowdsourcing environments as multiagent systems. This workshop aims to provide such a forum, creating the time and engagement required to subject these different views to rigorous discussion. We expect the workshop to result in a set of short papers that clearly argue positions on these issues; these papers will serve as a base resource for consolidating research in both fields and moving it forward. Further, we expect the workshop discussions to yield basic specifications for metrics, benchmarks, and evaluation approaches that the wider community can then consider.

We invite submissions of short papers that motivate and discuss the overlap between the research fields of multiagent systems and crowdsourcing. We encourage submissions that identify and clearly articulate problems in modeling crowdsourcing systems with a multiagent approach, or that present algorithms for improving the crowdsourcing process through agent-based models. We welcome early work, and we particularly encourage visionary position papers that suggest directions for improving the validity of evaluations and benchmarks. Topics include, but are not limited to:

  • Agent-based modeling methods using existing simulation tools
  • Engineering of multiagent crowdsourcing systems
  • Game-theoretic design of crowdsourcing systems
  • Cost-quality-budget, privacy-utility, and exploration-exploitation trade-offs in crowdsourcing
  • Benchmarking tools for comparing crowdsourcing platforms or services
  • Truthful and strategy-proof incentive mechanisms in crowdsourcing systems
  • Utility-based models for requester and worker agents
  • Definitions of equilibrium in crowdsourcing environments
  • Datasets with detailed spatiotemporal characterization of crowd workers
  • Domain- or application-specific datasets for the evaluation of crowdsourcing techniques
  • Generalized metrics for task aggregation methods in crowdsourcing
  • Generalized metrics for task assignment techniques in crowdsourcing
  • Evaluation methods for online algorithms using offline-collected data
  • Simulation methodologies for testing crowdsourcing algorithms

Each submitted paper should focus on one dimension of crowdsourcing as a multiagent system. We encourage multiple submissions per author, each articulating a distinct topic for discussion at the workshop. Papers are welcome to argue the merits of an approach or problem already published in earlier work by the author (or anyone else). Papers should clearly identify the analytical and practical aspects of evaluation methods and their specificity in terms of crowdsourcing tasks, application domains, and/or platform type. During the workshop, papers will be grouped into tracks, each elaborating on a particularly critical area that merits further work and study. Continuing the discussion beyond the workshop, organizers and participants will co-author a summary paper articulating a roadmap of important challenges and approaches for our community to pursue.

Important Dates

  • Paper submissions: 7 February 2017
  • Paper notifications: 2 March 2017
  • Camera-ready submissions: 17 March 2017
  • Workshop date: 8 May 2017

Submission Guidelines & Publication

Submitted papers must be original contributions that are unpublished and not currently under consideration for publication elsewhere. Participants should submit a paper (maximum 15 pages) describing their work on one or more of the topics relevant to the workshop. Alternatively, participants may submit a shorter paper (maximum 8 pages) presenting a research statement or perspective on topics relevant to the workshop. Accepted papers will be presented during the workshop and published in the workshop proceedings. Authors are requested to prepare their papers following the LNCS Springer instructions found at http://www.springer.de/comp/lncs/authors.html. All submissions are made via the workshop website and should include the name(s), affiliations, and email addresses of all authors. We welcome the submission of papers rejected from the AAMAS 2017 technical program. The deadline for receipt of submissions is 7 February 2017; papers received after this date may not be reviewed. Each submission will be reviewed by at least two reviewers for relevance, originality, significance, validity, and clarity.

Visionary and Best Papers

The most visionary paper will be published by Springer in a book in the Lecture Notes in Artificial Intelligence (LNAI) Hot Topics series, compiling the most visionary paper selected from each AAMAS 2017 workshop. Additionally, the best paper will be published by Springer in a book in the Communications in Computer and Information Science (CCIS) series, compiling the best paper selected from each AAMAS 2017 workshop. Authors of the selected most visionary paper and best paper are expected to provide their LaTeX source files promptly upon request.