HumBL@WWW2019


The Third International Workshop on Augmenting Intelligence with Bias-Aware Humans-in-the-Loop
co-located with TheWebConf (WWW2019)
(HumL workshop series)

Human-in-the-loop is a model of interaction in which a machine process and one or more humans interact iteratively. In this paradigm the user can strongly influence the outcome of the process by providing feedback to the system, and also has the opportunity to gain different perspectives on the underlying domain and to understand the step-by-step machine process that leads to a certain outcome. Among the major current concerns in Artificial Intelligence research are the ability to explain and understand results, and the need to avoid bias in the underlying data that might lead to unfair or unethical conclusions. Computers are typically fast and accurate at processing vast amounts of data; people are creative and bring in their own perspectives and interpretation power. Bringing humans and machines together creates a natural symbiosis for the accurate interpretation of data.

Crowdsourcing has become a successful method for obtaining the human computation needed to augment algorithms and perform high-quality data management. Humans, though, have various cognitive biases that influence the way they interpret statements, make decisions and remember information. If we use crowdsourcing to generate ground truth, it is important to identify the biases present among crowd contributors and to analyze the effects those biases may produce. At the same time, having access to a potentially large number of people gives us the opportunity to address the biases in existing data and systems.

The goal of this workshop is to bring together researchers and practitioners in various areas of AI (e.g., Machine Learning, NLP, Computational Advertising) to explore new pathways for the human-in-the-loop paradigm. We aim both to analyze the biases present in crowdsourcing and to explore methods for managing bias via crowdsourcing. We would like to discuss different types of bias, measures and methods to track bias, and methodologies to prevent and mitigate it. The workshop will provide a framework for discussion among scholars, practitioners and other interested parties, including crowd workers, requesters and crowdsourcing platform managers.

Joint Keynote Series

This year, for the first time, the HumBL workshop will run a joint keynote series in collaboration with the SAD workshop. The joint series will feature five keynotes from speakers across industry and academia:


Keynotes on May 13 @ HumBL2019: Jon Chamberlain (University of Essex) and Anurag Batra (Google AI)


Keynotes on May 14 @ SAD2019: Brad Klingenberg (Stitch Fix) and Maria Stone (Apple)

Anurag Batra
Google AI
Share your world: Partnering with global communities to build smarter, more inclusive ML
Building ML-based models that work equally well for diverse global populations may be a daunting challenge. However, global diversity also offers an opportunity. We’ll talk about how crowdsourcing with global communities facilitates the creation of high-quality data sets to solve problems of visual or linguistic understanding, specialized knowledge or subjective opinion. We’ll show examples of how subjectivity plays a role in unpredictable ways, and talk about techniques to counter (or leverage) that.
Bio: Anurag Batra is a Product Manager with Google AI, focusing on partnering with global communities of people to bring diversity and inclusion to ML training data. Most recently, his team used crowdsourcing to release Open Images Extended, a data set that aims to bridge diversity gaps in the larger Open Images data set. When not mulling over data in ML, you can find Anurag biking the great outdoors with his sons aged 9 and 11.


Jon Chamberlain
University of Essex
Are two heads better than one? An exploration of ambiguity in crowd-collected language decisions from the Phrase Detectives game.
The online game-with-a-purpose Phrase Detectives has been collecting decisions about anaphoric coreference in human language for over 10 years (4 million judgements from 40,000 players). Unlike most crowd systems, the game collects multiple valid solutions for a single task, which complicates aggregation through traditional statistical methods. Analysis of the ambiguous decisions that players make highlights the need for understanding and resolving disagreement that is inherent in language interpretation. This talk will present some of the interesting cases of ambiguity found by the players of Phrase Detectives and propose methods for harnessing crowds that disagree with each other.
Bio: Dr Jon Chamberlain is a lecturer and applied research scientist based at the University of Essex, England. He has been the lead developer behind the Phrase Detectives game-with-a-purpose for over 10 years and is co-investigator on the Disagreements and Language Interpretation (DALI) project that builds on early work to understand ambiguity in human language.


Brad Klingenberg
Stitch Fix
Humans, machines and disagreement: lessons from production
Bio: Brad Klingenberg is the VP of Algorithms at Stitch Fix, an online personal styling service that commits to its recommendations by physically delivering inventory to clients. Brad and his team use statistics, machine learning and human-in-the-loop algorithms to optimize the Stitch Fix client experience, the management of inventory and the selection of items for clients. Prior to joining Stitch Fix, Brad received his PhD in Statistics from Stanford University and worked as a data scientist in technology and financial services.


Maria Stone
Apple
What we talk about when we talk about crowdsourcing
Bio: Maria Stone is a veteran of the search industry, having worked at AltaVista, Google, Yahoo, and Microsoft. She started the UX Research team at Google, leading it from its inception in 2001 until 2008, and then worked as a Data Scientist at Yahoo, Microsoft and Apple. Prior to her work in industry, she was an academic researcher and lecturer focused on studying human memory and attention. She holds a Ph.D. in Cognitive Psychology from UC Berkeley, where she worked with Daniel Kahneman. She has authored publications in Cognitive Psychology, HCI, and Information Retrieval.

Program

May 13th 2019, 9am, rooms Regency A + Regency B

  • 09:00 - 09:10 Welcome, overview, schedule
  • 09:10 - 10:10 Keynote Talk: Jon Chamberlain, University of Essex.
    Are two heads better than one? An exploration of ambiguity in crowd-collected language decisions from the Phrase Detectives game
  • 10:10 - 10:30 Marc Bron, Ke Zhou, Andrew Haines and Mounia Lalmas. Uncovering Bias in Ad Feedback Data: Analyses & Applications
  • 10:30 - 11:00 Coffee Break
  • 11:00 - 12:00 Keynote Talk: Anurag Batra, Google AI.
    Share your world: Partnering with global communities to build smarter, more inclusive ML
  • 12:00 - 12:15 Gianluca Demartini. Implicit Bias in Crowdsourced Knowledge Graphs
  • 12:15 - 12:30 Alfredo Alba, Chad DeLuca, Anna Lisa Gentile, Daniel Gruhl, Linda Kato, Chris Kau, Petar Ristoski and Steve Welch. Task Oriented Data Exploration with Human-in-the-Loop
  • 12:30 - 14:00 Lunch Break
  • 14:00 - 14:20 Shima Imani, Sara Alaee and Eamonn Keogh. Putting The Human In The Time Series Analytics Loop
  • 14:20 - 14:35 Wenlong Sun, Sami Khenissi, Olfa Nasraoui and Patrick Shafto. Debiasing the Human-Recommender System Feedback Loop in Collaborative Filtering
  • 14:35 - 14:50 Lauren Fratamico. I'm Lonely. Who should I talk to?
  • 14:50 - 15:05 Alfredo Alba, Chad DeLuca, Anna Lisa Gentile, Daniel Gruhl, Linda Kato, Chris Kau, Petar Ristoski and Steve Welch. Identifying High Value Opportunities for Human in the Loop Lexicon Expansion
  • 15:05 - 15:30 Final discussion
  • 15:30 - 16:00 Coffee Break

Important Dates

All dates are 23:59 Hawaii Time
  • Abstract submission: 20 January 2019 (or ASAP, before the paper deadline)
  • Paper submission deadline: 1 February 2019, EXTENDED to 8 February 2019
  • Author notification: 28 February 2019
  • Final version deadline: 3 March 2019
  • Workshop date: 13 May 2019

Call for Contributions

Topics

  • Human Factors:
    • Human-computer cooperative work
    • Mobile crowdsourcing applications
    • Human Factors in Crowdsourcing
    • Social computing
    • Ethics of Crowdsourcing
    • Gamification techniques
  • Data Collection:
    • Data annotation task design
    • Data collection for specific domains (e.g. with privacy constraints)
    • Data privacy
    • Multi-linguality aspects
  • Machine Learning:
    • Dealing with sparse and noisy annotated data
    • Crowdsourcing for Active Learning
    • Statistics and learning theory
  • Applications:
    • Healthcare
    • NLP technologies
    • Translation
    • Data quality control
    • Sentiment analysis
  • Bias in Crowdsourcing:
    • Contributor and crowd worker sampling bias during recruitment
    • Effect of cultural, gender and ethnic biases
    • Effect of worker training and past experiences
    • Effect of worker expertise vs interest
    • Bias in experts vs bias in crowdsourcing
    • Bias in outsourcing vs bias in crowdsourcing
    • Sources of bias in crowdsourcing: task selection, experience, devices, reward, etc.
    • Taxonomies and categorizations of different biases in crowdsourcing
    • Task assignment/recommendation for reducing bias
    • Effect of worker engagement on bias
    • Responsibility and ethics in crowdsourcing and bias management
    • Preventing bias in crowdsourcing
    • Creating awareness of cognitive biases among crowdsourcing agents
  • Crowdsourcing for Bias Management:
    • Identifying new types of cognitive bias in data or content using crowdsourcing
    • Measuring bias in data or content using crowdsourcing
    • Removing bias in data or content using crowdsourcing
    • Presenting bias information to end users to create awareness
    • Ethics of data collection for bias management
    • Dealing with algorithmic bias using crowdsourcing
    • Fake news detection with crowdsourcing
    • Diversification of sources by means of crowdsourcing
    • Provenance and traceability in crowdsourcing
    • Long-term crowd engagement
    • Generating benchmarks for bias management through crowdsourcing

Authors can submit four types of papers:

  • short papers (up to 6 pages in length), plus unlimited pages for references
  • full papers (up to 10 pages in length), plus unlimited pages for references
  • position papers (up to 4 pages in length), plus unlimited pages for references
  • demo papers (up to 4 pages in length), plus unlimited pages for references
Page limits include diagrams and appendices. All submissions must be written in English.

The proceedings of the workshops will be published jointly with the conference proceedings; therefore, submissions should follow the formatting instructions in the General Guidelines for The Web Conference and must be submitted in PDF according to the ACM format published in the ACM guidelines, selecting the generic “sigconf” sample. The PDF files must have all non-standard fonts embedded.

Please submit your contributions to EasyChair.

Organization

Lora Aroyo, Google

Alessandro Checco, University of Sheffield

Gianluca Demartini, University of Queensland, Australia

Ujwal Gadiraju, L3S Research Center

Anna Lisa Gentile, IBM Research Almaden

Oana Inel, TU Delft

Cristina Sarasua, University of Zurich

Program Committee

  • Alessandro Bozzon, Delft University of Technology
  • Irene Celino, CEFRIEL
  • Lydia Chilton, Columbia University
  • Djellel E. Difallah, NYU Center for Data Science
  • Anca Dumitrache, VU University Amsterdam
  • Carsten Eickhoff, Brown University
  • Daniel F. Gruhl, IBM Research
  • Ricardo Kawase, L3S Research Center
  • Walter Lasecki, University of Michigan
  • Praveen Paritosh, Google
  • Bibek Paudel, University of Zurich
  • Marta Sabou, Vienna University of Technology
  • Mike Schaekermann, University of Waterloo
  • Elena Simperl, University of Southampton
  • Amrapali Zaveri, Maastricht University
