Crowdsourcing Services

Crowdsourcing, collecting annotations of data from a distributed group of people online, is a major source of data for AI research. The original idea involved people doing it as volunteers (e.g. Folding@home) or as a byproduct of some other goal (e.g. reCAPTCHA), but most of the data collected in AI today is from paid workers. Recently, Hal Daumé III mentioned on Twitter that Figure Eight, a paid crowdsourcing service, had removed their free licenses for academics, and asked for alternatives. A bunch of people had suggestions which I wanted to record for my own future reference, hence this blog post.

These fell into a few categories:

  • Crowd providers, which directly connect with workers.
  • Crowd enhancers, which provide a layer on top of the providers that adds features (e.g. active learning, nice templates, sophisticated workflows).
  • Annotation tools, which are designed to integrate with crowd providers (or your own internal workers).
  • Interfaces, which make it easier to use one of the crowd providers.

I decided not to break the first two categories apart because it was sometimes unclear whether a service was using their own crowd or providing a layer over another, but I have roughly sorted them. Where possible I have included pricing, though some services did not make it easy to find. Take note of the description in each case, because the kind of data each service collects varies substantially. Also note that many tasks can be restructured as classification tasks (e.g. “Is this coreference link correct?”), making many of these services more flexible than the ‘text classification’ label below may suggest (though structuring your task so costs don’t explode may require some thought).
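As a rough illustration of that kind of restructuring, here is a minimal sketch (the mentions and question wording are made up) that turns candidate coreference links into yes/no classification questions. It also shows why costs can explode: the number of questions grows quadratically with the number of mentions unless you prune unlikely pairs first.

```python
from itertools import combinations

# Hypothetical mentions from a single document (made-up example).
mentions = ["Alice", "the CEO", "she", "the company"]

# Every candidate pair becomes one yes/no classification question.
# With n mentions this is n*(n-1)/2 questions, so pruning unlikely
# pairs first is how you keep annotation costs under control.
questions = [
    {"question": f'Do "{a}" and "{b}" refer to the same entity?',
     "options": ["yes", "no"]}
    for a, b in combinations(mentions, 2)
]

for q in questions:
    print(q["question"])
```

With that caveat noted, here are the crowd providers and enhancers: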

  • Mechanical Turk, which provides a small set of templates plus the option to define a web UI that does whatever you want. Cost is a 20% fee on top of whatever you choose to pay workers (though note it jumps to 40% if you have more than 10 assignments for a HIT!). A rough cost comparison using the prices in this list appears after it.
  • Figure Eight (included for completeness, did not investigate further due to the cost)
  • Hybrid, which seems to support any task you can define in text (including with links?). 40% fee, though there is some form of discount for academic and non-profit institutions.
  • Prolific, where you just provide a link to a site of your own that collects the annotations (it was originally intended for survey research). 30% fee. Last year they had a research grant program.
  • Gorilla, designed for social science research, but could be used for any classification or free text task. Costs $1.19 / response, though note that you construct a questionnaire with a series of questions. There are also discounts available when collecting thousands of responses.
  • Scale, classification tasks for 8c / annotation. There is an academic program, but details are not available online (mentioned here).
  • Amazon SageMaker Ground Truth, text classification for 8c / label (decreasing after 50,000 annotations), plus a workflow fee of 1.2c / label.
  • iMerit, NER, classification, and sentiment tasks. When used via the Amazon Marketplace they charge $5 / hour (India-based workers) or $25 / hour (US-based workers).
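To get a feel for how these prices compare, here is a back-of-the-envelope sketch. It is only illustrative: the MTurk reward is a value you choose yourself, the fee threshold should be checked against current MTurk pricing, and the per-label prices are just the ones quoted above.

```python
def mturk_cost(n_items, labels_per_item, reward):
    """Rough MTurk cost: 20% fee, jumping to 40% for HITs with many assignments
    (check current MTurk pricing for the exact threshold)."""
    fee = 0.40 if labels_per_item > 10 else 0.20
    return n_items * labels_per_item * reward * (1 + fee)

def per_label_cost(n_items, labels_per_item, price_per_label, workflow_fee=0.0):
    """Services like Scale or SageMaker Ground Truth charge per label."""
    return n_items * labels_per_item * (price_per_label + workflow_fee)

n_items, k = 10_000, 3  # 10k items, 3 labels each (made-up workload)
print("MTurk at a $0.05 reward:", mturk_cost(n_items, k, 0.05))
print("Scale at 8c / label:", per_label_cost(n_items, k, 0.08))
print("Ground Truth at 8c + 1.2c workflow:", per_label_cost(n_items, k, 0.08, 0.012))
```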

Mechanical Turk Integration Interfaces

These are interfaces for Mechanical Turk that provide an easier way to set up HITs without having to mess with Amazon’s APIs yourself (a sketch of what the raw API looks like appears after the list). Both are free, but have slightly different features:

  • LegionTools, which can be used hosted or self-hosted, includes key features for real-time systems.
  • MTurk Manager, self-hosted, includes features for custom views of responses from workers.
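For context on what these tools save you from, here is a minimal sketch of posting a HIT directly with boto3. The question HTML, reward, and timing values are placeholders, and this targets the sandbox endpoint; both tools above wrap this kind of call (and collecting the results afterwards) behind a nicer interface.

```python
import boto3

# Sandbox endpoint; drop endpoint_url to post real HITs on the live site.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# An HTMLQuestion wraps an arbitrary web UI, i.e. the "define a web UI that
# does whatever you want" option mentioned above.
question_xml = """
<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
  <HTMLContent><![CDATA[
    <html><body>
      <!-- Placeholder UI: a real HIT needs a form that submits the answer
           back to MTurk along with the assignmentId. -->
      <p>Is this coreference link correct?</p>
    </body></html>
  ]]></HTMLContent>
  <FrameHeight>450</FrameHeight>
</HTMLQuestion>
"""

hit = mturk.create_hit(
    Title="Check a coreference link",
    Description="Answer a single yes/no question about a short text.",
    Reward="0.05",  # dollars, passed as a string
    MaxAssignments=3,
    LifetimeInSeconds=24 * 60 * 60,
    AssignmentDurationInSeconds=300,
    Question=question_xml,
)
print("Created HIT:", hit["HIT"]["HITId"])
```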

Annotation User Interfaces

There are many annotation tools for NLP (e.g. my own, SLATE!), but the tools below are designed to integrate with the crowd providers above to collect annotations.

  • Prodigy, span classification (e.g. NER), multiple choice questions (which can be used to do a wide range of tasks), and relations (see examples). Cost is whatever you pay a crowd provider + $390 for a lifetime license, or $10k for a university-wide lifetime license, though they also often give free licenses to academics. One distinctive property is that you download and run it yourself, providing complete control over your data (a sketch of the kind of input it consumes follows this list).
  • LightTAG, span classification and links. Cost is 1c / annotation + the cost from a crowd provider, but there is an academic license that makes it free.
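To make the annotation tools above a bit more concrete, here is a small sketch of preparing input for one of them. The file name, texts, and label set are made up; Prodigy (and several similar tools) read newline-delimited JSON with a "text" field per line, and the command at the end is just an example invocation, so check the documentation for the exact recipe names and flags.

```python
import json

# Made-up raw texts to be annotated for spans (e.g. NER).
texts = [
    "Hal Daumé III asked about Figure Eight alternatives on Twitter.",
    "Prodigy and LightTAG both support span annotation.",
]

# One JSON object per line with a "text" field is the usual input format.
with open("tasks.jsonl", "w", encoding="utf-8") as f:
    for text in texts:
        f.write(json.dumps({"text": text}) + "\n")

# With Prodigy installed, a manual span-labelling session over this file
# could then be started with something along the lines of:
#   prodigy ner.manual my_dataset blank:en tasks.jsonl --label PERSON,ORG
```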
