Amazon Mechanical Turk (AMT or MTurk) is a US-based microtask marketplace run by Amazon.com, Inc. Launched in 2005, it is to our knowledge the oldest microtask marketplace and one of the three largest English-language microtask marketplaces in terms of market volume and number of workers (the others being US-based CrowdFlower and Germany-based Clickworker). Amazon charges clients, called “requesters,” a fee of 20-45% on top of the payment workers receive for their labor.
- Registered Workers
- 500,000 (2011); est. 10,000 active
- Workers Are...
- Self Employed
- Payment Model
- Payment per task
- Signed Code of Conduct
- Official Company Name
- Amazon.com, Inc.
- Year Founded
- 2005
- Headquarters Location
- Seattle, WA, USA
- Mark Chien (Head of Product and Engineering), Sourabh Miglani (Head of Business Operations), Dave Schultz (Head of Business Development), Janak Meyer (Senior Data Scientist) (March 2017)
- Number of Employees
- About 10
Amazon representatives describe AMT as a system originally built to help “clean” data coming into Amazon’s huge product catalog. Amazon was a clearinghouse for products from many different vendors, and could handle everything from payments to inventory and shipping logistics. These varied vendors would sometimes upload entries for identical products. As a result, customers searching for products would see several search results for products that appeared identical. Amazon designers wanted to hide the duplicate entries, but the task of doing so computationally proved “insurmountable” for Amazon engineers. Amazon also did not want to burden vendors with the task of marking which other Amazon-listed products were identical to their own.
The system designers decided to displace this labor to a “crowdsourced” workforce. Amazon engineers built a site through which Amazon employees, in their spare work time, could contribute to the process of identifying and hiding the duplicate entries. This was successful, and the site was eventually opened to workers and requesters outside Amazon. It was extended to support tasks other than duplicate product identification, and a mechanism for paying workers was added. The cheeky but truthful tagline “artificial artificial intelligence” was coined to describe the new service. With these additions, AMT became a prototype for extending computer scientists’ agency over new pools and kinds of labor. It became the next step in both artificial intelligence and cloud computing. In describing the system in a 2006 lecture at MIT, Amazon CEO Jeff Bezos said, “You’ve heard of software-as-a-service. Well, this is human-as-a-service.”
Source: M. Six Silberman and Lilly Irani, “Operating an employer reputation system”
Mechanical Turk charges clients a fee of 20-45% on top of the payment awarded to workers.
Self-employed / independent contractors.
Jobs and Clients
MTurk is used by all kinds of individuals and organizations to do data processing, including data cleansing, transcription, translation, metadata creation, categorization, tagging, content moderation, data set generation, and sentiment analysis. Organizations include software startups such as SnapMyLife and C-SATS, large technology companies such as Google, YouTube, and Twitter, and government organizations such as DARPA and the US Army Research Lab. MTurk is also used by academic researchers, especially computer scientists and social scientists. Computer scientists use MTurk mainly to train “machine learning” or “artificial intelligence” algorithms. One example of this is Gilt’s “pre-emptive shipping” program (that is, “using data analytics to predict what [customers] might be buying at a given time”). Social scientists use the platform to recruit human subjects for surveys and experiments.
A typical academic research task (“HIT,” or “Human Intelligence Task”) involves a worker selecting and accepting the HIT, agreeing to an Institutional Review Board (IRB) consent statement, and then proceeding to a series of questionnaires or tasks, usually hosted on an external site such as Qualtrics or SurveyMonkey. The tasks vary but may include classic psychological thought problems such as the trolley problem or the prisoner’s dilemma. Other common surveys include Stroop tasks, surveys on product ideas or designs, and personality tests. Many of these also include priming. After completing the tasks, workers are debriefed, thanked, and given a code to submit on the MTurk HIT page to obtain payment. Pay varies depending on the positions of individual IRBs on MTurk payment: some hold that workers must be paid a fair wage for their labor, while others decline to require a fair wage out of concern that higher pay could be financially coercive. This is an ongoing debate between workers, requesters, and IRB groups.
A typical content moderation HIT involves viewing photos, video, or text and marking offensive content. There are no standardized instructions for what is considered offensive; requirements vary by requester. Some workers avoid these HITs as they often include pornographic content, graphic material, and extreme violence. Mechanical Turk allows requesters to obscure their identity, so most of the time workers do not know the origin of this content or where it will be used. Completing these HITs can be psychologically difficult or even traumatic for some workers. Pay varies based on the HIT design but is generally a few cents per piece of content.
A typical machine learning HIT is designed to have workers build data sets or train algorithms to perform actions without human intervention. The HITs often ask workers to analyze a sentence for sentiment, tag specific words in a sentence based on emotions, or determine the intent of a sentence. The results are then compiled and used to further develop machine learning research and systems. Workers often comment that doing these types of HITs will eventually lead to systems that make the workers obsolete. One example is OCR (optical character recognition): years ago, the platform had many HITs where workers typed in text from images of numbers or letters; systems were then developed to do this automatically, and these types of HITs are now rarely posted. Pay for machine learning HITs varies depending on the complexity and length of the given text.
Text by Rochelle LaPlante
The basic work process consists of three major phases. First, requesters design and post their tasks, including setting the price. On the platform, tasks are called HITs (“Human Intelligence Tasks”). Second, workers choose and do tasks. Third, requesters review the work submitted by workers and choose whether or not to pay (“approval” or “rejection”).
Workers are presented with a list of available tasks and self-select which tasks they would like to complete at a price set by the requester. There is no bidding process or bargaining between the requester and worker prior to the acceptance or completion of the task. This process of self-selecting into tasks has led to concerns of selection bias in tasks that require sample randomization, such as surveys for academic research.
Once a worker selects a HIT to complete and clicks to accept it, the worker has a pre-determined period of time — set by the requester — to complete the task. If the timer expires before the worker completes the HIT, the HIT is removed from the worker’s view, the work is not saved, and the worker receives no payment.
After the worker completes a HIT, they click to submit the work. The work is then sent to the requester who has the ability to review it. The requester may choose to either approve the HIT, which releases payment to the worker, or reject the HIT. If the HIT is rejected, the worker receives no payment. According to the Mechanical Turk Terms of Service, the requester has the right to retain all submitted work, whether or not they choose to pay the worker. If the requester does not approve or reject the HIT within 30 days, it is automatically approved, and the worker is paid.
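The review rules described above can be sketched as a small decision function. This is an illustrative model of the outcome logic, not Amazon's actual implementation; the function name and status strings are invented for clarity:

```python
# Illustrative model of a submitted HIT's review outcome (not Amazon's code).
# A submitted HIT is paid if the requester approves it, or if 30 days pass
# without any review; it goes unpaid only on an explicit rejection.

AUTO_APPROVAL_DAYS = 30  # auto-approval window described above

def review_outcome(requester_decision, days_since_submission):
    """requester_decision: 'approve', 'reject', or None (not yet reviewed)."""
    if requester_decision == "approve":
        return "paid"
    if requester_decision == "reject":
        return "unpaid"  # per the TOS, the requester still keeps the work
    # No decision yet: auto-approval kicks in after 30 days.
    return "paid" if days_since_submission >= AUTO_APPROVAL_DAYS else "pending"
```

Note that in this model (as on the platform) there is no third path for disputing a rejection; a rejected HIT is simply unpaid.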
Requesters must prepay for all HITs they post. This makes funds available for disbursal to workers if the work is approved (or if the requester does not review the work within 30 days, in which case the worker is automatically paid). Requesters must have a United States bank account and an Amazon Payments account in order to post HITs; for requesters without access to these, intermediaries exist that post tasks for an additional fee.
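As a rough illustration, the amount a requester must fund up front is the per-assignment reward times the number of assignments, plus Amazon's commission. The function below is hypothetical; the default fee rate is an assumed example value within the 20-45% range quoted earlier:

```python
def requester_prepay(reward_per_assignment, num_assignments, fee_rate=0.20):
    """Total funds a requester must deposit before posting a HIT.

    fee_rate is Amazon's commission charged on top of worker pay
    (20-45% depending on the HIT; 0.20 here is an assumed example).
    """
    worker_pay = reward_per_assignment * num_assignments
    return round(worker_pay * (1 + fee_rate), 2)

# e.g. 100 assignments at $0.50 each with a 20% fee: workers receive
# $50.00 in total, Amazon $10.00, so $60.00 must be prepaid.
```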
Workers can receive payment in US dollars, Indian rupees, or Amazon gift card balance, depending on their country and preference. American workers can choose between having their payments deposited to a bank account or prepaid debit card, or having their earnings added to their Amazon.com gift card balance. Gift card codes are not available, and the gift card balance is non-transferrable. Indian workers have the same options, with payment in Indian rupees. Payment is processed through Amazon Payments, so workers must have not only a valid MTurk account but also an Amazon Payments account in order to work on the platform. Workers can request one transfer per 24-hour period, with a minimum of $1 per transfer and no transfer fees.
The only option for workers from countries other than the United States and India to receive payment is a transfer of their earnings to an Amazon.com gift card balance. Purchases from country-specific Amazon top-level domains are not allowed; workers can only shop at Amazon.com, and may incur large shipping fees.
Amazon does not provide collaboration tools for workers, and workers have little guidance from Amazon on how to perform tasks. In place of this, several independent worker support communities have developed, in the form of worker Facebook groups, IRC channels, and forums. Others use Skype or mobile apps like Snapchat or WhatsApp to share information about HITs and requesters. Workers with coding skills have written browser scripts to enhance the site’s usability and help other workers complete tasks more accurately and quickly. Dozens of scripts can be found on sharing sites such as GreasyFork. Particularly among Indian workers, word of mouth is used to recommend HITs worth working on and to share the names of trustworthy requesters. Workers also use a site called Turkopticon to rate requesters.
Issues Facing Workers
For workers in the “developed world” – which includes most workers, as most Mechanical Turk workers are based in the United States – pay is relatively low compared to pay for similar work not mediated by a platform (see pay data).
Unclear communication between workers, requesters, and Amazon is also an ongoing issue. Amazon provides no messaging system to facilitate communication. If a worker emails a requester through the platform, the worker’s full name and email address is disclosed, violating worker anonymity.
Amazon provides little support to workers. When workers send an email to Amazon, the response is often inadequate, and workers report being given very different answers to the same questions. There is no help ticket system or method for following up on issues described in emails sent to Amazon. Amazon has directed workers to post their questions to worker forums and to the volunteer-operated requester rating system Turkopticon in lieu of providing support. MTurk’s documentation is often incorrect or outdated. For example, the Requester’s Best Practice Guide says that “a Worker who receives multiple blocks from different Requesters will be suspended from working on Mechanical Turk.” However, on the Amazon Web Services Mechanical Turk Discussion forum, an Amazon employee wrote:
We ignore blocks without any specific cause for purposes of worker quality. Therefore it cannot lead to account suspension for workers. This is why we freely recommend that Requesters use blocks as a way to limit their worker population because it has no negative effect for the worker’s account.
Accidental or malicious HIT rejections pose another frequently occurring problem. When work is submitted, requesters have the choice to approve or reject it. If a HIT is rejected, the worker receives no payment and their overall approval rating is lowered. This becomes problematic when other HITs require a high approval rating in order to access them. When a requester rejects a worker’s work, the worker has no way to dispute the rejection. According to Mechanical Turk’s Participation Agreement, often called the “Terms of Service” or “TOS”:
Because Amazon Mechanical Turk is not involved in the actual transaction between Providers and Requesters, Amazon Mechanical Turk will not be involved in resolving any disputes between participants related to or arising out of the Services or any transaction.
The TOS also transfers ownership of the work to the requester even if the requester rejects it.
There are four main causes of accidental or malicious rejections. First, Amazon recommends a quality control method called “plurality” to requesters. In this model, a HIT is completed by three workers; if two workers agree and the third disagrees, the work from the third is rejected. Second, the requester may be intentionally scamming the platform to obtain free work. Third, a requester may reject large amounts of work because they do not understand the consequences of rejections for workers. Fourth, errors in HIT design or evaluation may lead to accidental and erroneous rejections.
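The “plurality” check described above amounts to a majority vote over three submissions. The sketch below is illustrative (actual requester implementations vary), and it makes the failure mode plain: a dissenting worker is rejected even when their answer is the correct one:

```python
from collections import Counter

def plurality_review(answers):
    """Given answers from (typically three) workers for one HIT, return
    (accepted_answer, rejected_worker_indices).

    Workers whose answer disagrees with the majority answer are rejected
    and go unpaid -- even if they happen to be the ones who are right.
    """
    counts = Counter(answers)
    majority_answer, _ = counts.most_common(1)[0]
    rejected = [i for i, a in enumerate(answers) if a != majority_answer]
    return majority_answer, rejected
```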
The ethical treatment of workers generally is an ongoing concern. Scholars in fields as diverse as computational linguistics and political science have written about the ethics of using crowd workers as an academic research pool or as a labor force. In this discourse MTurk is often described as a digital piece work system, with relatively low pay and limited (or nonexistent) worker protections. The MTurk TOS asserts that workers are independent contractors. As a result, they are not entitled to minimum wage, a minimum number of hours, dismissal protection, anti-discrimination protection, overtime pay, paid vacation, employer-based health insurance or retirement savings plans, or the right to organize and negotiate collective agreements with requesters or Amazon.
A collaboration between workers, requesters, and researchers led to the creation of a document called the Guidelines for Academic Requesters which outlines some of the common ethical and logistical problems and offers some solutions and best practices to support protection and rights for workers.
Text by Rochelle LaPlante
This information was collected from 25-100 verified workers on the platform in 2016 and 2017. More information
Introduction and Survey Notes
Many of our respondents were professional or semi-professional Turkers. It is important to take this into consideration when reading the rest of these survey results — they primarily reflect the experiences of an elite set of Turkers. For example, we have some reason to believe that overall wages on the platform are somewhat lower than those reported by these participants, who are experts at securing high-paying work on the platform.
There is high competition on MTurk for HITs that pay well, including things like this survey. All HITs for this survey were picked up within a few minutes of its posting. We released a second round at half the price (bonusing participants up to the full amount after completion) to try to capture a wider set of workers.
Nonetheless, about a third of all respondents worked at least 30 hours per week on MTurk. In addition, 80% of respondents reported that their MTurk wages were an important component of their budget. Almost two thirds of these workers said that this income was necessary for meeting their basic needs.
I use Mechanical Turk currently as my primary source of income as someone who is self-employed. I enjoy being able to work from home, choose the hours that I work, and not need to commute. Mechanical Turk allows me to earn income from home without currently needing employment elsewhere.
Hourly Wages
- Minimum: €3.77
- Maximum: €29.43
- Average: €10.65
- Median: €8.67
Wages received by MTurk survey participants were fairly high compared to other microtask platforms, although many workers were still making less than minimum wage.
Nonpayment was not a frequently occurring issue for most workers surveyed, although it was frustrating and most participants had at least one nonpayment story to tell.
With a median wage of €8.67 and an average wage of €10.65, a majority of participants were making at least the German minimum wage (€8.84/hour as of January 2017), and an even larger number of the mostly US-based workers were making more than the US minimum wage ($7.25 an hour, approximately €6.60 at time of writing).
Nonetheless, some respondents reported making as little as €3.77 / hour, a very low rate for a self-employed person, who has extra overheads to pay.
Hourly Wage Distribution
As with other microtask platforms, workers reported spending a fair amount of time looking for tasks, as much as one hour spent looking for work for every hour spent actually doing work. The average for survey respondents was closer to one hour spent looking for work for every 3-4 hours spent actually doing work. If this time were taken into account, wages on the platform would be lower.
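To make the effect of unpaid search time concrete, earnings can be spread over paid work hours plus search hours. This is simple arithmetic for illustration; the example figures are taken from the survey results above:

```python
def effective_wage(paid_hourly_wage, work_hours, search_hours):
    """Hourly wage once earnings are spread over paid work time plus
    unpaid time spent searching for tasks."""
    earnings = paid_hourly_wage * work_hours
    return round(earnings / (work_hours + search_hours), 2)

# At the respondents' average rate (one unpaid search hour per roughly
# 3.5 worked hours), a €10.65 nominal wage drops to about €8.28; at the
# one-to-one ratio seen on other platforms, it would be halved.
```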
Sixty percent of survey respondents had experienced nonpayment at least once on the platform, although about three quarters of these people said that it had only happened once or twice.
Frequency of Nonpayment Experiences
Nonetheless this was a very frustrating experience for workers, and many people left comments about their nonpayment stories.
There’s been a few times I’ve been rejected for work that I did and that kind of sucks because I always put in my best effort. Just yesterday a requester rejected my survey because I didn’t get a code for it. It was only worth 80 cents so not a big deal, but it’s the principle of the matter. The survey asked for my worker ID, so the requester had proof I completed it, I was able to list everything about the survey, I even had a screenshot of the final page of the survey showing it was completed, but they would not budge on reversing the rejection because “300 people all got their codes just fine, so I see no reason why you wouldn’t have gotten a code” which is ridiculous.
One time someone new messed up their experiment so the data didn’t properly upload and they just rejected a bunch of people.
Other times bonuses or extra pay is promised, and they don’t pay it. There’s too many to keep track of, and often small pay is promised, so workers usually don’t track them down. I think it usually happens because requesters either forget to pay bonuses or sometimes an outdated template is used and they don’t change the bonus/pay promises. I feel that rarely do they not pay bonuses out on purpose.
More experienced MTurk workers had strategies for avoiding nonpayment issues by only working for requesters that had already built up a good reputation:
I have tried doing work for new requesters that no turkers have worked for before. And sadly sometimes they steal the work and reject our submitted hits. I have learned to stay away from new requesters until another mturker completes some of their work and is paid.
Communication with clients and other workers on Mechanical Turk is generally satisfactory for most survey respondents. However, few workers knew how to communicate with platform management.
In addition to on-platform communication tools, a variety of unofficial websites foster a strong community of MTurk workers.
Communicating with management
Less than a quarter of survey respondents had ever communicated with platform management. Consequently, we do not have a rating for communication with management on this platform. Curiously, even among the survey’s fairly experienced Turkers, forty percent of respondents reported that they did not know how to get in touch with platform management, or believed it impossible. It certainly is not an easy and straightforward part of the worker interface on the platform.
Communicating with clients
Communicating with clients — or requesters as they are called on Mechanical Turk — was another story. Nearly all (92%) of respondents had communicated with a requester at least once. Although few workers left comments about their experiences, their survey responses as graphed here indicate that the experience was generally okay. While few workers felt that client communication was always prompt, respectful, or helpful, for most workers, a majority of their interactions were positive. As one worker noted, even if they did not always get a written response from requesters, it seemed that their messages did go through and have an impact:
Some requesters will not respond when I send them an email about a hit they posted, but the hit will always be approved, so in a way I feel like they did hear me out.
Communicating with other workers
Mechanical Turk does not provide official communication channels for all workers. However, it does provide a forum specific to workers who have earned the “Master” qualification.
Survey respondents who communicated with other workers via official channels generally reported positive experiences.
Unofficial Worker Forums and Groups
However, most workers found that the most meaningful and useful conversations with fellow workers occurred off the official platform — on Facebook groups, Reddit, and private worker forums.
Workers use these forums for a mix of socialization with their remote-coworkers:
I use it mainly to chat and socialize
And also to learn about how to work more efficiently and successfully:
Currently I talk with other works on the forum mturkcrowd.com. It’s a good site with helpful people that share HITs as they pop up, scripts to get certain jobs done faster/easier, and just general discussion and questions about work. They prohibit giving specific details of studies though, as requesters would be unhappy with that.
Popular forums include:
While workers generally thought that evaluations of their own work were fair, the platform lacks a number of features that are important to working conditions: a clear process for contesting unfair evaluations, requirements that negative evaluations of workers are backed up by good reasons, and the ability for workers to also evaluate clients.
A small number of respondents reported negative experiences with requesters’ evaluations of them. However, most respondents thought that clients’ evaluations of them and their work were fair most of the time.
Although there is not a platform-mediated protest process (like there is on Upwork), most workers in this survey who had an issue with work rejection were able to contact requesters directly and resolve the situation most of the time:
I usually email the requester and ask what the issue was. Sometimes I provided the wrong survey code. Sometimes the HIT isn’t set up right and the data doesn’t get collected. We can usually resolve the issue.
It would be better if there were a process integrated into the platform workflow for workers to protest an unfair evaluation. Moreover, it is inherently problematic that Mechanical Turk requesters do not have to give good reasons for leaving negative ratings — or for rejecting workers’ submissions. These ratings stick with workers forever and affect their ability to get new work.
Requesters on Mechanical Turk can limit their tasks to only a set of workers with certain kinds of ‘qualifications.’ However, the process for granting and revoking qualifications is not always clear to workers.
It can sometimes be confusing to understand … why you can or can’t accept a job, how to earn qualifications.
Because many qualifications are given out directly by requesters, it is up to requesters to clarify (or not) the criteria for a particular qualification:
Regarding qualifications, most are fairly clear cut and it’s easy to understand how you get it. Requesters will post qualification tasks/tests and if you pass you get the qual. But some aren’t clearly marked as qualification tests, so you won’t know until later, and it’s possible to end up missing out on a good qual.
Also, Mechanical Turk itself gives out the Master Qualification — a special qualification that can be used by any requester on the platform to limit who can complete their HITs. Yet, from workers’ perspectives, the criteria for earning this qualification are completely opaque. Many workers mentioned this particular qualification in the survey, including these three comments:
mTurk itself has a Masters qualification which from what I’ve gathered, other workers feel is randomly given out. Masters qualification is supposed to be the most “experienced” and best workers but it seems random.
And of course no one knows how Amazon assigns the Masters qualification, and there haven’t been any new ones granted in over a year, so that’s really vague and unclear.
Masters, how’s it assigned? People get it after being banned and after only doing 2000 hits, yet others have done millions or in my case hundreds of thousands, and no masters.
While requesters have significant power over workers’ reputations on the platform, workers do not, in turn, have any way to rate requesters as part of the platform. Notably, however, many use external sites such as Turkopticon to accomplish this. Many experienced workers will not accept HITs from requesters who do not already have good reviews on external worker sites and forums.
Primarily use Turkopticon. Anybody on mturk can rate any requester and complain or compliment them. Rate pay, unfair rejections and how quickly they pay as well. Not 100% accurate, as often information is out of date, and requesters can change from generous to stiff, but it’s still extremely useful.
These third-party worker sites are useful for workers to find out both positive and negative feedback about different requesters, which often includes detailed breakdown about different aspects of the worker-requester relationship:
I’ve utilized this site to give both good feedback, and to warn others, about certain requesters.
I find out a lot about requesters. I get a sense of how fair they are, the time to complete surveys, and if they actually respond to questions.
Workers in this survey were split on how often they find tasks on Mechanical Turk that are meaningful, interesting, fun, or satisfying. However, very few workers reported frequently finding the work to be particularly negative — only a very small number of workers had experience doing dangerous, demeaning or ethically questionable work.
Positive features of tasks on Amazon Mechanical Turk
Although workers did not leave many detailed comments about the tasks on MTurk, a few workers did comment that they liked certain kinds of tasks on MTurk, especially participating in research by filling out surveys:
It is the easiest part time job I have ever had and it has the added bonus of helping research
In addition to the positive feeling of having contributed to research, some workers also liked that working on MTurk helped them learn new things:
I love having the option to work on my days off. I also really appreciate most of the requesters on mturk. I have been a part of some amazing studies and start up companies. I learn something usually every day!
How often is the work ...
Negative features of tasks on Amazon Mechanical Turk
While few workers in this survey frequently completed tasks on MTurk that were dangerous, demeaning, or questionable, many respondents in this survey are professional Turkers who are selective about the tasks they do. Prior research has drawn attention to the potential psychological harms of content moderation work in particular. Please see the section above about Jobs and Clients.
How often is the work ...
In general, most workers found the Amazon technology to be reliable and fast. However, nearly a third of respondents raised concerns about how user friendly it was.
One worker noted a frustration with the frequent use of CAPTCHAs. When they are part of every microtask a worker is completing, filling out these forms can significantly cut into a worker’s rate of pay:
Get rid of CAPTCHAs, These cost me 10 to 50 dollars everyweek.
Such little things can make a big difference to the worker experience. Ways of speeding up site use — not just in server response times, but also in better information architecture — could improve the experience for many people.
More generally, this comment from one respondent well summarizes the opinions of many workers regarding the Mechanical Turk technology:
It’s old and not user friendly, but I guess it works. Every worker forum highly suggests using additional programs or scripts in order to aid workers finding work. Using the default mturk website would really suck.
Workers rarely had issues with the site breaking on them; however, almost all survey respondents used external scripts and browser plugins to make the site fully functional. That is, while the Amazon technology that exists is functional, it is not really complete.
Things Workers Like
Like respondents on many other platforms, many MTurk workers found the flexibility of the task-based platform to be an important benefit. They could choose when and where to work, and could work remotely. This was important for everyone from freelancers to caregivers to people with anxiety or depression. Finally, several workers commented that they simply enjoyed some of the tasks on the platform — they liked contributing to others’ research and learning new things themselves.
Low barrier to start: Easy to fill in gaps in employment/income
Some survey respondents liked the platform because it was easy to start working. One can sign up and get started working almost right away.
I needed to find some work and it was the best option I had at the time to get started right away. Since then it’s grown into an almost real job with real income so I’m not inclined to move on yet.
For unemployed or under-employed persons, Mechanical Turk was a way to start getting some kind of income stream immediately without waiting on a long application or hiring process.
For freelance gig-economy workers, contracts may be part time, and workers may have long breaks between contracts while they look for new work. Mechanical Turk provided a way for them to flexibly fill in these kinds of gaps in their otherwise less-than-full-time employment:
As extra income to supplement my freelance work in web development.
Juggling multiple part-time traditional jobs can also be a challenge, especially when it comes to scheduling. Again, for some respondents MTurk work was a useful supplement to more traditional under-employment.
At work my hours were cut to about 25-30 hours a week with a corresponding cut in salary. So I’ve been working on mTurk to supplement my income.
Flexibility: An alternative to boredom
The flexibility of MTurk to fill in income gaps, or under-employment schedules, was also beneficial for workers who found themselves sometimes bored, and would prefer to use their free time doing something productive and income-producing:
I work because it’s a welcome addition to my finances. The work is available whenever and where I want, I just need a laptop and I can make some extra money. I find myself doing work in stead of doing nothing during times that I am bored. The money is nice to buy toys or presents for myself.
Some workers also used MTurk when they were bored at their primary jobs and needed something to help them stay focused:
We frequently experience lulls in my place of work. On some days I find that I must wait for the completion of a given task in another department before I can commence my own work that day (i.e., checking in new merchandise that I must photograph, edit, and render web-ready). During these lulls, I often complete surveys on MTurk to remain focused and to combat boredom.
Flexibility for caregivers
I’m a single parent to 4 kids in 3 different schools. Traditional workplaces are not compatible with my current needs.
My wife is the bread winner, i picked this up while being an at home dad. trying to pull my weight
It makes it easier for me to stay at home with my daughter, and not have to pay childcare expenses. I also have a more flexible schedule this way.
To earn extra income from home while caring for my elderly mother
I need the money and I need to be available at home. My mom has cancer and has a dog that needs attention. I prefer to be here when she needs me during her last years.
I am a stay-at-home mom and work for MTurk during the days when my kids are at school. It helps give us a little extra money for our kids activities. I like having the freedom of being able to work when I want.
Remote work benefits people with mental health conditions such as anxiety
Multiple participants in our survey commented that the remote work-from-home arrangement worked better for them than a traditional job because they had mental health conditions such as anxiety and depression.
It’s my main source of income. The reason I don’t have a conventional job is mostly due to general anxiety and social anxiety. I also don’t have reliable transportation. I also like working and being at home all day. I don’t get sick of it, I’m comfortable.
I work on Mechanical Turk because i have depression and anxiety. This keeps me from communicating with others in the real world.
For workers who had been laid off from a previous traditional job, MTurk likewise provided an immediate income stream:
I lost my last job due to a mental health condition. Working turk allows me to help a little while attending to my medical issues.
Concerns about the platform
Workers have a number of concerns about the platform as a place of work. Common concerns raised in our survey included income instability, competition among workers for the best-paying tasks, and worker-requester power imbalances.
Some respondents who were not full-time workers on the platform had concerns about whether MTurk could be a sustainable and reliable income stream:
I work on MTurk to make extra money to pay off bills. I have also worked on MTurk in between jobs to keep receiving an income. I would be afraid to completely do MTurk by itself without another source of income.
Timing is everything: more workers than tasks
One issue that contributes to the income instability concern is the competition among workers for high-paying tasks:
Some days I just want to get down and work and there is very little for me to actually do and it can be frustrating leading to breaks in motivation not to mention less money. Timing is everything. Being on when the good hits drop, knowing when they typically do, and being fast enough to actually get the work. I don’t have very much control how much I make on a day to day basis.
To get the best paying tasks, workers need to be lucky enough to be logged on at just the right time on a particular day, and they need to be fast enough to pick up a task almost as soon as it appears:
I don’t like needing to compete with others in order to accept the tasks quickly on Mechanical Turk so that I can work because sometimes I do not always have work to do if I can’t accept the tasks fast enough.
Ultimately, many workers felt that there were more workers looking for tasks than there were well-paying tasks to pick up:
I do not like that the workforce is saturated with too many workers.
When workers are not able to pick up the few high-paying tasks that appear each day on the platform, they are left with a choice: either not work at all, or work on tasks with very low wages.
While a majority of the workers in this survey averaged at least minimum wage for their work on Mechanical Turk, many were still struggling to piece together income from very low-paying HITs:
I dislike how many requesters feel it’s okay to pay a pittance. Too many think that posting work that will take an hour for 2 dollars is perfectly ok. I dislike how standoffish Amazon is about the platform, instead just letting workers sort out their own problems with requesters and not intervening.
Sometimes workers reported not realizing how low-paying a task would be until after they had started working on it, for example with a survey that took longer than advertised. Yet once a worker had accepted a task and started working on it, it was hard to walk away. Accumulating rejections for incomplete or incorrect work can threaten workers' ability to get new tasks in the future: requesters commonly limit HITs to workers above a certain approval rating, e.g., 95%.
Requester-Worker Power Imbalance
Many survey respondents commented on frustrating experiences with work rejection. While rejection was fairly rare, it was a common source of frustration. Rejection not only means not getting paid for the time spent on a task; it also affects workers' ability to get new work, since requesters often make tasks available only to workers with a certain approval rating (e.g., 95% or greater).
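To make the approval-rating gate concrete, here is a minimal sketch of how a requester configures it when publishing a HIT. This assumes the public MTurk API as exposed through boto3; the qualification ID shown is MTurk's built-in "PercentAssignmentsApproved" system qualification, and the helper function name is our own:

```python
# Built-in MTurk system qualification tracking a worker's lifetime
# percentage of approved assignments.
APPROVAL_RATE_QUAL = "000000000000000000L0"

def approval_rate_requirement(minimum=95):
    """Build a QualificationRequirement restricting a HIT to workers
    whose approval rating is at or above `minimum` percent."""
    return {
        "QualificationTypeId": APPROVAL_RATE_QUAL,
        "Comparator": "GreaterThanOrEqualTo",
        "IntegerValues": [minimum],
        # Workers below the threshold cannot even accept the HIT.
        "ActionsGuarded": "Accept",
    }

# A requester would then pass this when creating a HIT, e.g.:
#   client = boto3.client("mturk")
#   client.create_hit(..., QualificationRequirements=[approval_rate_requirement(95)])
```

Because a single rejection lowers the numerator of this lifetime ratio, each rejection narrows the pool of HITs a worker can see, which is why workers treat rejections as more than a lost payment.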
Thus, many workers are rightly outraged that requesters can reject their work without an explanation:
I don’t like that requesters are so free to reject work without giving an explanation. It rarely happens, but when it does, you feel like you’ve been scammed.
In addition, many workers are frustrated that Amazon does not help mediate between workers and clients in these kinds of situations:
If a requester decides to reject your work, there is no way to contest this and have Amazon make a fair ruling. This is completely up to the requester and you basically did their work for free if they decide to be dishonest. It hurts morale sometimes.
Scam requesters who reject for bogus reasons knowing they can get away with it due to non-interest of amazon in evaluating cases, even when dozens of people get rejections for bogus reasons.
Instead, it is up to the worker to take on the extra labor of negotiating payment for work they have already completed, often spending more time than the microtask itself took.
I personally haven’t had this problem before, but I have heard about it, and I wish it could change. When a hit that I do is rejected, I can contact the requester about it, and see what happened. However it is completely up to the requester to either reverse the rejection or leave it. Some work is rejected unfairly, so this leaves the worker feeling frustrated and helpless. I would like to see something that gives the workers a little more of a voice regarding work being rejected.
Just as workers can be banned from the platform by Amazon, some workers would like to see dishonest, low-paying, or otherwise problematic requesters banned as well.
Some requester post shit work and I don’t know about it until I’m too deep to quit. I want these requesters booted off the platform so that I can find better work from better requesters.
Other workers would instead like requesters to become better educated about how to use the platform and its features. Some of this work falls on requesters to do their own research, but some of the burden falls on Amazon, which does not provide good documentation to help requesters who are new to the platform and do not know how things work:
The only real thing I don’t like about Mturk is that there seems to be little quality control over requesters. I’ve seen instances in the past where a requester has no issues and mostly positive reviews suddenly start sending out mass rejections to workers. Often times they don’t understand how Mturk themselves work and use tools and features in the wrong way which can damage a workers reputation. An example of this is when they use the block user feature to prevent someone from doing their work more than once. This block is a negative mark on a workers record and should not be used in that way. The requester documentation is very lacking. Just last week I had to explain to a new requester how to pay me a bonus. Things like this should be made obvious.
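The bonus payment the quoted worker had to explain is made through a separate API operation rather than the normal assignment approval. As a sketch, the parameter names below follow the MTurk SendBonus operation as exposed through boto3; the helper function and the example IDs are hypothetical:

```python
def bonus_params(worker_id, assignment_id, amount_usd, reason):
    """Build the arguments for MTurkClient.send_bonus. Note that
    BonusAmount must be a string dollar amount (e.g. "1.00"), a
    detail that trips up requesters new to the API."""
    return {
        "WorkerId": worker_id,
        "AssignmentId": assignment_id,
        "BonusAmount": f"{amount_usd:.2f}",
        "Reason": reason,  # shown to the worker alongside the payment
    }

# A requester would then call, with real IDs from a completed assignment:
#   client = boto3.client("mturk")
#   client.send_bonus(**bonus_params("A1EXAMPLE", "3EXAMPLE", 1.00,
#                                    "Thanks for the careful work"))
```

The point of the sketch is how little is involved; that workers must teach requesters even this much underlines the documentation gap respondents describe.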
Turking Effectively Requires Volunteer-Worker-Supported Software
Novice workers often struggle to make even a basic minimum wage on the platform. Only over time, as workers find the right worker communities, learn which add-ons to install in their browser, and become expert in the platform's mechanics, can they make a more reasonable wage.
Before finding the forums there was no way I would be able to make a living wage. The scripts and extensions that are shared through this community have been unbelievably helpful as long with the people that provide help.
Moreover, some workers were concerned that the platform is only fully functional once they install extra tools that are supported solely through the volunteer labor of other workers. Since these scripts and plugins are important to workers' ability to work on the platform, they would like to see Amazon incorporate these changes into the platform directly, and perhaps pay the workers who did the initial work.
I dislike how most of the positive UI changes that have helped workers have been made by 3rd parties (workers) rather than done by Amazon itself. I’d probably incorporate a lot of the most popular script features into mturk itself if I had the option.
| Contract Term | Rating | Notes |
| --- | --- | --- |
| Ability to Refuse Payment | Negative | Clients may refuse payment for any or no reason. The worker has no right to contest. |
| Change to Terms of Service | Negative | Unilateral change to terms without notice. A worker's or client's continued use of the site signals acceptance of the changed terms. |
| Warranty | Negative | In the event of defects, the client refuses payment. There is no opportunity for the worker to improve the work. |
| Contact with Employers | Positive | No prohibition on client contact. |
| Contact with Workers | Positive | No prohibition on contact with other workers. |
- Birgitta Bergvall-Kåreborn & Debra Howcroft, Crowdsourcing and Open Innovation: A Study of Amazon Mechanical Turk and Apple iOS (Proc. 6th Intl. Society for Professional Innovation Management Symposium, 2013).
- U.S. Patent No. 7,197,459 (issued Mar. 27, 2007).
- See Jason Pontin, “Artificial intelligence, with help from the humans.” New York Times, 25 Mar 2007.
- Jeff Bezos, Opening Keynote and Interview, MIT Emerging Technologies Conference, 27 Sep 2006.
- Amazon Mechanical Turk Case Studies: SnapMyLife. Accessed 3 May 2017.
- C-SATS. Accessed 3 May 2017.
- Amazon Mechanical Turk Case Studies: DARPA. Accessed 3 May 2017.
- Amazon Mechanical Turk Case Studies: US Army Research Lab. Accessed 3 May 2017.
- Nicole Laskowski. 2014. Mechanical Turk supplies Gilt with ‘artificial artificial intelligence’. Search CIO. Accessed 3 May 2017.
- Sarah T. Roberts. 2014. Behind the Screen: the Hidden Digital Labor of Commercial Content Moderation. Ph.D. dissertation, University of Illinois at Urbana-Champaign. Abstract accessed 3 May 2017.
- See e.g. Amazon Mechanical Turk Case Studies: Acxiom. Accessed 3 May 2017.
- See e.g. Futurism, Sampling bias in science: here’s why you need to go back to the source (13 Aug 2015, accessed 3 May 2017) and David Geiger, Personalized Task Recommendation in Crowdsourcing Systems, p. 73 (2015; Google Books page accessed 3 May 2017).
- See the Amazon Mechanical Turk Participation Agreement (last updated 2 Dec 2014; accessed 3 May 2017). Section 3b (“Providers [i.e., workers] in General”) states (emphasis added):
all ownership rights, including worldwide intellectual property rights, will vest with the Requester immediately upon [the worker’s] performance of the Service.
- See Clickhappier, Middlemen / Intermediary Requesters List. 18 Jan 2016 (orig. 10 Aug 2014); accessed 3 May 2017.
- MTurk Communities. From the /r/mturk wiki. Last edit Jan 2017 by Clickhappier; accessed 3 May 2017.
- See e.g. search results for “mturk” on GreasyFork.
- See e.g. Neha Gupta, David Martin, Benjamin V. Hanrahan, and Jacki O’Neill, Turk-Life in India (2014).
- Matthew Lease, Jessica Hullman, Jeffrey P. Bigham, Michael S. Bernstein, Juho Kim, Walter Lasecki, Saeideh Bakhshi, Tanushree Mitra, and Robert C. Miller. 2013. Mechanical Turk is not anonymous. Accessed 4 May 2017.
- Amazon Mechanical Turk Requester Best Practices Guide. 2015. Accessed 4 May 2017.
- IsaacM@AWS. Re: Prevent workers from doing survey more than once ? AWS Developer Forums, 25 Feb 2012.
- Mechanical Turk Participation Agreement, Section 3f.
- Amazon Mechanical Turk API Documentation | HIT Review Policies.
- Vanessa Williamson. 2014. On the ethics of crowdsourced research. John F. Kennedy School of Government, Harvard University.
- See e.g. Matt Finkin, 2016, Beclouded work in historical perspective, Comparative Labor Law & Policy Journal 37(3); and Valerio De Stefano, 2016, The rise of the “just-in-time workforce”: On-demand work, crowdwork and labour protection in the “gig economy”, Conditions of Work and Employment Series No. 71, International Labour Office.
- Guidelines for Academic Requesters – WeAreDynamo Wiki. Version 2.0. Accessed 4 May 2017.
- The Master qualification process is not clear to workers; see this reddit thread for some self-reported speculation about it: https://www.reddit.com/r/mturk/comments/36ic4h/master_qualification/