Much “artificial intelligence” is still people behind a screen



Many companies that tout their AI prowess are actually using low-paid workers to power their software, as the “AI” label gives them easier access to funding.

  • By Parmy Olson / Bloomberg Opinion

The nifty CamFind app has come a long way with its artificial intelligence (AI). It uses image recognition to identify an object when you point your smartphone’s camera at it.

However, in 2015, its algorithms were less advanced: the app relied mainly on contractors in the Philippines, who quickly typed out what they saw through a user’s phone camera, the CamFind co-founder recently confirmed to me.

You wouldn’t have guessed that from a press release he issued that year that touted industry-leading “deep learning technology” but didn’t mention any human taggers.


The practice of hiding human input in AI systems remains an open secret among those working in machine learning and AI. A 2019 analysis of tech startups in Europe by London-based MMC Ventures even found that 40% of so-called AI startups showed no evidence of actually using artificial intelligence in their products.

This so-called “AI washing” should come as no surprise. Global investment in AI companies has grown steadily over the past decade and more than doubled in the past year, according to market intelligence firm PitchBook.

According to MMC Ventures’ analysis, labeling your startup an “AI company” can command a funding premium of as much as 50% over other software companies.

Yet obscuring the workers who power these systems enables unfair labor practices and skews the public’s understanding of how machine learning actually works.

In Silicon Valley, many startups have succeeded by following the “fake it ’til you make it” mantra. For AI companies, hiring people to prop up algorithms can be a stopgap that sometimes becomes permanent.

Humans have been discovered secretly transcribing receipts, setting calendar appointments, or performing bookkeeping services on behalf of “AI systems” that got all the credit.

In 2019, a whistleblower lawsuit against a UK company claimed customers paid for AI software that analyzed social media while staff members did that work instead.

There’s a reason it happens so often: building AI systems requires many hours of humans training algorithms, and some companies have slipped into the gray area between training and operating them.

A common explanation is that human workers provide “validation” or “supervision” for the algorithms, a kind of quality control.

However, in some cases, these workers perform more cognitively intensive tasks because the algorithms they supervise do not work well enough on their own. This can reinforce unrealistic expectations of what AI can do.

“It’s part of this pipe dream of super-intelligence,” says Ian Hogarth, an angel investor, visiting professor at University College London and co-author of an annual State of AI report released Tuesday.

For hidden workers, working conditions can also be “anti-human,” he says.

This can entrench inequities and lead to poorly performing AI.

For example, Cathy O’Neil has noted that Facebook Inc’s machine-learning algorithms don’t work well enough to stop harmful content. (I agree.) A recent academic study suggests the company should double its 15,000 content moderators.

However, Facebook could also bring its existing moderators out of the shadows.

Contract moderators are required to sign strict nondisclosure agreements and are not allowed to talk about their work with friends and family, said Cori Crider, founder of tech advocacy group Foxglove Legal, which has helped several former moderators take legal action against Facebook over allegations of psychological harm.

Facebook said content reviewers can take breaks when they need to and don’t have to make hasty decisions.

The work of moderation is mentally and emotionally draining, and Crider says contractors are “optimized within an inch of their life” with an array of targets to hit.

Keeping these workers hidden only exacerbates the problem.

A similar issue afflicts Amazon.com Inc’s Mechanical Turk (MTurk) platform, which posts small jobs for freelancers.

In their book Ghost Work, Microsoft Corp researchers Mary Gray and Siddharth Suri argue that these freelancers are part of an invisible workforce that labels, edits and sorts much of what we see on the Internet.

AI doesn’t work without these “humans in the loop,” they say, yet these people are grossly undervalued.

A recent paper by academics at Princeton University and Cornell University called out data-labeling companies such as Scale AI Inc and Sama Inc, which pay workers in Southeast Asia and sub-Saharan Africa as little as US$8 per day. That may be a living wage in those regions, but over the long run it perpetuates income inequality.

A spokeswoman for Sama said the company has helped more than 55,000 people lift themselves out of poverty, and that paying higher local wages could negatively affect local markets, resulting in higher costs for food and housing.

Scale AI did not respond to a request for comment.

“Micro-work has no rights, no security or routine and pays a pittance – just enough to keep a person alive but socially crippled,” writes Phil Jones, a researcher at the UK employment think tank Autonomy, adding that it is fallacious to paint such work as beneficial to a person’s skills.

Data labeling is so monotonous that Finland has outsourced it to prison inmates.

Improving the employment status of these workers would not only improve their lives, it would also improve the development of AI, since feeding algorithms inconsistent data can hurt future performance.

Crider says Facebook needs to make its content moderators, most of whom work for agencies such as Accenture PLC, full-time staff if it is serious about solving its content problems.

Princeton and Cornell researchers say taggers need a more visible role in AI development and fairer compensation.

A glimmer of light: Freelancers who do micro-tasks on Amazon’s MTurk platform recently held worker forums to press Amazon on issues such as rejected work, according to one of their representatives.

They aren’t forming a union per se, but their effort is a rare attempt at organizing, one that gives AI’s ghost workers a voice they haven’t had until now. Here’s hoping the idea gains ground.

Parmy Olson is a Bloomberg Opinion columnist covering technology. She previously reported for the Wall Street Journal and Forbes and is the author of We Are Anonymous.

This column does not necessarily reflect the opinion of the Editorial Board or of Bloomberg LP and its owners.
