How thousands of overworked, underpaid humans train Google's AI to seem smart
Briefly

"On her first day a week later, however, her expectations went bust. Instead of writing words herself, Sawyer's job was to rate and moderate the content created by artificial intelligence. The job initially involved a mix of parsing through meeting notes and chats summarized by Google's Gemini, and, in some cases, reviewing short films made by the AI. On occasion, she was asked to deal with extreme content, flagging violent and sexually explicit material generated by Gemini for removal, mostly text."
"Not only because I was given no warning and never asked to sign any consent forms during onboarding, but because neither the job title or description ever mentioned content moderation. The pressure to complete dozens of these tasks everyday, each within 10 minutes of time, has led Sawyer into spirals of anxiety and panic attacks, she says without mental health support from her employer."
In spring 2024, Rachael Sawyer, a technical writer from Texas, accepted a job titled "writing analyst" and was immediately assigned to rate and moderate AI-generated content instead of writing it. She reviewed meeting summaries and short films produced by Google's Gemini and flagged violent and sexually explicit material, mostly text, for removal. Onboarding included no warnings or consent forms, and the job description never mentioned content moderation. The role demanded dozens of tasks a day, each to be completed in roughly ten minutes. The workload and exposure to distressing material caused anxiety and panic attacks, she says, without employer-provided mental-health support. Google contracts thousands of AI raters through firms such as Hitachi's GlobalLogic.
Read at www.theguardian.com