A Kenya-based data labeling team, managed by San Francisco firm Sama
Beginning in November 2021, OpenAI sent tens of thousands of text samples to the employees, who were tasked with combing the passages for instances of child sexual abuse, bestiality, murder, suicide, torture, self-harm, and incest, TIME reported. Members of the team spoke of having to read hundreds of these entries a day for hourly wages that ranged from $1 to $2, or a $170 monthly salary; some employees felt that their jobs were “mentally scarring” and a certain kind of “torture.”
Sama employees reportedly were offered wellness sessions with counselors, as well as individual and group therapy, but several employees interviewed said the reality of mental healthcare at the company was disappointing and inaccessible. The firm responded that it took the mental health of its employees seriously.
The TIME investigation also discovered that the same group of employees was given additional work compiling and labeling an immense set of graphic — and seemingly increasingly illegal — images for an undisclosed OpenAI project. Sama ended its contract with OpenAI in February 2022. By December, ChatGPT would sweep the internet as the next wave of innovative, conversational AI.
At the time of its launch, ChatGPT was noted for having a
The ethical complexity of AI
While the news of OpenAI’s hidden workforce is disconcerting, it’s not entirely surprising: The ethics of human-based content moderation isn’t a new debate, especially on social media platforms toying with the line between free posting and protecting their user bases. In 2021, the New York Times reported on
Content moderation has even become the subject of psychological horror and post-apocalyptic tech media, such as Dutch author Hanna Bervoets’s 2022 thriller We Had to Remove This Post, which chronicles the mental breakdown and legal turmoil of a company quality assurance worker. For these characters, and for the real people behind the work, the perversions of a tech- and internet-based future leave lasting trauma.
ChatGPT’s rapid takeover, and the successive wave of AI art generators, poses several questions to a general public more and more willing to hand over their data,
The answers to these are both obvious and morally complex. Chats are not
One thing is clear: The rapid rise of AI as the next technological frontier continues to pose new ethical quandaries about the creation and application of tools that replicate human interaction at a real human cost.
If you have experienced sexual abuse, call the free, confidential National Sexual Assault hotline at 1-800-656-HOPE (4673), or access 24-7 help online by visiting