So, just read something disturbing about ChatGPT.
Long story short, OpenAI contracted with underpaid workers in Kenya to identify and tag abusive and illegal content all day long, so that material could be fed back into ChatGPT to help it recognize and filter out the ugly stuff.
ChatGPT basically ate the internet to produce what it does, but, as anyone familiar with the internet knows, the most vile and evil stuff imaginable is part of that same data set. Rather than attempt the impossible task of scrubbing the bad stuff out of the training data, ChatGPT has a secondary model trained solely on those nasty labeled data sets, so it can recognize the really awful stuff and censor it out of ChatGPT’s original responses to queries.
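For the technically curious, the pattern being described is roughly “train a separate classifier only on the human-labeled nasty examples, then use it to screen the main model’s output before anyone sees it.” Here’s a toy sketch of that idea; every name, data point, and threshold below is invented for illustration, and the real system is obviously far larger and more sophisticated:

```python
# Toy sketch of the "secondary model" idea: a separate classifier, trained only
# on labeled harmful-vs-benign examples, screens the main model's output.
# All data, names, and thresholds here are made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Stand-in for the human-labeled dataset: 1 = harmful, 0 = benign.
labeled_texts = [
    "extremely violent threat against a person",
    "step-by-step banana bread recipe",
    "detailed description of abuse",
    "review of a new laptop",
]
labels = [1, 0, 1, 0]

# The moderation classifier is trained ONLY on this labeled data,
# not on the whole internet like the main model.
moderation_model = make_pipeline(TfidfVectorizer(), LogisticRegression())
moderation_model.fit(labeled_texts, labels)

def filtered_reply(generate_reply, prompt, threshold=0.5):
    """Generate a reply, then censor it if the moderation model flags it."""
    reply = generate_reply(prompt)  # the big "ate the internet" model
    p_harmful = moderation_model.predict_proba([reply])[0][1]
    if p_harmful >= threshold:
        return "[response withheld by content filter]"
    return reply
```

The point, of course, is that someone had to produce those labels in the first place, which is where the workers in the article come in.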
The article starts out fairly tame but then goes far beyond typical Global South labor exploitation.
Like, ALL the content warnings on the link below.
I swear you can’t use or buy anything without a hidden moral crisis these days.
ETA: Oh no… that was an unpleasant rabbit hole to end the day with…
Apparently, it isn’t just OpenAI paying pennies for people to read & label horrifying content. Google, Meta, & Amazon employ literal refugee camp residents to label all of their nasty data. Parody really is dead.
Again, clickers beware.