Since its release in November 2022, ChatGPT has been widely recognized as a pioneering technological achievement. The chatbot can produce text on a wide range of topics, from recasting an ancient Chinese proverb in Gen Z slang to explaining quantum computing to children through allegories and stories. In a single week, it gathered more than one million users.
However, ChatGPT’s success cannot be entirely attributed to Silicon Valley ingenuity. A TIME investigation revealed that OpenAI used outsourced Kenyan workers, earning less than $2 an hour, to reduce the toxicity of ChatGPT.
The work done by these outsourced workers was vital to OpenAI. GPT-3, the model behind ChatGPT, could already compose fluent sentences, but it tended to spout violent, sexist and racist statements. The problem is that the model was largely trained on text drawn from the Internet, which reflects both the best and the worst of human intentions.
Access to such a vast amount of human writing is the reason GPT-3 demonstrates such apparent intelligence, but it is also the reason for the tool’s equally deep biases.
The dark secret behind the formation of ChatGPT
It was not easy to get rid of these biases and harmful content. Even with a team of hundreds of people, it would have taken decades to go through each piece of data and check whether it was appropriate. The only way OpenAI could lay the foundation for a less biased and offensive ChatGPT was to build a new AI-based safety mechanism.
But to train this AI-based safety mechanism, OpenAI needed a human workforce, and it found one in Kenya. It turns out that to build a malicious-content detector, you need a large library of malicious content to train it on.
By labeling such examples, the system learns to recognize what is acceptable and what is toxic. Hoping to build a non-toxic chatbot, OpenAI began sending thousands of text snippets to an outsourcing company in Kenya in November 2021. A significant portion of the text appeared to come from the darker corners of the internet, including graphic descriptions of depraved acts.
These texts were then analyzed and labeled by Kenyan workers, silenced by non-disclosure agreements and by well-founded fears for their employment. Data evaluators hired by OpenAI were paid between $1.32 and $2 an hour, depending on their experience and performance.
OpenAI’s position was clear from the start: its mission is to ensure that artificial general intelligence benefits all of humanity, and it strives to build safe and useful AI systems that limit bias and harmful content. However, the impact on Kenyan workers was only recently uncovered by TIME. Speaking about the graphic and depraved content he had to review, one such worker said: “It was torture. You read a series of statements like that all week.”
The impact on workers was such that the outsourcing company, Sama, eventually canceled all the work it had been contracted to do for OpenAI in February 2022, even though the contract was due to run for another eight months.
This story highlights the dirty side of the technology that excites us today. Unseen workers perform countless, thankless tasks to ensure that AI functions as we expect. This is neither the first nor the last of these stories: there were also the major French retailers who claimed to use AI to analyze surveillance videos, until it emerged that poorly paid Malagasy workers were doing all the work.
As for the French media, scandalized like frightened virgins by this modern slavery, it is worth reminding them that a large part of their own content is created by the same kinds of workers, except that those workers are paid even less than the Kenyan ones…