Biggest controversies surrounding artificial intelligence: racism, sexism and ‘becoming sentient’

Editor 

From Google’s LaMDA to Microsoft’s Tay, AI models are often at the center of controversy. These are some of the biggest AI controversies of recent times.

 

Google’s LaMDA artificial intelligence (AI) model has been in the news because an engineer at the company believes the program has become sentient. But this is far from the first time an artificial intelligence program has sparked controversy; not even close.

AI is an all-encompassing term for computer systems that simulate human intelligence. In general, AI systems are trained by consuming large amounts of data and analyzing them for correlations and patterns. They then use these patterns to make predictions.
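To make that training-then-prediction loop concrete, here is a minimal sketch in Python using scikit-learn and a small public text dataset. The dataset, features and model are purely illustrative choices, not a description of how LaMDA, Tay or any of the systems discussed below were actually built.

```python
# Minimal illustration of the train-then-predict loop described above.
# Requires scikit-learn; the sample dataset is downloaded on first use.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# 1. Consume a large amount of labelled data.
train = fetch_20newsgroups(subset="train", categories=["sci.space", "rec.autos"])

# 2. Analyse it for correlations and patterns (here: word-frequency features).
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(train.data)
model = LogisticRegression(max_iter=1000).fit(X, train.target)

# 3. Use the learned patterns to make predictions about unseen text.
new_doc = ["The rocket launch was delayed by bad weather."]
print(train.target_names[model.predict(vectorizer.transform(new_doc))[0]])
```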
But sometimes that process goes awry, ending with results that range from the hilarious to the downright horrifying. These are some of the recent controversies surrounding artificial intelligence systems.

Google’s LaMDA is said to be ‘sentient’

Let’s start with the most recent controversy. Google engineer Blake Lemoine has been placed on administrative leave by the company after he claimed LaMDA had become sentient and had started reasoning like a human.

“If I didn’t know exactly what it is, what this computer program that we recently built is, I would think it’s a 7 or 8 year old who knows about physics. I think this technology will be amazing. I think it will benefit everyone. But maybe other people disagree and maybe we at Google shouldn’t be the ones making all the decisions,” Lemoine told the Washington Post, which first reported the story.

Lemoine worked with a colleague to present evidence of sentience to Google, but the company denied his claims. He then published alleged transcripts of his conversations with LaMDA in a blog post. Google dismissed his claims, pointing to how the company prioritizes minimizing such risks when developing products like LaMDA.

Microsoft’s AI chatbot Tay became racist and sexist

In 2016, Microsoft launched the AI chatbot Tay on Twitter. Tay was designed as a “conversational understanding” experiment: it was supposed to get smarter the more it conversed with people on Twitter, learning from what they tweeted in order to engage them better. But before long, Twitter users began tweeting all sorts of racist and misogynistic rhetoric at Tay. The bot absorbed these conversations and started producing its own versions of the hate speech, including tweets calling feminism a “cancer” and saying “hitler was right I hate the jews.” Unsurprisingly, Microsoft removed the bot from the platform fairly quickly. Peter Lee, Microsoft’s vice president of research at the time of the controversy, later wrote in a blog post that the company would only bring Tay back if engineers could figure out a way to prevent web users from influencing the chatbot in ways that undermine the company’s principles and values.
Amazon Rekognition identifies members of US Congress as criminals

In 2018, the American Civil Liberties Union (ACLU) conducted a test of Amazon’s facial recognition program, Rekognition. During the test, the software incorrectly identified 28 members of Congress as people who had previously been arrested for a crime. Rekognition is a face-matching program that Amazon offers to the public so anyone can match faces, and it is used by a number of US government agencies.

The ACLU used Rekognition to build a facial database and search tool from 25,000 publicly available arrest photos. It then searched this database against public photos of every member of the US House of Representatives and US Senate at the time, using Amazon’s default match settings. The search produced 28 incorrect matches. Moreover, the false matches came disproportionately from people of color, including six members of the Congressional Black Caucus. Although only 20 percent of members of Congress at the time were Black, 39 percent of the false matches were.
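The ACLU’s methodology can be approximated with Amazon’s own Rekognition API. The sketch below, written against the boto3 client, indexes reference photos into a face collection and then searches it with a query photo at Rekognition’s default 80 percent similarity threshold; the file names and collection ID are hypothetical placeholders, and the handful of photos here stands in for the ACLU’s 25,000 arrest images.

```python
# Sketch of an ACLU-style matching test against Amazon Rekognition (boto3).
# File names and the collection ID are hypothetical placeholders.
import boto3

client = boto3.client("rekognition", region_name="us-east-1")
COLLECTION = "arrest-photo-test"

client.create_collection(CollectionId=COLLECTION)

# Index the reference photos (the ACLU used 25,000 public arrest photos).
for path in ["mugshot_0001.jpg", "mugshot_0002.jpg"]:  # placeholder files
    with open(path, "rb") as f:
        client.index_faces(
            CollectionId=COLLECTION,
            Image={"Bytes": f.read()},
            ExternalImageId=path.replace(".jpg", ""),
        )

# Search the collection with a photo of a lawmaker, using the default
# 80 percent similarity threshold.
with open("congress_member.jpg", "rb") as f:  # placeholder file
    result = client.search_faces_by_image(
        CollectionId=COLLECTION,
        Image={"Bytes": f.read()},
        FaceMatchThreshold=80,
        MaxFaces=5,
    )

# Any entry returned here is a claimed match between the query photo
# and one of the indexed arrest photos.
for match in result["FaceMatches"]:
    print(match["Face"]["ExternalImageId"], round(match["Similarity"], 1))
```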
This served as a stark reminder of how AI systems can reflect the biases found in the data they were trained on.

Amazon’s secretive AI recruiting tool was biased against women

Amazon built an experimental AI recruiting tool with the goal of mechanizing the search for top talent, according to a Reuters report. The idea was to create the holy grail of recruiting AI: give the machine 100 resumes and have it pick out the top five. But by 2015, the team realized the system was not evaluating candidates in a gender-neutral way. Essentially, the program began prioritizing male candidates over female candidates.

This happened because the model was trained to filter job applications by looking for patterns in the resumes submitted to the company over a 10-year period. Reflecting male dominance in the tech industry, most of those resumes came from men. Because of this bias in the data, the system taught itself to prefer male candidates. It penalized resumes that contained the word “women’s,” as in “women’s chess team,” and it downgraded graduates of all-women’s colleges.

Amazon initially edited the programs to make them neutral to these particular terms. But even that was no guarantee that the machines would not find other ways of sorting candidates that could prove discriminatory, and Amazon eventually scrapped the program. In a statement to Reuters, the company said the tool was never actually used for recruitment.
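To see how a resume filter can teach itself this kind of bias, here is a toy sketch, not Amazon’s system: a simple bag-of-words classifier trained on a tiny, deliberately skewed set of made-up resumes ends up assigning a negative weight to the word “women.” In Amazon’s case the skew came from ten years of real submissions rather than a handcrafted sample, but the mechanism is the same: the historical labels, not the algorithm itself, carry the bias into the model.

```python
# Toy illustration (not Amazon's model): a classifier trained on historically
# skewed hiring data learns a negative weight for the token "women".
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: 1 = historically hired, 0 = historically rejected.
resumes = [
    "software engineer chess club python",          # hired
    "backend developer java chess club",            # hired
    "data engineer python open source",             # hired
    "software engineer women's chess club python",  # rejected
    "developer women's coding society java",        # rejected
    "python engineer women's college graduate",     # rejected
]
labels = [1, 1, 1, 0, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, labels)

# The default tokenizer turns "women's" into the token "women".
# Its learned weight is negative, i.e. the word is penalized.
idx = vectorizer.vocabulary_["women"]
print(model.coef_[0][idx])
```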
