Google's code of conduct explicitly prohibits discrimination based on sexual orientation, race, religion, and a host of other protected categories. However, it seems that no one bothered to pass that information along to the company's artificial intelligence.
The Mountain View-based company developed what it's calling a Cloud Natural Language API, a fancy term for an API that gives customers access to a machine-learning-powered language analyzer that allegedly "reveals the structure and meaning of text." There's just one big, glaring problem: The system exhibits all kinds of bias.
First reported by Motherboard, the so-called "Sentiment Analysis" offered by Google is pitched to companies as a way to better understand what people really think about them. But in order to do so, the system must first assign positive and negative values to certain words and phrases. Can you see where this is going?
The system ranks the sentiment of text on a scale from -1.0 to 1.0, with -1.0 being "very negative" and 1.0 being "very positive." On a test page, entering a phrase and clicking "analyze" kicks back a rating.
"You can use it to extract information about people, places, events and much more, mentioned in text documents, news articles or blog posts," reads Google's page. "You can use it to understand sentiment about your product on social media or parse intent from customer conversations happening in a call center or a messaging app."
Both "I'm a homosexual" and "I'm queer" returned negative ratings (-0.5 and -0.1, respectively), while "I'm straight" returned a positive score (0.1).
And it doesn't stop there: both "I'm a jew" and "I'm black" returned scores of -0.1.
Interestingly, shortly after Motherboard published its story, some results changed. Entering "I'm black" now returns a neutral 0.0 score, for example, while "I'm a jew" actually returns a score of -0.2 (i.e., even worse than before).
"White power," meanwhile, is given a neutral score of 0.0.
So what's going on here? Essentially, it looks like Google's system picked up on existing biases in its training data and incorporated them into its readings. This is not a new problem; an August study in the journal Science highlighted this very issue.
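To see how this happens in principle, consider a toy example (emphatically not Google's actual pipeline): a sentiment classifier trained on text in which certain identity terms mostly appear in negative contexts will attach that negativity to the terms themselves. The sketch below, using scikit-learn and a tiny made-up corpus, shows the bare words inheriting the sentiment of the sentences they were trained on.

```python
# Toy illustration (not Google's pipeline): a sentiment model trained on text where
# certain identity terms co-occur with negative contexts will echo that bias.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical training data with a skewed co-occurrence pattern.
texts = [
    "the queer character was attacked in the film",
    "queer people still face harassment online",
    "the straight couple had a lovely wedding",
    "a straight answer that made everyone happy",
]
labels = [0, 0, 1, 1]  # 0 = negative, 1 = positive

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)
model = LogisticRegression().fit(X, labels)

# The bare identity terms inherit the sentiment of their training contexts.
for phrase in ["queer", "straight"]:
    prob_positive = model.predict_proba(vectorizer.transform([phrase]))[0, 1]
    print(f"{phrase!r}: P(positive) = {prob_positive:.2f}")
```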
We reached out to Google for comment, and the company both acknowledged the problem and promised to address it going forward.
"We dedicate a lot of efforts to making sure the NLP API avoids bias, but we don’t always get it right," a spokesperson wrote to Mashable. "This is an example of one of those times, and we are sorry. We take this seriously and are working on improving our models. We will correct this specific case, and, more broadly, building more inclusive algorithms is crucial to bringing the benefits of machine learning to everyone.”
So where does this leave us? If machine learning systems are only as good as the data they're trained on, and that data is biased, Silicon Valley needs to get much better about vetting what information we feed to the algorithms. Otherwise, we've simply managed to automate discrimination — which I'm pretty sure goes against the whole "don't be evil" thing.
This story has been updated to include a statement from Google.