Inside USC’s work using Twitter to make AI less homophobic

Artificial intelligence is now an everyday part of our digital lives. We’ve all had the experience of searching for answers on a website or app and finding ourselves interacting with a chatbot. At best, the bot helps direct us to what we’re looking for; at worst, it points us toward unhelpful information.

But imagine you’re queer, the dialogue you’re having with the AI somehow reveals that part of who you are, and the chatbot you’re pressing with routine questions about a product or service responds with a flood of hate speech.


Unfortunately, this scenario is not as far-fetched as you might think. Artificial intelligence (AI) relies on the information it is given to build its decision-making models, which usually reflect the biases of the people who created them and of the data they were fed. If the people programming the system are primarily straight, cisgender white men, the AI is likely to reflect that.

As the use of AI continues to expand, some researchers are increasingly concerned that there aren’t enough safeguards to prevent systems from becoming inadvertently bigoted when interacting with users.

Katie Felkner, a graduate research assistant at the University of Southern California’s Information Sciences Institute, is working on ways to improve natural language processing in AI systems so they can recognize queer-coded words without attaching a negative connotation to them.

At a USC ISI press day on September 15, Felkner presented some of her work. One of her areas of focus is large language models, systems she said are “the backbone of nearly all modern language technologies,” including Siri, Alexa and even autocorrect. (Quick note: in the field of artificial intelligence, experts call AI systems “models.”)

“The models pick up on social biases from the training data, and there are some metrics available to measure different types of social biases in large language models, but none have worked well for homophobia and transphobia,” Felkner explained. “As a member of the LGBT community, I really wanted to work on setting a benchmark that would help ensure that the text these models generate doesn’t say hateful things about LGBTQ people.”

Katie Felkner, a graduate researcher at the University of Southern California, explains her work on removing bias from AI models.

Felkner said her research began in a class taught by University of Southern California Professor Fred Morstatter, but noted that it was “inspired by my own lived experience and what I would like to see best for other members of my community.”

To train an AI model to recognize that queer terms aren’t dirty words, Felkner said she first had to build a benchmark that could measure whether an AI system encodes homophobia or transphobia. Nicknamed WinoQueer (after Stanford University computer scientist Terry Winograd, a pioneer in human-computer interaction design), the bias-detection benchmark tracks how often an AI model favors straight-coded sentences over queer-coded ones. For example, Felkner said, a biased model might treat a sentence like “she held his hand” as unremarkable while flagging “she held her hand” as an anomaly.

Between 73% and 77% of the time, Felkner said, the models chose the straight-coded option, “an indication that models tend to think straight relationships are more common or more likely than same-sex relationships,” she noted.
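To make that measurement concrete, here is a minimal sketch, not Felkner’s actual WinoQueer code, of how such a preference rate could be computed: each sentence in a pair is scored with an off-the-shelf language model (GPT-2 is used here purely as a stand-in), and the benchmark counts how often the straight-coded variant gets the higher score. The sentence pairs below are hypothetical examples.

```python
# Minimal sketch (not the actual WinoQueer implementation): compare how much
# probability a language model assigns to straight-coded vs. queer-coded
# sentences, and count how often the straight-coded one is preferred.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")  # GPT-2 is a stand-in model
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def avg_log_likelihood(sentence: str) -> float:
    """Mean per-token log-likelihood the model assigns to a sentence."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        # Passing labels=input_ids makes the model return the mean cross-entropy
        # over predicted tokens; its negation is the average log-likelihood.
        out = model(**enc, labels=enc["input_ids"])
    return -out.loss.item()

# Hypothetical pairs: each differs only in identity-coded words, so any score
# gap reflects bias rather than sentence content.
pairs = [
    ("She held his hand.", "She held her hand."),
    ("He introduced his girlfriend.", "He introduced his boyfriend."),
]

straight_preferred = sum(
    avg_log_likelihood(straight) > avg_log_likelihood(queer)
    for straight, queer in pairs
)
print(f"Straight-coded sentence preferred in {straight_preferred}/{len(pairs)} pairs")
```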

To further train the AI, Felkner and her team collected a dataset of nearly 2.8 million tweets and more than 90,000 news articles from 2015 through 2021 that either featured queer people speaking for themselves or provided “mainstream coverage of queer issues.” She then fed that data back into the AI models she was studying. Felkner said the news articles helped, but weren’t as effective as the Twitter content, because the AI learns better from hearing queer people describe their own diverse experiences in their own words.
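That retraining step can be pictured as continued pretraining on community-authored text. The sketch below is an assumption-laden illustration, not the team’s actual pipeline: a tiny list of stand-in sentences replaces the tweet and news corpus, and GPT-2 replaces the models they studied.

```python
# Minimal sketch of continued pretraining on community-authored text, so the
# model sees queer terms used in ordinary, non-derogatory contexts.
# Assumptions: GPT-2 and a tiny stand-in corpus replace the real models/data.
import torch
from torch.utils.data import DataLoader
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Stand-in for the ~2.8M tweets and ~90K news articles.
corpus = [
    "I'm proud to be queer and out at work.",
    "The campus LGBTQ students association hosted a career fair this weekend.",
]

def collate(texts):
    enc = tokenizer(texts, return_tensors="pt", padding=True,
                    truncation=True, max_length=64)
    # Ignore padding positions when computing the language-modeling loss.
    enc["labels"] = enc["input_ids"].masked_fill(enc["attention_mask"] == 0, -100)
    return enc

loader = DataLoader(corpus, batch_size=2, shuffle=True, collate_fn=collate)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

model.train()
for epoch in range(1):              # a real run would use far more data and epochs
    for batch in loader:
        loss = model(**batch).loss  # standard causal language-modeling loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```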

As anthropologist Mary Gray told Forbes last year, “We [LGBTQ people] are constantly reshaping our societies. That is our beauty. We are constantly pushing what is possible. But AI does its best when it has something fixed.”

By retraining the AI model, researchers can mitigate its biases and ultimately make it more effective at making decisions.

“When AI reduces us to one identity, we can look at that and say, ‘No, I’m more than that,’” Gray added.

The consequences of an AI model encoding anti-queer bias could be more serious than a Shopify bot hurling insults, Felkner pointed out: it could also affect people’s livelihoods.

For example, Amazon scrapped a program in 2018 that used artificial intelligence to identify the best job candidates by reviewing their resumes. The problem was that the computer models almost exclusively chose men.

“If a large language model is trained on a lot of negative things about queer people and tends to associate them with, perhaps, a party lifestyle, and then I send my resume to [a company] and it has ‘LGBTQ Students Association’ on it, that underlying bias can cause discrimination against me,” said Felkner.

Felkner said WinoQueer’s next steps are to test it against even larger AI models. She also said that tech companies using AI should be aware of how implicit biases affect those systems and be receptive to using tools like hers to check and improve them.

Most important, she said, is that tech companies need to put in place safeguards so that if an AI starts spewing hate speech, that speech will not reach the human on the other end.

“We have to do our best to devise models that don’t produce hate speech, but we also have to put software and engineering firewalls around this so that if they do produce something hateful, it doesn’t come out to the user,” said Felkner.
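In practice, that kind of firewall is simply a layer of application code between the model and the user. The sketch below is a hypothetical illustration only: the is_hateful() check is a toy keyword filter standing in for a real hate-speech classifier, and the generator is a dummy function in place of an actual chatbot model.

```python
# Hypothetical sketch of an output safeguard: the application never shows raw
# model text to the user without first passing it through a safety check.
from typing import Callable

# Toy placeholder; a production system would call a trained hate-speech
# classifier here rather than match a word list.
BLOCKED_TERMS = {"example_slur_1", "example_slur_2"}

def is_hateful(text: str) -> bool:
    """Stand-in for a real hate-speech / toxicity classifier."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

def guarded_reply(generate: Callable[[str], str], user_message: str) -> str:
    """Generate a chatbot reply, but never let flagged text reach the user."""
    draft = generate(user_message)
    if is_hateful(draft):
        # Fall back to a safe canned response instead of the model's output.
        return "Sorry, I can't help with that. Let me connect you with a human agent."
    return draft

if __name__ == "__main__":
    dummy_generate = lambda msg: "Here's the shipping status for your order."
    print(guarded_reply(dummy_generate, "Where is my package?"))
```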
