Wednesday, July 11, 2018

The Bad And The Ugly

The “Good” in this bad and ugly post…

[Warning: this article is about derogatory words used to describe trans and gay people]

Yelp Finally Removes Anti-Transgender Slur From Search Feature
Yelp scores a perfect 100 on the Human Rights Campaign’s Corporate Equality Index. But, until Tuesday, Yelp’s search box suggestions included the anti-trans slur ‘tranny.’
The Daily Beast
By Samantha Allen
July 10, 2018

Restaurant reviewing app Yelp allows users to filter for gender-neutral restrooms. The company also scored a perfect 100 on the Human Rights Campaign’s Corporate Equality Index, which measures the LGBT-friendliness of workplaces.

But until earlier today, the Yelp search box across multiple devices would make suggestions that included the anti-transgender slur “tranny.”

The search behavior was first highlighted on Twitter Monday night by the transgender personals website Transgenderdate.com.

On devices tested by The Daily Beast Tuesday morning, beginning to type “tran-” in the Yelp search box brought up the suggestion “Tranny Bars” right after “Transmission Repair” and “Transportation.”
When it was brought to Yelp’s attention…
The Daily Beast asked Yelp for comment on Tuesday morning, and by mid-afternoon, the search behavior in the application had already started to change. Terms like “Shemale Bars” and “Shemale Clubs” were still suggestions as of late Tuesday afternoon, but the derogatory “Tranny” suggestions were gone and Yelp pledged to address the issue.

“Thank you for bringing this to our attention and allowing us to correct it,” a Yelp spokesperson said in a statement to The Daily Beast. “This is a machine-generated error and we are taking prompt action to remove it from our systems.”
[…]
With Yelp, the issue was not the filtering out of LGBT content, but the suggestion of insulting anti-transgender search terms on par with anti-gay slurs like “faggot.”
There have also been some problems with Artificial Intelligence, or AI…
Artificial Intelligence Has a Bias Problem, and It's Our Fault
From racist Twitter bots to unfortunate Google search results, deep-learning software easily picks up on biases. Here's what can be done about racism and sexism in AI algorithms.
PC Magazine
By Ben Dickson
June 14, 2018

In 2016, researchers from Boston University and Microsoft were working on artificial intelligence algorithms when they discovered racist and sexist tendencies in the technology underlying some of the most popular and critical services we use every day. The revelation went against the conventional wisdom that artificial intelligence doesn't suffer from the gender, racial, and cultural prejudices that we humans do.

The researchers made this discovery while studying word-embedding algorithms, a type of AI that finds correlations and associations among different words by analyzing large bodies of text. For instance, a trained word-embedding algorithm can understand that words for flowers are closely related to pleasant feelings. On a more practical level, word embedding understands that the term "computer programming" is closely related to "C++," "JavaScript" and "object-oriented analysis and design." When integrated in a resume-scanning application, this functionality lets employers find qualified candidates with less effort. In search engines, it can provide better results by bringing up content that's semantically related to the search term.
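To make that concrete, here is a minimal sketch of my own (not the researchers’ code) showing how cosine similarity between word vectors captures that kind of “relatedness.” The vectors below are invented purely for illustration; real embeddings such as word2vec or GloVe are learned from huge bodies of text.

import numpy as np

# Toy word vectors, invented for illustration only.
embeddings = {
    "computer_programming": np.array([0.9, 0.8, 0.1]),
    "c++":                  np.array([0.8, 0.9, 0.2]),
    "flower":               np.array([0.1, 0.2, 0.9]),
    "pleasant":             np.array([0.2, 0.1, 0.8]),
}

def cosine(a, b):
    # Cosine similarity: close to 1.0 means the words are strongly related.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embeddings["computer_programming"], embeddings["c++"]))     # high
print(cosine(embeddings["computer_programming"], embeddings["flower"]))  # low
print(cosine(embeddings["flower"], embeddings["pleasant"]))              # high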

The BU and Microsoft researchers found that the word-embedding algorithms had problematic biases, though—such as associating "computer programmer" with male pronouns and "homemaker" with female ones. Their findings, which they published in a research paper aptly titled "Man is to Computer Programmer as Woman is to Homemaker?", were among several reports to debunk the myth of AI neutrality and to shed light on algorithmic bias, a phenomenon that is reaching critical dimensions as algorithms become increasingly involved in our everyday decisions.
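The analogy in the paper’s title is literally a piece of vector arithmetic. Here is a hypothetical sketch, with vectors fabricated only to reproduce the effect the researchers describe: ask the embedding “man is to computer programmer as woman is to ___?” and the nearest answer comes back “homemaker.”

import numpy as np

# Fabricated vectors, chosen only to mimic the reported bias.
vecs = {
    "man":        np.array([ 1.0, 0.0, 0.2]),
    "woman":      np.array([-1.0, 0.0, 0.2]),
    "programmer": np.array([ 0.9, 1.0, 0.1]),
    "homemaker":  np.array([-0.9, 1.0, 0.1]),
    "nurse":      np.array([-0.7, 0.3, 0.8]),
}

def nearest(target, exclude):
    # Return the word whose vector is most similar to the target.
    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in vecs if w not in exclude),
               key=lambda w: cos(vecs[w], target))

# "man is to programmer as woman is to ___?"
query = vecs["programmer"] - vecs["man"] + vecs["woman"]
print(nearest(query, exclude={"programmer", "man", "woman"}))  # -> homemaker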
An article behind the paywall at New Scientist has an intriguing tidbit…
Something is rotten at the heart of artificial intelligence. Machine learning algorithms that spot patterns in huge datasets hold promise for everything from recommending if someone should be released on bail to estimating the likelihood of a driver having a car crash, and thus the cost of their insurance.

But these algorithms also risk being discriminatory by basing their recommendations on categories like someone’s sex, sexuality, or race. So far, all attempts to de-bias our algorithms have failed.
So is it now not only us, but all minorities, who are going to be discriminated against by a machine?

And when we are discriminated against, will the excuse be… “Oh sorry, it wasn’t us, the computer did it”?


Update 10:05AM
I just came across another article…
Data-driven discrimination – a new challenge for civil society
LSE Media Policy Project blog, London School of Economics and Political Science
By Jędrzej Niklas and Seeta Peña Gangadharan
July 5, 2018

In recent years, debate on algorithms, artificial intelligence, and automated decision making has stoked public concern, panic, and occasional outrage. While such innovations are very often shown in a positive light, there are also stories of vulnerable groups who struggle because of discriminatory biases embedded in the technologies. More often than not, public discourse presents these problems in a distinctive US context. In our new report “Between Antidiscrimination and Data: Understanding Human Rights Discourse on Automated Discrimination in Europe”, we make European perspectives on data-driven systems visible, through the lenses of 28 civil society organisations (CSOs) active in the field of human rights and social justice in 9 EU countries.

How do algorithms discriminate?
We began our study by reviewing the problem of algorithmic or data-driven discrimination. In a very broad sense, algorithms are encoded procedures or instructions. They often use data as their main ingredient (or input), transforming these inputs into a desired output, based on specific calculations. Automated systems based on algorithms are complicated and vary in character, purpose, and sophistication. The variety of systems also means that algorithmic discrimination can arise for various reasons.

To run, algorithms need data. But data can be poorly selected, incorrect, incomplete or outdated, and can even incorporate historical biases. One of the early examples (1988) of this problem was the case of St. George’s Medical School in the United Kingdom. An automated system was used to screen the incoming applications from potential students. Modelled on previous job recruitment data, the system incorporated historical biases in its analytical process and discriminated against women and people with non-European names.
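To see how that happens, here is an illustrative sketch of my own (not the St. George’s system), assuming scikit-learn is available: a classifier fitted to reproduce past, biased admission decisions ends up encoding the bias itself. Every number below is fabricated.

from sklearn.tree import DecisionTreeClassifier

# Fabricated "historical" records: [exam_score, is_female].
# In this made-up past, equally qualified women were rejected.
X = [[90, 0], [90, 1], [85, 0], [85, 1], [60, 0], [60, 1]]
y = [1, 0, 1, 0, 0, 0]  # 1 = admitted, 0 = rejected

# "Train" an automated screen to reproduce the old decisions.
screen = DecisionTreeClassifier().fit(X, y)

# Two new applicants with identical scores, differing only in gender:
print(screen.predict([[88, 0], [88, 1]]))  # [1 0] -- the old bias is reproduced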

Concerns not only relate to the quality of input data but also extend to the design of the algorithm that is using those inputs. Programming decisions are essentially human judgments, and reflect a vision about how the world ought to be. For example, humans must decide on error types and rates for algorithmic models. In other words, someone has to decide whether to measure the algorithmic “reliability” in terms of the cases wrongly included in an algorithmic decision (e.g., false positives) or wrongly excluded (e.g., false negatives) from an analytic model. Someone also needs to decide what an acceptable level of wrongful inclusion or exclusion might be.
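In code, that human judgment looks as mundane as choosing which of two numbers to optimise. A minimal sketch of my own, with invented counts:

def error_rates(true_pos, false_pos, true_neg, false_neg):
    # "Wrongly included" vs. "wrongly excluded" -- someone has to decide which
    # of these matters more, and what level of each is acceptable.
    false_positive_rate = false_pos / (false_pos + true_neg)
    false_negative_rate = false_neg / (false_neg + true_pos)
    return false_positive_rate, false_negative_rate

fpr, fnr = error_rates(true_pos=80, false_pos=30, true_neg=70, false_neg=20)
print(f"false positive rate: {fpr:.2f}")  # 0.30 -- 30% of negatives wrongly flagged
print(f"false negative rate: {fnr:.2f}")  # 0.20 -- 20% of positives wrongly missed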
The article goes on in detail about the whys & hows of AI bias, but it all boils down to “garbage in, garbage out.”
