Friday, March 26, 2021

AI Bias In Surveillance Of Transgender People

Imagine you go for a job interview and get rejected. You have all the skills they are looking for, you have good social skills, and you got glowing comments from the HR person, but then you get “THE LETTER”: we are sorry…
Facial recognition AI can’t identify trans and non-binary people
QUARTZ
By Amrita Khalid
October 16, 2019


Facial-recognition software from major tech companies is apparently ill-equipped to work on transgender and non-binary people, according to new research. A recent study by computer-science researchers at the University of Colorado Boulder found that major AI-based facial analysis tools—including Amazon’s Rekognition, IBM’s Watson, Microsoft’s Azure, and Clarifai—habitually misidentified non-cisgender people.

The researchers gathered 2,450 images of faces from Instagram, searching under the hashtags #woman, #man, #transwoman, #transman, #agenderqueer, and #nonbinary. They eliminated instances in which multiple individuals were in the photo, or where at least 75% of the person’s face wasn’t visible. The images were then divided by hashtag, amounting to 350 images in each group. Scientists then tested each group against the facial analysis tools of the four companies.
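
Roughly speaking, the filtering and grouping the researchers describe could be sketched in code like this (a sketch only; the Photo record, its fields, and the capping logic are my own illustrative assumptions, not the study's actual pipeline):

```python
# A rough sketch of the study's filtering and grouping as described above.
# The Photo record and its fields are assumptions for illustration only.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Photo:
    hashtag: str            # e.g. "#transwoman"
    num_faces: int          # number of people visible in the image
    face_visibility: float  # fraction of the face that is visible (0.0-1.0)

def build_groups(photos, per_group=350):
    """Keep single-person photos with at least 75% of the face visible,
    then bucket them by hashtag, capping each bucket at per_group images."""
    groups = defaultdict(list)
    for p in photos:
        if p.num_faces == 1 and p.face_visibility >= 0.75:
            if len(groups[p.hashtag]) < per_group:
                groups[p.hashtag].append(p)
    return groups
```

Each resulting group of 350 images would then be run through each company's facial-analysis tool.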
You would never know that you didn’t get the job because the AI decided you were male and therefore must be dishonest for “lying” about your gender.

Now, the way this kind of AI works is that they feed the computer images of items that fit the profile you are looking for and images that do not fit that profile.
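
Put in code, the idea is a classifier trained on positive and negative examples. Here is a minimal sketch of that idea, with made-up synthetic data (no real company's system is this simple):

```python
# Minimal sketch of training a classifier on examples that "fit the
# profile" and examples that don't. The data here is synthetic noise;
# a real system trains on curated photos, and it can only learn the
# patterns present in those photos.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

fits_profile = rng.normal(loc=1.0, size=(500, 64))    # stand-ins for matching images
does_not_fit = rng.normal(loc=-1.0, size=(500, 64))   # stand-ins for non-matching images

X = np.vstack([fits_profile, does_not_fit])
y = np.array([1] * 500 + [0] * 500)   # 1 = fits the profile, 0 = does not

model = LogisticRegression(max_iter=1000).fit(X, y)

# A face unlike anything in the training set gets a near coin-flip answer:
unfamiliar = rng.normal(loc=0.0, size=(1, 64))
print(model.predict_proba(unfamiliar))
```

If a kind of face rarely appears in the training images, the model has nothing to learn it from, which is exactly the problem the articles here describe.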


In the video, the woman found out that most of the photos they showed the AI were of white males, so the AI didn’t recognize a black woman. How many images of trans people did they feed the AI computer?
Artificial Intelligence Has a Problem With Gender and Racial Bias. Here’s How to Solve It
Time
By Joy Buolamwini
February 7, 2019


Machines can discriminate in harmful ways.

I experienced this firsthand, when I was a graduate student at MIT in 2015 and discovered that some facial analysis software couldn’t detect my dark-skinned face until I put on a white mask. These systems are often trained on images of predominantly light-skinned men. And so, I decided to share my experience of the coded gaze, the bias in artificial intelligence that can lead to discriminatory or exclusionary practices.
Suppose you walk into a bank and the AI says you are wearing a disguise, so they will not let you withdraw your funds or give you a mortgage.

Bias is real, bias is affecting people’s lives, and it has a human cost…
Real-life Examples of Discriminating Artificial Intelligence
Real-life examples of AI algorithms demonstrating bias and prejudice
Toward Data Science
By Terence Shin
June 4, 2020


Some say that it’s a buzzword that doesn't really mean much. Others say that it’s the cause of the end of humanity.

The truth is that artificial intelligence (AI) is starting a technological revolution, and while AI has yet to take over the world, there’s a more pressing concern that we’ve already encountered: AI bias.
[...]
1. Racism embedded in US healthcare
In October 2019, researchers found that an algorithm used on more than 200 million people in US hospitals to predict which patients would likely need extra medical care heavily favored white patients over black patients. While race itself wasn’t a variable used in this algorithm, another variable highly correlated to race was, which was healthcare cost history. The rationale was that cost summarizes how many healthcare needs a particular person has. For various reasons, black patients incurred lower health-care costs than white patients with the same conditions on average.
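
The mechanism here is worth spelling out: when a model scores “need” by predicted cost, any group that historically spent less for the same conditions gets scored as less needy. A toy demonstration with synthetic data (not the real hospital algorithm):

```python
# Toy demonstration of the proxy problem: "need" is ranked by cost,
# but one group historically incurs ~30% lower cost at the same need.
# All data is synthetic; this is not the actual hospital algorithm.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
need = rng.uniform(0, 10, n)      # actual medical need, identical across groups
group = rng.integers(0, 2, n)     # two patient groups, 0 and 1

# Group 1 incurs lower costs for the same level of need:
cost = need * np.where(group == 0, 1.0, 0.7)

# Suppose "extra care" goes to the top 20% by cost, as a stand-in for need:
selected = cost >= np.quantile(cost, 0.8)

for g in (0, 1):
    share = selected[group == g].mean()
    print(f"group {g}: {share:.0%} selected for extra care")
# Despite identical need, group 1 is selected far less often.
```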
[…]
2. COMPAS
Arguably the most notable example of AI bias is the COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) algorithm used in US court systems to predict the likelihood that a defendant would become a recidivist.
Due to the data that was used, the model that was chosen, and the process of creating the algorithm overall, the model predicted twice as many false positives for recidivism for black offenders (45%) as for white offenders (23%).
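
To make the quoted numbers concrete: a false-positive rate by group asks, among people who did not reoffend, what share the model wrongly flagged as high risk. A toy calculation matching the article's figures (the counts are illustrative, not the COMPAS data itself):

```python
# Toy calculation of group false-positive rates. The counts are chosen
# only to reproduce the percentages quoted above; they are not COMPAS data.
def false_positive_rate(false_positives: int, true_negatives: int) -> float:
    """Share of actual non-recidivists wrongly flagged as high risk."""
    return false_positives / (false_positives + true_negatives)

black_fpr = false_positive_rate(false_positives=45, true_negatives=55)
white_fpr = false_positive_rate(false_positives=23, true_negatives=77)

print(f"black defendants wrongly flagged: {black_fpr:.0%}")  # 45%
print(f"white defendants wrongly flagged: {white_fpr:.0%}")  # 23%
```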

3. Amazon’s hiring algorithm
Amazon’s one of the largest tech giants in the world. And so, it’s no surprise that they’re heavy users of machine learning and artificial intelligence. In 2015, Amazon realized that their algorithm used for hiring employees was found to be biased against women. The reason for that was because the algorithm was based on the number of resumes submitted over the past ten years, and since most of the applicants were men, it was trained to favor men over women.
These biases are directed at minorities… Black people, Asians, trans people, and others, along with a bias against women.

Someone who attended Fantasia Fair about ten years ago was, we thought, paranoid because she was afraid that AI would recognize her, but it turned out she was ten years ahead of us.
~~~~~~~~

I was watching the PBS show Independent Lens’s “Coded Bias,” which was about Ms. Buolamwini’s efforts to highlight the discrimination in AI.
