Monday, February 27, 2023

So You Didn’t Get The Job.

And in the back of your mind you wonder: why? Was there any bias in why I didn’t get the job? Was it because I’m trans?

These are the questions that we wonder about, and now there is a new villain… AI.

AI has been in the news lately thanks to ChatGPT. I wrote about it in “An Experiment: ChatGPT & Trans Children” and in “AI Bias In Surveillance Of Transgender People,” but AI is being used in so many other places and applications. PBS’s Independent Lens had a show about AI discrimination, “Coded Bias.”

Premiered March 22, 2021
Directed by Shalini Kantayya
Coded Bias exposes prejudices and threats to civil liberty in facial recognition algorithms and artificial intelligence.

ABOUT THE DOCUMENTARY

In an increasingly data-driven, automated world, the question of how to protect individuals’ civil liberties in the face of artificial intelligence looms larger by the day. Coded Bias follows M.I.T. Media Lab computer scientist Joy Buolamwini, along with data scientists, mathematicians, and watchdog groups from all over the world, as they fight to expose the discrimination within algorithms now prevalent across all spheres of daily life. 

While conducting research on facial recognition technologies at the M.I.T. Media Lab, Buolamwini, a “poet of code,” made the startling discovery that some algorithms could not detect dark-skinned faces or classify women with accuracy. This led to the harrowing realization that the very machine-learning algorithms intended to avoid prejudice are only as unbiased as the humans and historical data programming them.

I don’t know if you read about lawyers being barred from Madison Square Garden by facial recognition. And companies are using AI recognition software for employment too.

AI tools fail to reduce recruitment bias - study
BBC News
By Chris Vallance
13 October 2022


Artificially intelligent hiring tools do not reduce bias or improve diversity, researchers say in a study.

"There is growing interest in new ways of solving problems such as interview bias," the Cambridge University researchers say, in the journal Philosophy and Technology

The use of AI is becoming widespread - but its analysis of candidate videos or applications is "pseudoscience".

A professional body for human resources told BBC News AI could counter bias.

In 2020, the study notes, an international survey of 500 human-resources professionals suggested nearly a quarter were using AI for "talent acquisition, in the form of automation".

But using it to reduce bias is counter-productive and, University of Cambridge's Centre for Gender Studies post-doctoral researcher Dr Kerry Mackereth told BBC News, based on "a myth".

"These tools can't be trained to only identify job-related characteristics and strip out gender and race from the hiring process, because the kinds of attributes we think are essential for being a good employee are inherently bound up with gender and race," she said.

But the question remains: what about discrimination against minorities?

In 2018, for example, Amazon announced it had scrapped the development of an AI-powered recruitment engine because it could detect gender from CVs and discriminated against female applicants.

We know that it discriminates against women and Black people, but what about trans people?
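
To see how something like the Amazon case happens, here is a minimal sketch in Python. It is not Amazon’s code; the made-up resumes, the hired/rejected labels, and the crude word score are all my own assumptions. It only shows the mechanism the BBC article describes: gender is never an explicit input, yet a tool trained on biased hiring history learns words that stand in for it.

# A hypothetical, stripped-down sketch of how a resume model trained on biased
# history learns gender as a proxy. The resumes and the "hired" labels below
# are made up; they stand in for years of past hiring decisions made by humans.
from collections import defaultdict

history = [
    ("captain of men's rugby team, python developer", True),
    ("men's chess club president, data analyst", True),
    ("python developer, hiking club", True),
    ("captain of women's rugby team, python developer", False),
    ("women's chess club president, data analyst", False),
]

# For each word, compare how often it appears in hired vs. rejected resumes.
hired_counts, rejected_counts = defaultdict(int), defaultdict(int)
for text, hired in history:
    for word in text.replace(",", "").split():
        (hired_counts if hired else rejected_counts)[word] += 1

# A crude "score": positive means the word is associated with being hired.
for word in sorted(set(hired_counts) | set(rejected_counts)):
    score = hired_counts[word] - rejected_counts[word]
    print(f"{word:12s} {score:+d}")

# Gender is never an explicit input, but "women's" comes out negative and
# "men's" positive, because the historical decisions the model learns from
# were already biased. A real system uses far more data and a real model,
# but the mechanism is the same.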

Can bots discriminate? It's a big question as companies use AI for hiring
NPR
By Andrea Hsu
January 31, 2023


AI may be the hiring tool of the future, but it could come with the old relics of discrimination.

With almost all big employers in the United States now using artificial intelligence and automation in their hiring processes, the agency that enforces federal anti-discrimination laws is considering some urgent questions:

How can you prevent discrimination in hiring when the discrimination is being perpetuated by a machine? What kind of guardrails might help?

Some 83% of employers, including 99% of Fortune 500 companies, now use some form of automated tool as part of their hiring process, said the Equal Employment Opportunity Commission's chair Charlotte Burrows at a hearing on Tuesday titled "Navigating Employment Discrimination in AI and Automated Systems: A New Civil Rights Frontier," part of a larger agency initiative examining how technology is used to recruit and hire people.

Whoa! And we still don’t know if AI discriminates against us.

Resume scanners, chatbots and video interviews may introduce bias

Last year, the EEOC issued some guidance around the use of cutting-edge hiring tools, noting many of their shortcomings.

What about us?

Take, for example, a video interview that analyzes an applicant's speech patterns in order to determine their ability to solve problems. A person with a speech impediment might score low and automatically be screened out.

Or, a chatbot programmed to reject job applicants with gaps in their resume. The bot may automatically turn down a qualified candidate who had to stop working because of treatment for a disability or because they took time off for the birth of a child.

Older workers may be disadvantaged by AI-based tools in multiple ways, AARP senior advisor Heather Tinsley-Fix said in her testimony during the hearing.

Still nothing about us.
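
To make the NPR examples concrete, here is another minimal sketch, again hypothetical: a resume-gap screen like the chatbot described above. The six-month cutoff and the Applicant fields are my own assumptions, not any vendor’s actual rule. The point is that the rule never mentions disability, parenthood, or being trans, yet it quietly screens those people out.

# A hypothetical sketch of the resume-gap rule NPR describes: reject anyone
# whose work history has a gap longer than a fixed cutoff. Not a real product.
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    gap_months: int   # longest gap between jobs on the resume
    gap_reason: str   # the screener never sees or uses this field

def passes_screen(applicant: Applicant, max_gap_months: int = 6) -> bool:
    # The rule looks neutral: it only checks the length of the gap.
    return applicant.gap_months <= max_gap_months

applicants = [
    Applicant("A", gap_months=2, gap_reason="between jobs"),
    Applicant("B", gap_months=9, gap_reason="medical leave, including transition-related care"),
    Applicant("C", gap_months=12, gap_reason="took time off for the birth of a child"),
]

for a in applicants:
    decision = "advances" if passes_screen(a) else "rejected automatically"
    print(f"{a.name}: {decision} (reason for gap: {a.gap_reason})")

# B and C are rejected before any human ever sees why the gaps exist, which
# is how a "neutral" rule ends up discriminating.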

In an article in Forbes last year,

What once was touted as the solution to hiring bias, both conscious and unconscious, AI is now considered by some government leaders as having the potential to “reproduce and deepen systemic patterns of discrimination.” Local governments from New York City to Illinois have enacted legislation regulating the use of AI in hiring, hoping to curb the biases AI might create.

But private companies still use it, and the question remains… does it discriminate against us?

On a lawyer's website, I found:

The EEOC appears concerned that similar algorithms and other artificial intelligence (AI) used in employment and hiring might similarly have unforeseen adverse consequences on the employment process. In 2016, the EEOC began to examine the issue of AI, people analytics, and big data in hiring and other employment decisions. This year, the EEOC launched a new initiative to ensure that these emerging tools comply with federal civil rights laws that the agency enforces, including Title VII. The EEOC Chair stated that “the EEOC is keenly aware that these tools may mask and perpetuate bias or create new discriminatory barriers to jobs. We must work to ensure that these new technologies do not become a high-tech pathway to discrimination.”

And I haven’t found anything about trans people and AI facial recognition or employment AI. I think there is no data because no one cares about us; we are too small of a minority.
