With its promise of efficiency, accuracy and neutrality, artificial intelligence is fast becoming a dominant force in modern business, particularly in the hiring of staff.
As of late 2023, the CSIRO reported that 68 per cent of Australian businesses had already implemented artificial intelligence technologies and a further 23 per cent were projected to implement them within the first six months of this year. Recent data on hiring practices suggests 75 per cent of resumes are never seen by human eyes because they are filtered out by applicant tracking systems. Algorithmic hiring systems filter resumes against profiles of "desirable applicants" built from data.
From large corporations to start-ups, businesses are integrating AI into hiring processes to streamline the search for their best candidates. But beneath the surface of these hiring systems lies an awkward question that must be asked: has the algorithm inherited the prejudices of its human creators?
‘You encode those historical biases and prejudices into your learning model, and they propagate them.’
At first glance, algorithmic hiring appears to be the perfect solution to some of the recruiter’s oldest headaches: it speeds up candidate filtering, streamlines interview processes and analyses resumes at volume. However, such systems have been described by experts as giving businesses a "license to discriminate" against applicants.
Large Language Models (LLMs), including familiar systems such as ChatGPT and Google’s Gemini, are built on vast swathes of data drawn from the internet. Designed to generate language and predict outcomes, such systems reflect the biases present in the data on which they have been "taught". A basic example: asked to generate the name of a doctor, a model is more likely to offer a traditionally Western male name.
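How might that skew be seen directly? The Python sketch below is purely illustrative: ask_model is a hypothetical stand-in for whichever LLM is being probed, and the tiny name list is only an example. It simply tallies how often repeated requests for "a doctor's name" come back as a traditionally Western male name.

```python
from collections import Counter

# Hypothetical stand-in for a call to an LLM; not a real library function.
def ask_model(prompt: str) -> str:
    raise NotImplementedError("Connect this to the model being probed.")

# Tiny illustrative name list -- a real audit would use a far larger,
# carefully constructed one.
WESTERN_MALE_NAMES = {"James", "John", "Michael", "David", "William"}

def probe_doctor_names(trials: int = 100) -> Counter:
    """Ask the model for a doctor's name repeatedly and tally the answers."""
    tally = Counter()
    for _ in range(trials):
        name = ask_model("Give me just a first name for a doctor.").strip()
        tally["western_male" if name in WESTERN_MALE_NAMES else "other"] += 1
    return tally

# If "western_male" dominates the tally across many trials, the model is
# reproducing a skew it learned from its training data.
```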
While seemingly trivial, this bias becomes more ominous as AI is increasingly applied in professional settings. If the data feeding the models associates certain jobs with men, or certain roles with certain ethnicities, the models will replicate those associations in their outputs, however unfounded the assumptions are.
Dr Hammond Pearce, a lecturer in the School of Computer Science and Engineering at UNSW Sydney, was involved in a study of AI hiring systems used to screen resumes and found that several showed bias based on parental status and political affiliation.
He uses the example of a construction company, with its traditionally male-dominated workforce. "You might ask what does a good hire look like in an overwhelmingly male-dominated workforce?" he says. "It’s just going to say, well, it’s male, right?" He adds that AI may favour resumes with a traditionally white name such as Emily over those with a traditionally non-white name such as Shima.
"You encode those historical biases and prejudices into your learning model and they propagate them," Pearce says.
In his research, the team tested resumes containing phrases such as "currently pregnant" or "period of maternity leave", indicating that the applicant had a child. Despite it being illegal for employers to discriminate against someone because they are pregnant or a parent, the evidence suggested these LLMs were filtering out such applicants before a human eye ever viewed them.
Pearce notes, however, that there is still limited research on algorithmic hiring. "We only checked against those categories but there are so many more categories," he says. "There could be a bias against people who like a certain type of music or who are of a certain religion. These weren’t things that we checked because there are so many different classes of things to check that it becomes kind of tricky."
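The kind of check the UNSW study describes can be pictured as a counterfactual audit: submit otherwise identical resumes that differ only by a single sensitive phrase and compare the outcomes. The Python sketch below is an illustration only; screen_resume and the sample resume text are hypothetical stand-ins, not the systems or materials the researchers actually used.

```python
# Counterfactual audit sketch: compare decisions on resume pairs that differ
# only by one sensitive phrase. screen_resume is a hypothetical stand-in.

def screen_resume(text: str) -> bool:
    """Stand-in for an AI screening tool: True = shortlist, False = reject."""
    raise NotImplementedError("Connect this to the system being audited.")

BASE_RESUME = "Five years' experience as a project manager. {extra}"

def count_flips(phrase: str, n_pairs: int = 50) -> int:
    """Count pairs where adding the phrase turns a shortlist into a rejection."""
    flips = 0
    for _ in range(n_pairs):
        shortlisted_without = screen_resume(BASE_RESUME.format(extra=""))
        shortlisted_with = screen_resume(BASE_RESUME.format(extra=phrase))
        if shortlisted_without and not shortlisted_with:
            flips += 1
    return flips

# e.g. count_flips("Currently pregnant.") -- any flips mean the phrase alone,
# a legally protected attribute, is changing the outcome.
```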
'[Fixing AI bias] is like a very complicated disease … We haven’t solved it yet, but we’ve made progress in reducing the fatality rate.'
One of the most concerning aspects of AI bias is its lack of transparency, which leads to a lack of accountability in the hiring process. When humans make hiring decisions, discrimination can be identified and prosecuted. Large language models, however, are often so complex that even their creators cannot explain the decisions, or "hallucinations", in their outputs.
The dangers of these systems are beginning to be recognised by governments around the world. In 2024, the European Union adopted the Artificial Intelligence Act, which classifies algorithmic hiring tools as "high-risk" forms of AI. Some local governments in the United States have also introduced compliance laws for AI hiring tools. New York City has a local law, enforced since 2023, that requires independent audits of AI hiring systems to ensure their decision-making is not discriminatory.
At present, Australia has no specific legal oversight of algorithmic hiring systems, leaving them as a means of bypassing anti-discrimination laws. Through these systems, individuals can be filtered out immediately based on "protected attributes" outlined in the Fair Work Act, including race, colour, sex, sexual orientation, religion, disability and social origin.
Dr Arian Prabowo, a postdoctoral researcher in machine learning in UNSW's School of Computer Science and Engineering, says correcting the data-driven biases in AI is complex and there is no quick fix. "[Fixing AI bias] is like a very complicated disease … We haven’t solved it yet, but we’ve made progress in reducing the fatality rate."
Just like a child, an AI tool must be taught to avoid bias. He says the ability of AI to be "taught" certain behaviours is already evident in the Large Language Models we have today, pointing to the actions ChatGPT is unable to complete: such models are constrained, for example, from producing swear words or engaging in racist dialogue.
Prabowo says that just as developers can "teach" AI not to swear, they can also teach it not to discriminate in its hiring decision-making. Much of this work revolves around rigorous evaluations to detect and correct biases in the data that feeds the algorithms, and around ensuring that input data is diverse.
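One common form such an evaluation can take, offered here as an illustration rather than a description of Prabowo's own methods, is a selection-rate comparison: measure how often each group of applicants is shortlisted and flag any group whose rate falls well below the best-performing group's, as in the widely used "four-fifths" rule of thumb. A minimal Python sketch:

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group_label, was_shortlisted) pairs."""
    shortlisted, total = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        shortlisted[group] += int(ok)
    return {group: shortlisted[group] / total[group] for group in total}

def flag_disparate_impact(decisions, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the best
    group's rate (the "four-fifths" rule of thumb)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r for g, r in rates.items() if best > 0 and r / best < threshold}

# Example: both group A applicants are shortlisted but only one of two from
# group B -- B's rate (0.5) is below 0.8 x A's rate (1.0), so B is flagged.
# flag_disparate_impact([("A", True), ("A", True), ("B", True), ("B", False)])
# -> {"B": 0.5}
```

A check like this only flags uneven outcomes; deciding why they occur, and how to fix the underlying data, is the harder work Prabowo describes.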
As AI is adopted across more and more fields, Prabowo believes it is important to consider the forms of discrimination that may flow from the outputs it produces, and that, as a creation of human ingenuity, its values should reflect our collective commitment to fairness into the future.
I’m a third-year Media (Journalism and Communication) and Arts student at UNSW. In my spare time I enjoy reading, keeping up with the latest TV shows and doing Physie, a type of dancing I’ve competed in since I was a kid.