New York City Introduces New Bill To Combat Horrid AI Bias


Did you know that a lot of technology has bias built into it? It's rarely done on purpose; more often it's a reflection of the people building the technology. In practice, that means products end up designed around your average white guy instead of basically anybody else, and it causes problems with automation and plenty of other things besides.

Thankfully, this is something people are becoming aware of. The White House, for example, has proposed an AI Bill of Rights. New York City, meanwhile, has just passed a bill designed to combat the issue, specifically when it comes to AI hiring technology.

New York City Has Passed A New Anti-Bias Bill

As we move towards a future that some would describe as "cyberpunk but without any of the cool stuff," a lot of the things we're doing, and the advances we're making, need to be checked. It's evident when you look at tech like automated soap dispensers, which can fail to work for people of color when they've only ever been designed and tested around white people's hands.

As we try to introduce AI into more and more things (not that it's really full AI; it's closer to plain machine learning), we'll run into yet more issues. That's because this tech is now being used in recruitment, sifting through CVs on behalf of the people hiring new workers.

The New York City bill reads, "This bill would require that a bias audit be conducted on an automated employment decision tool prior to the use of said tool. The bill would also require that candidates or employees that reside in the city be notified about the use of such tools in the assessment or evaluation for hire or promotion, as well as, be notified about the job qualifications and characteristics that will be used by the automated employment decision tool. Violations of the provisions of the bill would be subject to a civil penalty." The aim here is to stop companies from using hiring tools that might be bigoted or prejudiced against certain groups of candidates.


What Causes These Issues?

These kinds of bias are baked in as the tech is designed, and often not on purpose. Let's say you train a machine to sort through CVs in an office. You get everyone who works there to submit their own CV so the machine can learn a bit about what it should be looking for. Now, whichever way your current hiring process leans will be reflected in the machine's view of things.

This means that if you're in a mostly white, mostly male office, then your machine will inevitably look for those kinds of CVs too. That can be down to the way the CVs are structured, say if everyone in your office used the same service or template to write theirs. It can also be down to names, if the machine has only ever seen 100 variations of John Smith.
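To make that concrete, here's a minimal, purely illustrative sketch in Python using the scikit-learn library. Every CV, name, school and "hired" label below is made up, and no real hiring tool works exactly like this; the point is simply that a model trained only on a skewed office will score an outsider's perfectly good CV lower.

    # Purely illustrative: tiny made-up dataset, not any real hiring system.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    # The "training data" is just the current office's CVs and who got hired,
    # so the skew in past hiring becomes the pattern the model learns.
    cvs = [
        "John Smith, State University, sales experience",      # hired
        "John Doe, State University, marketing experience",    # hired
        "Jane Roe, City College, sales experience",            # not hired
        "Amara Okafor, City College, marketing experience",    # not hired
    ]
    hired = [1, 1, 0, 0]

    vec = CountVectorizer()
    model = LogisticRegression().fit(vec.fit_transform(cvs), hired)

    # A new candidate with the "wrong" name and school gets a low score,
    # even though the relevant experience is right there.
    new_cv = ["Priya Patel, City College, ten years of sales experience"]
    print(model.predict_proba(vec.transform(new_cv))[0, 1])

Notice that nothing in that sketch ever mentions race or gender, which is exactly the problem: the bias rides in on proxies like names and schools.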

Without checking that the AI is free of these issues, there's no way of making it fair. In short, it's probably going to spit out a bunch of people who look and sound like those already in the office, rather than those who are actually going to be good at the job. That's bad.
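The bill leaves the detail of what a bias audit involves to the auditors, but one common sanity check, and this is only an assumption about how such a check might look, with invented numbers, is to compare how often the tool selects candidates from different groups:

    # Hypothetical groups and numbers, purely for illustration.
    def selection_rates(outcomes):
        """outcomes maps group -> (number selected by the tool, number assessed)."""
        return {group: selected / assessed
                for group, (selected, assessed) in outcomes.items()}

    outcomes = {
        "group_a": (40, 100),  # 40% of assessed candidates selected
        "group_b": (15, 100),  # 15% selected
    }

    rates = selection_rates(outcomes)
    highest = max(rates.values())
    for group, rate in rates.items():
        # A group whose rate sits far below the best-performing group's
        # is a red flag worth investigating before the tool goes into use.
        print(group, f"selection rate {rate:.2f}, ratio vs highest {rate / highest:.2f}")

A gap like the one above doesn't prove the tool is biased on its own, but it's the kind of signal an audit exists to surface before anyone's job application is run through the system.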

