Debiasing Vector Embeddings: A First Step Towards Fair AI

Author: Sanket Rajaram

Originally published in Towards AI.

Imagine building a machine learning model that achieves excellent accuracy, only to discover that it subtly favors some groups over others.

You check your data, clean your features, and even tune your hyperparameters – but the bias remains. That's because the problem may run deeper – buried right in your embeddings.

In this article, we will walk through one of the simplest and most effective techniques for detecting and removing bias at the vector level. If you work with embeddings – word vectors, sentence vectors, tabular entity representations – this is your invitation to fairness-aware machine learning.

Photo by Steve Leisher on Unsplash

Embeddings are the numeric backbone of the ML pipeline. They capture semantics, similarity, and structure. But they also capture something more dangerous: bias.

Whether you use pre-trained embeddings or train your own on historical data, chances are your vectors have absorbed patterns that reflect stereotypes:

“Doctor” may sit closer to “he” than to “she.” “Leader” may drift towards “white” in racially skewed corpora.

These patterns are not just inconvenient – they are harmful. They quietly shape your model's worldview.
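To make this concrete, here is a minimal sketch of how such a skew can be measured with cosine similarity. It assumes `embeddings` is a dict mapping words to NumPy vectors (for example, loaded from pre-trained GloVe or word2vec files); the words and the helper names are purely illustrative.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def gender_skew(embeddings, word, male="he", female="she"):
    """Positive: `word` leans towards the male term; negative: towards the female term."""
    return cosine(embeddings[word], embeddings[male]) - cosine(embeddings[word], embeddings[female])

# Hypothetical usage: a positive skew for "doctor" would reflect
# exactly the kind of bias described above.
# print(gender_skew(embeddings, "doctor"))
```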

Although we will write the code and plot the vectors ourselves, there is solid science behind this. Here is the real methodology that powers it … Read the full blog for free on Medium.
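As a rough sketch of the general idea – removing the bias component by projecting it out of a vector, in the spirit of the hard-debiasing approach of Bolukbasi et al. – the core step might look like the following. This is not the article's exact method, just an illustrative implementation over the same `embeddings` dict assumed above; the word pairs and function names are assumptions.

```python
import numpy as np

def bias_direction(embeddings, pairs=(("he", "she"), ("man", "woman"))):
    """Estimate a bias direction as the averaged difference of definitional word pairs."""
    diffs = [embeddings[a] - embeddings[b] for a, b in pairs]
    direction = np.mean(diffs, axis=0)
    return direction / np.linalg.norm(direction)

def neutralize(vector, direction):
    """Remove the component of `vector` that lies along the bias direction."""
    return vector - np.dot(vector, direction) * direction

# Hypothetical usage: debias a profession word that should be gender-neutral.
# g = bias_direction(embeddings)
# embeddings["doctor"] = neutralize(embeddings["doctor"], g)
```

After neutralization, a word like "doctor" would have (approximately) zero projection onto the estimated gender direction, so the skew measured earlier should shrink towards zero.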

Published via Towards AI
