AI's biggest problem: Bias

Artificial Intelligence

Priya Wadhwa
This can pose a massive threat to cybersecurity.

It is logical if you think about it: AI is designed to mimic the human mind, yet is expected to be free of bias. The two goals pull in opposite directions.

If a machine learns to imitate humans, it will by default absorb the biases those humans hold, because it is in our nature as human beings to form positive and negative opinions about everything.

AI systems learn not from one person but from large data sets, yet those data sets carry the accumulated judgments of the people who produced them, so patterns of bias persist.

Aarti Borkar, a vice president at IBM Security, told CNBC that there are three areas in which bias can occur: the program, the data and the people who design those AI systems.
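To see how bias in the data propagates into the program, consider a minimal sketch (the data, region names and function names are hypothetical illustrations, not drawn from any IBM or real security system): a toy threat-scoring model trained on labels skewed by past analyst decisions simply reproduces that skew.

```python
from collections import Counter

# Hypothetical training data: (source_region, was_flagged_as_threat) pairs.
# Analysts historically flagged region "A" far more often, so the labels
# encode their bias rather than the true threat rate.
training_data = (
    [("A", True)] * 90 + [("A", False)] * 10
    + [("B", True)] * 10 + [("B", False)] * 90
)

def train(data):
    """Learn, per region, the fraction of past events labelled as threats."""
    totals, threats = Counter(), Counter()
    for region, was_threat in data:
        totals[region] += 1
        if was_threat:
            threats[region] += 1
    return {region: threats[region] / totals[region] for region in totals}

def predict(model, region, threshold=0.5):
    """Flag an event purely on its region's historical threat rate."""
    return model.get(region, 0.0) >= threshold

model = train(training_data)

# Identical events from different regions get different treatment:
# the model has inherited the labellers' skew.
print(predict(model, "A"))  # True  -> flagged
print(predict(model, "B"))  # False -> not flagged
```

The point of the sketch is that nothing in the code is malicious; the skew lives entirely in the labels it was given.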

Microsoft’s recent study showed that 75% of companies have adopted, or are looking to adopt, AI in cybersecurity. Given that growing adoption, this bias can become a real threat.

While it is very difficult to do away with bias in AI completely, the people responsible for developing and maintaining these programmes need to be extra vigilant, especially about the data being fed into the system. Outside auditors can often help.
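One simple check an auditor might run, sketched here with hypothetical group labels and an invented audit log, is comparing how often the system flags events from different groups; a ratio far from 1.0 is a prompt to investigate.

```python
def flag_rate(decisions, group):
    """Fraction of events from `group` that the system flagged."""
    flags = [flagged for g, flagged in decisions if g == group]
    return sum(flags) / len(flags)

# Hypothetical audit log of (group, was_flagged) decisions.
audit_log = (
    [("A", True)] * 60 + [("A", False)] * 40
    + [("B", True)] * 30 + [("B", False)] * 70
)

rate_a = flag_rate(audit_log, "A")  # 0.6
rate_b = flag_rate(audit_log, "B")  # 0.3
ratio = rate_b / rate_a             # 0.5: far from 1.0, worth investigating
```

A skewed ratio does not prove bias on its own, but it tells the auditor where to look.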