Can AI become biased?

How can we prevent human bias from leaking into AI?

These questions are in the same realm as the questions of whether AI can become dangerous, and how we can control it.


As a machine learning (ML) practitioner with experience building products that employ some form of ML, I believe the dangers of AI can and will materialize if it is not properly built, understood, tested, and regulated. We have to be careful with AI and make sure that what we are building benefits human society, rather than something that can take over it and leave us without the “big red button.”

Being human (and of Russian heritage), I also believe in the old Russian saying, “Trust, but verify.”

Now, back to the bias aspect. As we know, AI typically employs some form of deep network (a neural network, for example) that takes data as input and produces some output, much like a human, with a number of hidden layers in between where decisions are learned and made through some form of training. The question, however, is: what is in those hidden layers? How can we expose, understand, regulate, and test them?
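To make the “hidden layers” point concrete, here is a minimal sketch in plain NumPy of a tiny feed-forward network with made-up layer sizes and random (untrained) weights. The inputs and the output score are visible, but the hidden-layer activations are just arrays of numbers with no self-evident meaning, which is exactly what makes them hard to audit.

```python
import numpy as np

# Toy two-layer feed-forward network with hypothetical, randomly
# initialized weights -- purely illustrative, not a trained model.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 16))   # input (8 features) -> hidden layer (16 units)
W2 = rng.normal(size=(16, 1))   # hidden layer -> single decision score

def relu(x):
    return np.maximum(0, x)

def forward(x):
    hidden = relu(x @ W1)       # hidden-layer activations: 16 opaque numbers
    score = hidden @ W2         # final decision score
    return hidden, score

# One hypothetical example with 8 input features.
x = rng.normal(size=(8,))
hidden, score = forward(x)

print(hidden)   # nothing here says which input feature (or proxy) drove the result
print(score)
```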

Why is this important?

First, I assume we all agree at this point that AI is already around us today (search engines, shopping carts, entertainment services, etc.). Those algorithms are already making decisions, but what is inside the hidden layers of the neural net? What kind of training set, and what kind of reinforcement, was used to train the model? This matters because it can later affect many decisions, whether in business, in life, and so on. For example, one study of Amazon Prime showed that predominantly black ZIP-code areas were conspicuously denied same-day delivery. Amazon did not reveal how its model influences the outcome of Prime’s same-day delivery, or whether “race” was factored into it. Much of that behavior is derived from the data features that are now encoded in the hidden layers of the network.

Now, imagine a different algorithm trained on data features that incorporate demographic knowledge into the hidden layers of the network. What if that algorithm has to decide whom to spare when it powers a self-driving car, or who is likely to survive and needs attention in the case of a disease?
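A minimal sketch of how that kind of leakage can happen, using scikit-learn and entirely synthetic data: the protected attribute is never given to the model, but a correlated proxy feature (here an invented ZIP-code indicator) lets the model reproduce the historical bias anyway.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Entirely synthetic data, for illustration only.
rng = np.random.default_rng(42)
n = 5000

# Hypothetical protected attribute (never shown to the model).
group = rng.integers(0, 2, size=n)

# A proxy feature that correlates strongly with the protected attribute,
# e.g. an indicator derived from ZIP code.
zip_indicator = (group + rng.normal(scale=0.3, size=n) > 0.5).astype(float)

# An innocuous feature unrelated to the group.
income = rng.normal(loc=50, scale=10, size=n)

# Historical outcomes that were themselves biased against one group.
label = ((income > 45) & (group == 0)).astype(int)

X = np.column_stack([zip_indicator, income])   # note: 'group' is excluded
model = LogisticRegression().fit(X, label)

pred = model.predict(X)
print("positive-decision rate, group 0:", pred[group == 0].mean())
print("positive-decision rate, group 1:", pred[group == 1].mean())
# The gap shows the model absorbed the bias through the proxy feature,
# even though the protected attribute was never an input.
```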

That’s where accountability becomes very important, especially if we are going to start combining the AI world of algorithms with humans, as is already happening in augmented reality, or, going further, through direct integration with our brains via the brain-machine interfaces (BMIs) that Neuralink is working on.

Seeing this rapid innovation in the AI space, I strongly believe we need to invest now in the accountability of these algorithms via open models (the next step beyond open data), tooling, and even regulation. Otherwise, we will run into a situation similar to that of drones, where it is now too late to prohibit them but still not well understood how to regulate them.

Those are my thoughts, but as always, I welcome comments and thoughts in the same realm.
