AI Ethics and The New Digital Divide

Posted on: Jun 23, 2018

General sentiment towards AI has been trending steadily more negative over the last couple of years. More and more pieces in the regular news cycle depict AI companies as bad actors. And, more concerning still, algorithms themselves are starting to be perceived as evil invisible hands that shape our lives in negative ways.

Despite the hype that followed the renaissance of AI with the advent of Deep Learning in the early 2010s, it is not the singularity that people fear nowadays, but the effects AI has on their day-to-day lives as well as on big-picture events like Brexit or the 2016 US election.

This negative public perception is already being shaped by politicians into laws (in theory) designed to protect the general public against these negative effects on society. Since mid-2018, the industry has had to deal with the draconian provisions of the GDPR that affect AI specifically.

On the positive side, AI practitioners are starting to realize that we need to take ethical positions on the projects we get involved with. Otherwise, we risk public perception of our industry skewing further and further in the negative direction.

The entire AI field needs to engage in serious conversations around the ethics of the products we create, or we will face the consequences. Another AI Winter is entirely possible, but this time it would be triggered not by our over-promises but by society's perception of us and our creations.

The digital divide is also, in my opinion, exacerbating this problem. I am not referring to the divide between those who have access to computing devices and information and those who do not, but to the divide between those who can create and understand AI applications and those who cannot.

In the past five years the bar to accessing powerful AI architectures has been lowered dramatically by projects like Keras. But what good is access to these tools when only a few corporations can actually feed them the large quantities of data on which these architectures really shine?
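To illustrate how low that bar has become, here is a minimal sketch of a working neural-network classifier in Keras. It assumes TensorFlow 2.x with its bundled Keras; the layer sizes and input shape are illustrative choices, not taken from the article.

```python
# A few lines suffice to define and compile a small image classifier --
# the architecture itself is no longer the scarce resource; the data is.
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Input(shape=(28, 28)),      # e.g. 28x28 grayscale images
    keras.layers.Flatten(),                   # flatten to a 784-vector
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),  # 10 output classes
])
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
model.summary()
```

From here, a single `model.fit(x_train, y_train)` call trains the network; the point of the article stands, though, in that without a large labeled `x_train`, the ease of defining the model buys you little.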

Corporations like Google and Facebook get essentially all the data. Smaller companies have to make do with the small data sets we can create in house and the few commercially usable data sets available publicly, or pay thousands of dollars in the very few data markets that exist.

I know that the following opinion will not be a popular one, but I firmly believe that academia is doing society a disservice by not fully open-sourcing the models and data sets it creates. Many publicly funded projects release data sets intended for research purposes only, despite the fact that their funding came from regular taxpayers as well as from taxes paid by businesses.