Artificial intelligence hates the poor and disenfranchised

Posted on: Sep 22, 2018

The biggest actual threat faced by humans, when it comes to AI, has nothing to do with robots. It’s biased algorithms. And, like almost everything bad, it disproportionately affects the poor and marginalized.

Machine learning algorithms, whether in the form of "AI" or simple shortcuts for sifting through data, are incapable of making rational decisions because they don't rationalize; they find patterns. That government agencies across the US put them in charge of decisions that profoundly impact human lives seems incomprehensibly unethical.

When an algorithm manages inventory for a grocery store, for example, machine learning helps humans do things that would otherwise be harder. The manager probably can't keep track of millions of items in their head; the algorithm can. But when an algorithm is used to take away someone's freedom or children, we've given it too much power.

Two years ago, the bias debate broke wide open when ProPublica published a damning article exposing apparent bias in COMPAS, a risk-assessment system used to inform sentencing and bail decisions for accused criminals. The report showed case after case in which the big fancy algorithm's recidivism predictions tracked defendants' skin tone: Black defendants were far more likely to be wrongly flagged as high risk than white defendants.

In an age where algorithms are "helping" government employees do their jobs, if you're not straight, not white, or not living above the poverty line, you're at greater risk of unfair bias.

That's not to say straight, white, rich people can't suffer at the hands of biased algorithms, but they're far less likely to lose their freedom, children, or livelihood. The point is that we're being told the algorithms are helping. In practice, they're making things worse.

Writer Elizabeth Rico believes unfair predictive-analytics software may have influenced a social services investigator's decision to take away her children. She wrote about her experience in an article describing how social services, whether intentionally or not, preys upon those who can't afford to avoid the algorithm's gaze. Her research revealed a system that equates being poor with being bad.

If you're accused of being an abusive or neglectful parent, and you've had the means to treat any addictions or mental health problems in a private facility, the algorithm may simply skip you. But if you use government assistance or have a state- or county-issued medical card, you're in the crosshairs.
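The mechanism is worth spelling out: screening systems score families on the records they can see, and people who rely on public services leave far more records in government databases. A minimal sketch, using entirely hypothetical data and a made-up `naive_risk_score` function (not the actual screening software), shows how a pattern-matcher can mistake record visibility for actual risk:

```python
# Two families with identical histories: one used public services (visible
# records), the other paid for private care (invisible to the system).
records = {
    "public_services_family": ["medicaid_visit", "food_assistance", "er_visit"],
    "private_care_family": [],  # same events happened, but in private systems
}

def naive_risk_score(history):
    """Counts visible records -- a crude stand-in for any model that
    learns patterns from data volume rather than underlying behavior."""
    return len(history)

scores = {family: naive_risk_score(history) for family, history in records.items()}
# The family that couldn't afford private care scores "riskier" purely
# because more of its life is recorded where the algorithm can see it.
```

Two identical families end up with very different scores, and no amount of model tuning fixes it, because the bias lives in which data gets collected, not in the math.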

And that's the problem in a nutshell. The best intentions of researchers and scientists are no match for capitalism and partisan politics. Take, for example, the Stanford researchers' algorithm purported to predict sexual orientation: it doesn't, but that won't stop people from thinking it does.