Here's The One Thing That Makes Artificial Intelligence So Creepy For Most People

Posted on: Dec 10, 2018

As many businesses prepare for the coming year, one of their key priorities is determining the best use cases and strategic implementation of artificial intelligence as it applies to the company's core competencies. This is a challenging area on a variety of levels. But as this work occurs, one of the most important narratives in the arena is coming further to light: the discussion around this emerging tech space as it directly intersects with ethics, culture, integrity and, quite frankly, the unconscious makeup of just who might be minding the AI store as it is being developed.

The problem, however, is that there are currently very few solid directives, templates, or litmus tests for these growing concerns about the "move fast and break things" mindset that many observers perceive in the enterprise AI sector. This leaves one to wonder whether the few wise thought leaders in the space will be truly heard and heeded, or simply drowned out by the promise of power and control through the expanding, amorphous specter that is AI.

Indeed, what does owning responsibility for AI mean, what does it look like, and where does that buck actually stop?

Such is the backdrop that made for a particularly intriguing stage conversation during the recent AI Summit in New York City. Billed as the world's first and largest conference and exhibition to examine the practical implications of AI for enterprise organizations, the event brought together executives from Google, NBC Universal, Microsoft, IBM and many more as they flocked to discuss, demo, deal-make and learn about all things AI. In its third year, the conference offered a number of C-suite speakers from major companies, but one of the most provocative and troubling sessions was a panel exchange entitled "Responsible AI: Setting the foundations for a fair, ethical, diverse AI proposition in your organization."

Issues around tracking data, public policy, the integrity of the actual work being performed, the reliability of that work, and its impact on and benefit to society were only some of the main points of discussion. The panel brought together a number of thought leaders on the troubling matter of responsibility and AI: which area of a company should govern AI ethics, what those ethics should be, and the massive tangle of man and machine. And, judging from the blank looks at the panel's conclusion, no consensus or industry-wide code of ethics is in sight.

To further complicate matters, the question of how to attract a diverse pool of employees to the AI space was also noted as a challenge. And even once such employees are on board, there are further concerns about their health and well-being, given that we still do not know exactly what impact tracking, drilling into and analyzing patterns identified by machines will have on the humans sifting through such data, particularly if that data is negative in some manner, hour after hour, day after day.