
Responsible AI Moves Into Focus at Microsoft's Data Science and Law Forum

Posted on: Mar 12, 2020

Last week, Microsoft gathered experts from academia, civil society, policy making and more to discuss one of the most important topics in tech at the moment: responsible AI (RAI).

Microsoft’s Data Science and Law Forum in Brussels was the setting for the discussion, which focused on rules for effective governance of AI.

Whilst AI governance and regulation may not be everyone’s cup of tea, the event covered an array of subjects where this has become a red-hot issue, such as the militarization of AI, liability rules for AI systems, facial recognition technology and the future of quantum computing. The event also gave Microsoft an opportunity to showcase its strategy in this important area.

A few highlights are worth sharing, so let’s dig a bit deeper into what Microsoft is doing in RAI, why it’s important and what it means for the market moving forward.

Responsible AI Is Now a Priority

Responsible AI is a combination of principles, practices and tools that enable businesses to deploy AI technologies in their organizations in an ethical, transparent, secure and accountable manner.

The subject has been getting a lot of attention for several reasons.

First, we are seeing more high-profile examples of biased algorithms, autonomous vehicle accidents and privacy-violating facial recognition systems, which are raising public awareness of the dangerous unintended consequences of AI.

Second, enterprises are now beginning to shift early AI projects out of the labs and are considering the real-world risks and responsibilities they will have when deploying AI in their operational processes.

And third, as decision makers consider introducing data and AI solutions in critical areas such as finance, security, transportation and healthcare, concerns are mounting over their ethical use, the potential for bias in data and a lack of interpretability in the technology, as well as the prospect of malicious activity such as adversarial attacks.

For these reasons, the governance of machine learning (ML) models has become a top investment priority for enterprises. According to a 2019 survey of senior IT decision-makers by my firm, CCS Insight, the transparency of how systems work and are trained, and the ability of AI systems to ensure data security and privacy, are now the two most important requirements when investing in AI and machine learning technology, each cited by almost 50% of respondents.