AI Weekly: Defense Department proposes new guidelines for developing AI technologies

Posted on: Nov 20, 2021

This week, the Defense Innovation Unit (DIU), the division of the U.S. Department of Defense (DoD) that awards emerging technology prototype contracts, published a first draft of a whitepaper outlining “responsible … guidelines” that establish processes intended to “avoid unintended consequences” in AI systems. The paper, which includes worksheets for system planning, development, and deployment, is based on DoD ethics principles adopted by the Secretary of Defense and was written in collaboration with researchers at Carnegie Mellon University’s Software Engineering Institute, according to the DIU.

“Unlike most ethics guidelines, [the guidelines] are highly prescriptive and rooted in action,” a DIU spokesperson told VentureBeat via email. “Given DIU’s relationship with private sector companies, the ethics will help shape the behavior of private companies and trickle down the thinking.”

Launched in March 2020, the DIU’s effort comes as corporate defense contracts, particularly those involving AI technologies, have come under increased scrutiny. When news emerged in 2018 that Google had contributed to Project Maven, a military AI project to develop surveillance systems, thousands of employees at the company protested.

For some AI and data analytics companies, like Oculus cofounder Palmer Luckey’s Anduril and Peter Thiel’s Palantir, military contracts have become a top source of revenue. In October, Palantir won most of an $823 million contract to provide data and analytics software to the U.S. Army. And in July, Anduril said that it had received a contract worth up to $99 million to supply the U.S. military with drones aimed at countering hostile or unauthorized drones.

Machine learning, computer vision, and facial recognition vendors including TrueFace, Clearview AI, TwoSense, and AI.Reverie also have contracts with various U.S. Army branches. And in the case of Maven, Microsoft and Amazon, among others, have taken Google’s place.

AI development guidance

The DIU guidelines recommend that companies start by defining tasks, success metrics, and baselines “appropriately,” identifying stakeholders, and conducting harms modeling. They also require that developers address the effects of flawed data, establish plans for system auditing, and “confirm that new data doesn’t degrade system performance,” primarily through “harms assessment[s]” and quality control steps designed to mitigate negative impacts.
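In practice, the “confirm that new data doesn’t degrade system performance” step is often implemented as a regression gate that compares a retrained model’s metric against an established baseline before deployment. The sketch below is purely illustrative and is not part of the DIU guidelines; the function name, metric, and tolerance are assumptions:

```python
def passes_quality_gate(baseline_score: float, new_score: float,
                        tolerance: float = 0.01) -> bool:
    """Return True if the retrained model's evaluation metric has not
    dropped more than `tolerance` below the established baseline.

    A failed gate would trigger an audit rather than a deployment,
    mirroring the guidelines' emphasis on system auditing plans.
    """
    return new_score >= baseline_score - tolerance


# Example: with a baseline accuracy of 0.92 and a 0.01 tolerance,
# a retrained model scoring 0.915 clears the gate, while one
# scoring 0.90 does not.
print(passes_quality_gate(0.92, 0.915))  # True
print(passes_quality_gate(0.92, 0.90))   # False
```

A real pipeline would evaluate both models on the same held-out dataset and log the comparison for auditors, but the gating logic reduces to a threshold check like this one.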