Artificial intelligence (AI) and privacy: 3 key security practices Posted on: Feb 23, 2021

Before you implement an AI strategy, consider these techniques to help protect privacy and ensure compliance with security standards

If you are involved in next-generation digital product engineering, experimenting with artificial intelligence (AI) can help you imagine new business models, revenue streams, and experiences.

But you should be wary of wild headlines about cutting-edge AI breakthroughs. For every AlphaFold that solves a 50-year-old problem about protein folding, there are dozens of less glitzy but arguably more impactful business AI advances, many of which are helping to make AI more responsible and privacy-conscious.

As algorithms ingest increasingly large data sets in both training and deployment, data privacy as it relates to AI/machine learning (ML) will only grow in importance, especially as new regulations expand upon GDPR, CCPA, HIPAA, and the like. In fact, the FDA recently issued a new action plan for regulating AI in medical devices. Expanding regulatory frameworks are part of why data privacy is one of the most important issues of this decade.

As your organization plans its AI investments, the following three techniques can help you stay compliant and secure well into the future.

1. Federated learning

Federated learning is an increasingly important ML training technique that addresses one of ML's biggest data privacy issues, especially in fields with sensitive user data, such as healthcare. The conventional wisdom of the last decade was to unsilo data wherever possible. However, the data aggregation required to train and deploy ML algorithms has created serious privacy and security problems, especially when data is shared between organizations. Federated learning inverts that approach: the model is trained locally wherever the data lives, and only the resulting parameter updates, never the raw records, are shared and aggregated, as the sketch below illustrates.
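To make the idea concrete, here is a minimal sketch of federated averaging (FedAvg), the canonical federated learning algorithm, written in plain NumPy. The names (local_step, fed_avg, the three-client setup) are illustrative assumptions, not part of any production framework; real systems add client sampling, secure aggregation, and a communication layer.

```python
# Minimal sketch of federated averaging (FedAvg): each client trains on its
# own private data, and only model parameters -- never raw records -- are
# sent back for aggregation. Names are illustrative, not a real library API.
import numpy as np

def local_step(weights, X, y, lr=0.1):
    """One gradient step of linear regression on a client's private data."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def fed_avg(global_weights, client_datasets, rounds=200):
    """Each round: every client updates a copy of the global model locally,
    then the server averages the returned weights."""
    for _ in range(rounds):
        client_weights = [
            local_step(global_weights.copy(), X, y) for X, y in client_datasets
        ]
        global_weights = np.mean(client_weights, axis=0)
    return global_weights

# Example: three "hospitals", each holding its own private data silo.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

w = fed_avg(np.zeros(2), clients)
print(w)  # approaches [2.0, -1.0] without pooling any client's data
```

Because only averaged weights ever leave each silo, no single client's records are centralized; in practice this is usually combined with secure aggregation or differential privacy, since model updates themselves can leak information about the training data.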