Pharma's AI Future
Posted on: Dec 12, 2018

Embrace AI: that was Peter Henstock's challenge to the audience last week at the AI World Conference & Expo in Boston. Henstock, the AI and Machine Learning lead at Pfizer, argues that pharma isn't new to AI. The industry has been doing bioinformatics, business intelligence, cheminformatics, text analytics, and QSAR—quantitative structure–activity relationship—models for years.

Now is the time for pharma to embrace AI and start realizing some of the benefits other industries are already enjoying. For example, Forbes suggests that some 35% of Amazon's revenue is generated by its recommendation engine. And on the ImageNet benchmark, AI systems now recognize images better than humans.

The possibilities are vast, but what are the roadblocks? Henstock listed several possible culprits: data volume, integration challenges, and the skills landscape. To chart a way forward, he shared his vision of the AI Hierarchy of Needs. Foundational is data at sufficient volume for AI; above that come security, collaboration with other groups, robust data science, and machine-learning capability; at the pinnacle sits an AI-driven company.

But even Henstock's foundation layer—data volume—isn't a solved problem. "The molecular space is huge, and our datasets are still very small," said Ed Addison, CEO of Cloud Pharmaceuticals. Cloud is using a mix of AI and prior knowledge for lead design because, for some questions, the datasets aren't yet big enough.

We tend to break problems down into sub-questions, Addison said, look at the data available, and then pick the right algorithm based on the depth of that data and the questions to be answered.

The data-first approach is a wise one, said Robert Bogucki, CTO at deepsense.ai. Asked during an executive roundtable at the event how development happens in an AI or analytics project, he advised making sure you have the data first. "If I'm just thinking about collecting the data now, maybe I should wait on this project until I have the data sources."

Joe Cheng, associate director of data and statistical sciences at AbbVie, argued for more freely available data to explore. Clinical trial datasets are a huge asset, he said, but the data are mostly "hidden away." He acknowledged data-sharing initiatives, but said most release data only in response to a proposal with a specific hypothesis. Users can't usually just explore a dataset, he lamented. He encouraged pharma to find a way to make the data available to play with and explore.

GlaxoSmithKline is first in line.

ATOM, the Accelerating Therapeutics for Opportunities in Medicine consortium, was started in 2015 by GSK, the US Department of Energy, and the National Cancer Institute. The consortium is investing in building up the data foundation for all of pharma in an effort to jumpstart the AI returns.

ATOM is focused on modulating the biology, said John Baldoni, senior vice president of in silico drug discovery at GSK and founder and governing board co-chair at ATOM. This includes molecular design, human-relevant assays, and ADME-tox, he said. GSK donated two million failed compounds to launch ATOM; the consortium now has 150 model-ready datasets.