Hacking and fraud escalate concerns about AI in healthcare
Posted on: Jul 14, 2018

While many in healthcare tout the benefits of AI in clinical settings, artificial intelligence isn't immune to nefarious misuses, such as hacking and clinical fraud.

Artificial intelligence continues to make headlines for its applications in automation, speech recognition, image processing and risk detection. Despite these advances, some researchers have concerns about AI and warn that the malicious use of the technology may have serious implications for healthcare.

Two of the more popular applications of AI in healthcare are data analysis and data mining, where clinical information is processed and the results provide clinical feedback to healthcare professionals. Early results around image analysis to detect cancer or advanced algorithms that match patients to appropriate treatments are examples of AI affecting patients in a positive way. Predicted uses include AI for surgeries, bot-based interactions with patients and advanced data analysis.

However, some in IT worry that AI can potentially do more harm than good. The dark side of AI isn't limited to the unease some feel toward the technology -- for example, its potential to replace some human workers. Concerns also surround hackers and cybercriminals building their own AI systems or manipulating existing ones. Then there are fears that the technology itself may simply fail.

People make mistakes, but so do algorithms. AI in healthcare carries a certain amount of risk related to its bugs and its potential to make errors. These types of concerns have been validated in the past. A 2015 study, presented at the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, confirmed that AI applications are not error-free. One model, for example, was trained to predict which patients would develop complications from pneumonia. It worked well in most cases, but it also made a critical mistake: it ranked high-risk asthma patients as low risk -- in the historical training data those patients fared well precisely because they received aggressive care -- meaning they could have been sent home, putting them at even greater risk.
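The failure described above can be sketched in a few lines. This is a minimal, hypothetical illustration (the data and function names are invented, not from the study): in the training data, asthma patients rarely died of pneumonia because they were admitted to intensive care, so a naive model that fits only the observed outcomes learns the opposite of the true risk.

```python
# Hypothetical training records: (has_asthma, died).
# Asthma patients rarely died in this data because they received
# aggressive care -- not because asthma makes pneumonia safer.
training_data = (
    [(False, False)] * 80 + [(False, True)] * 20 +  # non-asthma: 20% mortality
    [(True, False)] * 19 + [(True, True)] * 1       # asthma: 5% observed mortality
)

def predicted_risk(has_asthma, data):
    """Naive risk estimate: observed death rate within each patient group."""
    outcomes = [died for asthma, died in data if asthma == has_asthma]
    return sum(outcomes) / len(outcomes)

risk_no_asthma = predicted_risk(False, training_data)  # 0.20
risk_asthma = predicted_risk(True, training_data)      # 0.05

# The model ranks asthma patients as *lower* risk -- the opposite of
# clinical reality -- so a triage rule built on it would send them home.
print(risk_no_asthma, risk_asthma)
```

The point of the sketch is that the error is not a coding bug: the model faithfully reproduces a confounded dataset, which is why such systems need clinical review before deployment.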

AI can attack hospitals and AI systems. Concerns about AI in healthcare also center on cyberattacks in which AI is used to target hospitals and their systems. AI is capable of performing the complex tasks needed to launch a hacking attempt against a network or system, and hospitals can quickly be overwhelmed with more attacks than they are equipped to handle. These attacks may include AI probing for vulnerable systems or individuals, as well as automated spear phishing campaigns designed to gain access to a system or extract patient information for later use.

Vulnerable smart devices can be manipulated. At the DEF CON hacking conference in 2016, a group demonstrated how a smart thermostat could be hacked, which sent warnings throughout hospitals and other health systems that rely on similar smart devices to control their environments. The increasing use of smart systems carries significant risks, potentially leaving hospitals, patients and staff physically vulnerable if these AI-based devices are taken over by hackers.