This post is part of a continuing DBR on Data series on Executive Order 13800 and updates on its implementation a year after passage.
Strengthening federal information technology (IT) has been one of the priorities of the current administration, as outlined in the May 2017 Executive Order 13800. As summarized in our previous blog, the Director of the American Technology Council (ATC) was tasked with, among other things, coordinating the preparation of a report to the president on modernizing federal IT infrastructure. The draft report was made available for public comment in August 2017 and finalized in December 2017. The final report’s implementation clock started on January 1, 2018.
Artificial Intelligence (AI) can be employed in a health care setting for a variety of tasks, from managing electronic health records at a hospital, to market research at a benefits management organization, to optimizing manufacturing operations at a pharmaceutical company. The level of regulatory scrutiny of such systems depends on their intended use and associated risks.
In the U.S., a key regulator of medical devices that use AI is the Food and Drug Administration (FDA), particularly its Center for Devices and Radiological Health (CDRH). CDRH has long followed a risk-based approach in its regulatory policies, and has officially recognized ISO 14971, “Medical Devices – Application of Risk Management to Medical Devices.” That standard is now over 10 years old and is currently undergoing revisions – some of which are meant to address challenges posed by AI and other digital tools that are flooding the medical device arena.
Data – big or small – has tremendous potential for use (and misuse). For example, using mobile apps to keep track of one’s own physical activity or caloric intake may empower individuals to improve their health. Should other parties (e.g., the app’s developer, a physician, an employer, an insurance company, online friends) be able to access the same information, and if so, under what conditions? As another example, expressing one’s own feelings and preferences on a social media platform may strengthen bonds within a professional community or a family group, expedite academic collaborations, and/or improve an individual’s sense of belonging. But may those same messages – freely expressed in a public forum – be re-purposed for a study of mental health trends or for marketing strategies? If so, when, how, and by whom; and if not, why not? Questions like these touch on a host of ethical and legal issues that have only recently begun to be explored in depth, even as new norms of individual behavior, human interaction, and treatment of data continue to evolve.
On February 13, 2018, FDA approved a software application with clinical decision support capability – in this case, alerting providers to a potential stroke in patients. The system, “Viz.AI Contact,” was developed by Viz.ai, a US/Israeli company, and uses artificial intelligence – specifically, deep learning – to analyze medical images. Earlier, in January 2018, the system also received a CE Mark from the European authorities.
Stroke is caused by an interruption of the blood supply to the brain – for example, due to a blocked or ruptured blood vessel. It is among the leading causes of mortality and long-term disability in the U.S. and other countries. The Viz.AI Contact system analyzes brain computed tomography (CT) scans, identifies a suspected large vessel blockage, and sends a text notification to a health care specialist.