Celestina Bianco, Director of Quality and Regulatory Affairs, Clinical Software, Werfen
A medical device must be effective and, above all, must not cause harm. That is why regulators require that medical devices be fully validated before they are put into use.
Recently, we have seen the evolution of software applications that learn whilst operating: the so-called adaptive devices, artificial intelligence (AI)-based machines.
These machines are highly promising from both a clinical and a business perspective, yet both groups of stakeholders also perceive them as a risk. Physicians' concerns relate to the quality of the data and the algorithms; manufacturers' concerns relate to obtaining approval from the authorities once the product is ready. An adaptive machine cannot be fully validated, and there is no alternative regulatory framework available to manufacturers.
On the other hand, regulators recognize the potential of these new technologies and their benefits for public health, and are working on different proposals at the national and international level. The FDA, the EU, and authorities in China, Japan, Korea and elsewhere are studying new ways to assure and certify device safety, taking into account the work of international organizations such as the IMDRF.
The approach is, as in many other fields, based on risk reduction. The risks posed by a device must be reduced as far as possible, and/or to a level where the benefits to patients outweigh the risks. This is traditionally assured by thorough validation and clinical evaluation. The challenge for AI devices is to ensure that safety is maintained while the device changes according to the experience and learning it acquires during use. The FDA is studying a model that lowers the scrutiny on individual products once the maturity and quality level of the manufacturer has been certified. In the following section, I describe the concept of the model, without entering into details that might change before it goes live.
The maturity of the manufacturer's processes, that is, its capability to build devices that are safe (and secure) by design, is certified initially and re-evaluated continuously through analysis of performance data and through inspections.
A certified manufacturer will still submit its products for approval, for the Intended Use and the capability that the product will have when first put into use. Because the product has been built from learning data, the manufacturer shall demonstrate that the quality of the data and of the data-acquisition mechanism is adequate. In addition to the specification of the product and the design process, the manufacturer shall file specifications of the expected changes to the algorithm, depending on the experience and data acquired, together with a plan to control and validate them.
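As an illustration only, the kind of filing described above can be sketched as a data structure. Every field name and value here is hypothetical, invented for this sketch, and is not taken from any regulator's template:

```python
from dataclasses import dataclass

@dataclass
class ChangeControlPlan:
    """Sketch of what a filed change specification might contain (all fields illustrative)."""
    device: str
    initial_intended_use: str
    expected_changes: list[str]      # types of algorithm change anticipated from learning
    data_quality_checks: list[str]   # how incoming learning data will be vetted
    validation_protocol: str         # how each learned change is re-validated

# A hypothetical filing for a hypothetical device
plan = ChangeControlPlan(
    device="Adaptive ECG classifier",
    initial_intended_use="Flag suspected atrial fibrillation for clinician review",
    expected_changes=["re-weighting of features", "threshold tuning from site data"],
    data_quality_checks=["sensor calibration check", "label audit on a data sample"],
    validation_protocol="Re-run the locked test set; compare against filed performance floors",
)
print(plan.device)
```

The point of such a structure is that the scope of future learning is declared up front, so that the authority approves not only the product as shipped but also the envelope within which it may change.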
Post-production metrics, transparently shared with the authority and with users, shall demonstrate that the change-control plan is adequate to maintain the safety and effectiveness of the adaptive device.
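The metric-based gate implied by such a plan can be sketched in a few lines of Python. This is a minimal illustration under assumed metric names and thresholds, not any authority's actual acceptance criteria:

```python
from dataclasses import dataclass

@dataclass
class MetricSpec:
    """One post-production metric with the floor filed in the change-control plan."""
    name: str
    baseline: float         # value demonstrated at initial approval
    min_acceptable: float   # below this, the learned update must be held back

def change_is_acceptable(current: dict[str, float], specs: list[MetricSpec]) -> bool:
    """Gate a learned update: every monitored metric must stay above its filed floor."""
    return all(current.get(s.name, float("-inf")) >= s.min_acceptable for s in specs)

# Hypothetical metrics for a hypothetical adaptive diagnostic aid
specs = [
    MetricSpec("sensitivity", baseline=0.95, min_acceptable=0.93),
    MetricSpec("specificity", baseline=0.90, min_acceptable=0.88),
]
print(change_is_acceptable({"sensitivity": 0.96, "specificity": 0.89}, specs))  # True
print(change_is_acceptable({"sensitivity": 0.92, "specificity": 0.91}, specs))  # False
```

The design choice worth noting is that the floors are fixed at filing time: the device may drift in how it computes its outputs, but the evidence that it remains safe is judged against criteria that do not drift with it.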
Updated specifications and plans shall be filed whenever the experience and learning of the device affect the intended use or the users, both of which typically expand over time.
Intrinsic to the nature of an adaptive device is its connectivity to multiple devices and sensors that constantly provide input. Besides the need for a robust mechanism to validate the data, there is an acute need for control of cyber vulnerabilities. Cybersecurity requirements will apply strictly to these devices.
Is this model compatible with EU regulation? The MDR requires that “devices that incorporate electronic programmable systems, including software, or software that are devices in themselves, shall be designed to ensure repeatability, reliability and performance with their intended use ...”. In the EU, compliance is demonstrated through the application of harmonized standards. The specific standard for the software life cycle is IEC 62304, which does not specifically exclude AI but requires interpretation.
To interpret the word “repeatability” not as repeatedly providing the same results on repeated use, but rather as repeatedly providing appropriate results for the inputs supplied, I would consider a mapping that a manufacturer could use as a reference:
Adopt IEC 62304 – Add Data Life Cycle & Control
• Customize with control of dynamic changes
• Use in conjunction with Security Standards
• Use in conjunction with IEC 82304-1 to cover the whole life cycle
• Define clear criteria to distinguish maintenance (learning within the same intended use) from a new development cycle (readiness for a new intended use or new users)
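The last criterion, separating maintenance from a new cycle, can be illustrated with a sketch: if a proposed update stays within the intended use and user groups filed at approval, it counts as maintenance under the change-control plan; otherwise it triggers a new submission. All names and values below are hypothetical:

```python
def classify_update(filed_use: set[str], filed_users: set[str],
                    proposed_use: set[str], proposed_users: set[str]) -> str:
    """Classify a learned change against the specifications filed at approval.

    Staying within the filed intended use and user groups -> routine maintenance;
    expanding either set -> a new regulatory cycle is needed.
    """
    within_use = proposed_use <= filed_use        # subset check on intended-use claims
    within_users = proposed_users <= filed_users  # subset check on user groups
    return "maintenance" if (within_use and within_users) else "new cycle"

# Hypothetical filing for a hypothetical triage device
filed_use = {"triage of chest X-rays for pneumonia"}
filed_users = {"radiologists"}

print(classify_update(filed_use, filed_users, filed_use, filed_users))            # maintenance
print(classify_update(filed_use, filed_users, filed_use, filed_users | {"GPs"}))  # new cycle
```

In practice the boundary would of course be judged clinically, not by set membership; the sketch only shows that the criterion can be made mechanical enough to audit.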
In my opinion, the model described here, or a similar one, could serve as a new way of ensuring adequate regulation and might allow us to answer YES to the initial question, “Can your medical device learn whilst working?” A manufacturer would then be allowed to put on the market devices that learn and adapt to data and situations, providing outputs that are not validated according to the traditional regulatory approach and definition. Open questions remain:
• Many AI companies are small and composed of engineers whose mindset and experience are far removed from regulation and its interpretation. Do they have the skills, and can they afford, to define the processes expected of them?
• Software is generally international. Can small companies comply with differing requirements from different regulators? Would IMDRF harmonization be a solution for them?
• How will Clinical Evaluation be performed and repeated? What value will it provide?
• Who will define and control the Intended Use and the actual use, and what are the legal and ethical responsibilities?