CADTH Horizon Scan

An Overview of Continuous Learning Artificial Intelligence-Enabled Medical Devices

Emerging Health Technologies

Authors: Andrea Smith, Melissa Severn

Abbreviations

AI

artificial intelligence

AI/ML

artificial intelligence/machine learning

ATPs

advanced therapeutic products

GMLP

Good Machine Learning Practice

HTA

health technology assessment

ICMRA

International Coalition of Medicines Regulatory Authorities

IMDRF

International Medical Device Regulators Forum

ML

machine learning

MLMDs

machine learning-enabled medical devices

SaMD

Software as a Medical Device

SiMD

Software in a Medical Device

Key Messages

Purpose

This report provides an overview of continuous learning artificial intelligence (AI) as a medical device or as part of a medical device, including what these technologies are, how they might be used, their benefits, and the potential challenges and opportunities they pose for regulation, assessment, and evaluation. This report does not provide a systematic review or critical appraisal of the clinical or economic evidence, of patient and stakeholder perspectives, or of the ethical, legal, and social considerations of these technologies. As such, the information provided is not an exhaustive or comprehensive account of the considerations, issues, or implications posed by the use or adoption of continuous learning AI.

Methods

A limited literature search was conducted by an information specialist on key resources including MEDLINE, Embase, the Cochrane Database of Systematic Reviews, the international HTA database, the websites of Canadian and major international health technology agencies, as well as a focused internet search. The search strategy comprised both controlled vocabulary, such as the National Library of Medicine’s MeSH (Medical Subject Headings), and keywords. The main search concepts were continuous learning and artificial intelligence. No filters were applied to limit the retrieval by study type. Where possible, retrieval was limited to the human population. The search was also limited to English language documents published between January 1, 2016 and October 25, 2021.

Additionally, the report author searched the internet (Google and Google Scholar) with the search concepts of continuous learning and artificial intelligence to locate additional documents related to regulation and HTA. Information on the ethical, legal, and social dimensions of regulating and assessing continuous learning AI was gathered from a limited electronic search for continuous learning AI and related terms (i.e., unlocked AI, adaptive AI) for health care. Information was synthesized using methods of content analysis. One author screened the literature search results and reviewed the full text of all potentially relevant studies. Studies and grey literature were considered for inclusion if the intervention was explicitly described as continuous learning artificial intelligence or machine learning and was relevant to its regulation, assessment, or evaluation.

Peer Review

A draft version of this report was reviewed by 1 content expert.

Background

AI is a broad term used to describe technologies capable of behaviour that resembles human-like intellectual abilities, such as autonomous or semi-autonomous problem-solving, reasoning, recognition, and language use.1 Recent advances in computing power, connectivity, and the rise of big data have led to the rapid development and diffusion of AI systems in various sectors. AI now touches many aspects of the lives of people living in Canada, whether they are searching the internet using Google, receiving recommendations from streaming services, or paying bills using chatbots.2 To date, most real-life applications of AI tend to complete a specific predetermined task (e.g., chatbots, search algorithms) as opposed to the types that capture the imagination (e.g., human-like robots). AI can exist in the form of software (e.g., algorithms to analyze images) or be embedded in hardware devices (e.g., robots for surgery).

A wide range of types of and uses for AI are being developed for or already implemented in health care. These include AI used in the design and planning of health care services; clinical care including diagnosis and imaging, disease prevention and population health; and the development of new diagnostic tools and therapeutics.3,4 AI is expected to contribute to the goal of improving the overall health of populations – whether through improved self-management, the delivery of more efficient care, improved diagnosis or clinical pathways, or the development of new drugs.3,4 At the same time, there is a growing awareness of the potential harms and challenges of AI in health, including the ability to perpetuate systemic bias, challenges in assessing safety, concerns around data and privacy, issues with transparency and accountability, and challenges with governance and implementation into clinical practice.5

One of the benefits of AI is its ability to adapt, learning from new data to solve problems more accurately. This report aims to provide information on emerging approaches to regulate, assess, and evaluate a form of AI that continuously learns from or adapts to new input data (i.e., continuous learning AI). While no continuous learning AI for clinical purposes is currently authorized for the Canadian market, regulatory and health technology assessment processes and methods are being developed in anticipation of its arrival.

The Technologies

AI does not refer to a single technology, but rather is an umbrella term that covers various technologies with algorithmic components that use approaches such as machine learning, deep learning, neural networks, and natural language processing to mimic human intelligence.1 Within health care, the most common types of AI currently being advanced are those based on machine learning.6 Machine learning (ML) is a subfield of AI that trains algorithms to complete a task or solve a problem by learning without being explicitly programmed.7 Using complex statistical methods, these algorithms recognize patterns in data, learn from these patterns, and subsequently make predictions based on these data. Deep learning, a subfield of ML that uses multiple layers of data processing, is also being used to develop continuous learning AI; it draws on vast amounts of data to make predictions or decisions.7

There are a number of ways an AI application can be trained to find patterns and make predictions. Supervised and unsupervised learning are the 2 main ways of training, and the difference between them is whether the training data are labelled (i.e., algorithms use defined input and output data to improve the accuracy of predictions or classifications made) or unlabelled (i.e., algorithms discover patterns in input data on their own). Supervised learning uses labelled data, which typically requires human effort to assign the label (e.g., a radiologist reading a radiograph and making a diagnosis) before the data are fed to the AI application for learning. As a result, supervised learning is often described as time consuming and resource intensive.8 With unsupervised learning, the AI application learns from the statistical properties of the input data, grouping the data into clusters with similar statistical properties.7 Training can be a mix of types (i.e., semi-supervised), and it is worth noting that the terms supervised and unsupervised do not refer to the presence or absence of human oversight.7
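
For readers who want a concrete picture, the following minimal sketch (Python with scikit-learn; not drawn from the report) trains a supervised classifier on a labelled toy dataset and, separately, clusters the same data without its labels.

```python
# A minimal, illustrative sketch contrasting supervised and unsupervised
# learning with scikit-learn on a built-in toy dataset.
from sklearn.cluster import KMeans
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Labelled data: features plus expert-assigned diagnoses (the labels).
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Supervised learning: the model is trained on labelled examples.
clf = LogisticRegression(max_iter=5000)
clf.fit(X_train, y_train)
print("Supervised test accuracy:", clf.score(X_test, y_test))

# Unsupervised learning: the labels are never shown to the model;
# it groups cases into clusters by statistical similarity alone.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
```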

Continuous learning AI harnesses the ability of algorithms to learn and improve the accuracy of their predictions or classifications after exposure to new data. Continuous learning means that each new data point an AI application receives can be used to update the model (e.g., its parameters or weights) and improve the prediction or classification.7 AI that can independently change or learn is sometimes called adaptive AI, continual or continuous learning AI, or unlocked AI. AI that is unable to continuously learn is described as being fixed or locked.7 Other types of learning exist: for example, batch learning involves discrete updates based on defined sets of data at distinct time points.7 In this report, the term continuous learning AI is used to refer to AI systems that are developed to change based on new data they receive continuously over time.
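
As a rough illustration of this kind of per-data-point updating, the sketch below uses scikit-learn's partial_fit to adjust a model's weights one case at a time; the data stream and labels are simulated and purely hypothetical.

```python
# A minimal sketch of continuous (online) learning: model weights are
# updated with each new labelled data point. Data here are simulated.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(random_state=0)
classes = np.array([0, 1])  # all possible labels must be declared up front

rng = np.random.default_rng(0)
for _ in range(1000):  # simulate a stream of incoming cases
    x = rng.normal(size=(1, 5))           # features for one new case
    y = np.array([int(x[0, 0] > 0)])      # its (simulated) true label
    model.partial_fit(x, y, classes=classes)  # incremental weight update

# A locked model, by contrast, would call fit() once on a fixed training
# set and never update after deployment.
```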

The ability to adapt offers a number of potential benefits, a key one being that the AI can improve its predictions and classifications over time, but it also introduces a number of risks and challenges. Such risks include a lack of generalizability to the intended population, where the AI does not perform as expected when learning from real-world data, and catastrophic forgetting, where new data interfere with what the model has already learned and can overwrite its previous knowledge.8 These potentially unpredictable issues with the real-world performance of continuous learning AI mean that they need regular and ongoing evaluation to ensure they are performing as expected, as well as mechanisms to address medical errors.9 Additionally, continuous learning AI applications are by their very nature changing, which raises questions of when and how to evaluate their performance in terms of effectiveness, safety, and unintended consequences. Moreover, as these algorithms change, it is possible that neither the AI nor its developers can explain how the AI is making decisions.9 This exacerbates challenges relating to transparency, explainability, and predictability, which are common with AI. Because AI models are complex, and because continuous learning AI constantly changes, these models sometimes lack algorithmic transparency, meaning that humans cannot easily understand or explain how the model made a prediction or decision. Explainability here refers to the ability of a human to understand how a model made a prediction or came to a particular outcome.10

At the same time, fixed or locked AI also poses risks. Locked AI can become dated: its training data may no longer be representative of real-world data, and it can experience model drift, where performance degrades over time. Locked AI will also need periodic evaluation and may require distributing updates, or withdrawing devices, across the health care system.11
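
To make the monitoring idea concrete, the following is a hypothetical sketch (not a regulatory requirement or an established tool) of a post-deployment check that flags possible model drift when rolling accuracy on verified cases falls below a pre-specified baseline; it applies equally to locked and continuous learning models.

```python
# A hypothetical post-deployment monitor: flag possible model drift when
# rolling accuracy on verified cases drops below a pre-specified baseline.
from collections import deque


class DriftMonitor:
    def __init__(self, baseline_accuracy: float, tolerance: float = 0.05,
                 window: int = 500):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, prediction, verified_label) -> bool:
        """Record one verified case; return True if drift is suspected."""
        self.outcomes.append(int(prediction == verified_label))
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough cases yet to judge
        rolling = sum(self.outcomes) / len(self.outcomes)
        return rolling < self.baseline - self.tolerance


# Example: the monitor raises a flag only after a sustained drop in accuracy.
monitor = DriftMonitor(baseline_accuracy=0.92)
```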

A final consideration is whether the continuous learning AI is a standalone software application that is intended to perform a medical task without being part of a hardware medical device (i.e., software as a medical device or SaMD),12 or part of a hardware medical device (i.e., software in a medical device or SiMD).

Clinical Applications of Continuous Learning AI

At the time of writing, Canada has not yet granted regulatory approval to a continuous learning AI product as a medical device or as part of a medical device. However, as regulators are developing and implementing processes for continuous learning AI, it is likely such products will be introduced to the market once these processes are in place. A CADTH Horizon Scan (2018) described the breadth of clinical areas in which AI was in development.13 As the majority of locked or continuous learning AI in use or approved for the market are in the areas of diagnosis and imaging,3 these areas will likely be the first to see approved continuous learning AI. Some have proposed that diagnostic testing would be an area where continuous learning models could be implemented safely.8 For example, the AI could make a prediction, and then clinicians could verify the diagnosis, labelling the data and providing it back to the AI to self-adjust.8 However, as described previously, the use of labelled data is time consuming and resource intensive.8 The nature of potential changes to regulatory processes, and any issues associated with data access, may create incentives and barriers for the development of particular forms of continuous learning AI for the market.
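
Under the assumption of an incrementally updatable model (for example, one exposing scikit-learn's partial_fit) and a hypothetical get_clinician_label callback, that verify-and-feed-back loop might be sketched as follows.

```python
# A hypothetical clinician-in-the-loop pattern: the AI predicts, a clinician
# verifies (labelling the case), and the verified label is fed back so the
# model can self-adjust. Not an authorized device workflow.
def clinician_in_the_loop(model, case_features, get_clinician_label, classes):
    prediction = model.predict(case_features)      # AI proposes a diagnosis
    verified = get_clinician_label(case_features)  # clinician confirms or corrects
    model.partial_fit(case_features, verified, classes=classes)  # self-adjust
    return prediction, verified
```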

Regulatory Status

Numerous regulatory challenges arise with continuous learning AI, stemming from the changes made to and by the algorithm and the potential for alterations in post-deployment performance. Jurisdictions are determining how best to facilitate market access to continuous learning AI-enabled medical devices in ways that maximize patient benefit while protecting against the risks of harm. This section reviews current activities within Canada and other countries that aim to develop a regulatory response to mitigate risks associated with continuous learning AI.

A recent (2021) Horizon Scan of members’ activities conducted by the International Coalition of Medicines Regulatory Authorities (ICMRA) did not describe any regulatory activities specific to continuous learning AI.14 In a 2019 survey of international members of the Global Digital Health Partnership, 44% reported having a national or regional policy framework for the development and/or deployment of AI, and 75% reported that regulatory framework and process development for AI in general was being undertaken.15 Eighty-one percent reported that continuous learning AI was not being regulated for clinical use.15

Canada

In April 2022, Health Canada announced its intention to formally add adaptive machine learning-enabled medical devices (MLMDs) to Schedule G of the Food and Drugs Act, thereby allowing these devices to be regulated as advanced therapeutic products (ATPs).16 ATPs are drugs or devices that current regulations were not designed to handle because they are unique, complex, and distinct. The ATP framework is based on provisions added to the Food and Drugs Act in June 2019 and allows Health Canada to authorize ATPs using a flexible, risk-based approach. The approach allows Health Canada to tailor regulatory requirements to a specific product type, addressing the products’ unique characteristics while maintaining standards for patient safety, product quality, and efficacy.17 According to Health Canada, an ATP guidance document for adaptive MLMDs will be published for stakeholder comment in spring 2022, and stakeholders will be engaged as the department develops the tailored requirements.18

In October 2021, Health Canada, the US FDA, and the UK’s Medicines and Healthcare products Regulatory Agency (MHRA) jointly released 10 principles for Good Machine Learning Practice (GMLP) for Medical Device Development.19 The guiding principles “identify areas where the International Medical Device Regulators Forum (IMDRF), international standards organizations, and other collaborative bodies could work to advance GMLP.”19 These Guiding Principles for GMLP are:

1. Multi-disciplinary expertise is leveraged throughout the total product life cycle.
2. Good software engineering and security practices are implemented.
3. Clinical study participants and data sets are representative of the intended patient population.
4. Training data sets are independent of test sets.
5. Selected reference datasets are based on best available methods.
6. Model design is tailored to the available data and reflects the intended use of the device.
7. Focus is placed on the performance of the human-AI team.
8. Testing demonstrates device performance during clinically relevant conditions.
9. Users are provided clear, essential information.
10. Deployed models are monitored for performance, and re-training risks are managed.19

Designed to address the risks and challenges of AI, the principles can be interpreted as applying to AI in general as well as to continuous learning AI. Principle 10 specifically addresses the need for ongoing monitoring and evaluation due to the potential for negative changes in real-world performance. It states: “[a]dditionally, when models are periodically or continually trained after deployment, there are appropriate controls in place to manage risks of overfitting, unintended bias, or degradation of the model (for example, dataset drift) that may impact the safety and performance of the model as it is used by the Human-AI team.”19
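
One possible control of the kind Principle 10 describes, offered here purely as an illustration (the guiding principles do not prescribe a specific method), is a statistical check that recent post-deployment data still resemble the training data, such as a two-sample Kolmogorov-Smirnov test applied per feature.

```python
# An illustrative dataset-drift check (not prescribed by the GMLP principles):
# compare a feature's distribution in recent post-deployment data against the
# training data using a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp


def dataset_drift_suspected(train_feature: np.ndarray,
                            recent_feature: np.ndarray,
                            alpha: float = 0.01) -> bool:
    """Flag drift when the two samples are unlikely to share a distribution."""
    statistic, p_value = ks_2samp(train_feature, recent_feature)
    return p_value < alpha
```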

United States

In addition to co-releasing the Guiding Principles for GMLP,20 the US has proposed and is revising a framework for regulating continuous learning AI and ML. In April 2019, the US FDA published a discussion paper, “Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) - Discussion Paper and Request for Feedback,” that proposed a potential approach to regulate AI/ML-SaMD across the total product life cycle, including pre-market review and post-market monitoring.21 A central part of the FDA’s proposed approach that is relevant for continuous learning AI is the Predetermined Change Control Plan. This plan would include the types of anticipated modifications (referred to as the “SaMD Pre-Specifications”) and the associated methodology used to implement those changes in a controlled manner that manages risks to patients (an “Algorithm Change Protocol”). In this approach, the FDA expressed an expectation for transparency and real-world performance monitoring by manufacturers that could enable the FDA and manufacturers to evaluate and monitor a software product from its pre-market development through its post-market performance.22

Based on feedback received, the FDA released the Artificial Intelligence/Machine Learning (AI/ML)-Based SaMD Action Plan in January 2021.22 This report outlines the activities the FDA is undertaking to continue to move the AI/ML-SaMD regulatory framework forward, including further developing the Predetermined Change Control Plan and providing guidance on the monitoring of real-world performance for AI/ML.20

Further to these activities, the US FDA has been piloting a Software Precertification (Pre-Cert) Pilot Program as part of its Digital Health Innovation Action Plan.23 This pilot explores the potential to pre-certify manufacturers “who have demonstrated a robust culture of quality and organizational excellence, and who are committed to monitoring real-world performance of their products once they reach the US market”23 in an effort to build a streamlined and efficient regulatory approach to software-based medical devices.23 While not specific to continuous learning AI, it is an approach that seeks to establish trust in manufacturers and their products while maintaining continued oversight through post-market evaluation.

Japan

Japan’s Pharmaceuticals and Medical Devices Agency (PMDA) has undertaken a number of activities related to the regulation of AI.24 These include a Post-Approval Change Management Protocol (PACMP) for AI-enabled medical devices. This process is designed to “enable the continuous improvement of performance of SaMD using AI”25 and has the manufacturer submit a proposal for an improvement process that includes post-market evaluation.25

Other International Jurisdictions

Many international jurisdictions are developing AI strategies and legislation that may address the regulation of AI. The European Union, for example, has proposed new AI regulation26 that takes a life cycle approach and would include a quality management system requiring manufacturers to specify their strategy for managing modifications, testing, and validation. Further, similar to the FDA’s proposed approach, it introduces the need for post-market monitoring.27

In the UK, new regulation for AI as a medical device is also in development. Along with co-releasing the Guiding Principles for GMLP in 2021,28 the MHRA has outlined a regulatory development plan that also takes a life cycle approach, including pre-market approvals and post-market evaluation.29

International Harmonization

Internationally, there has been movement toward harmonization of regulatory processes for medical devices across countries. Harmonization is likely to influence any new policy and regulatory approaches for continuous learning AI, as well as amendments to existing ones.15 As a foundation for international alignment, work is being undertaken to develop consistent terminology related to AI. For instance, the IMDRF, of which Health Canada is a member, is developing definitions of key terms and concepts related to AI.7 Similarly, the International Organization for Standardization (ISO) is developing a series of standards for AI.30

Health Technology Assessment and Evaluation

Within the field of HTA, there is no consensus on, or standard approach to, the assessment of AI in general or of continuous learning AI specifically. Some HTA agencies and scholars have identified the need to adapt HTA processes and methods for the assessment of AI, particularly continuous learning AI.11,31-34

AI has been proposed as exceptional, requiring a unique approach to HTA because of features of the technology and its potential impact on health care systems and society;11 these features are particularly relevant for continuous learning AI.11,34,35 The technological features of continuous learning AI challenge current approaches to HTA: agencies will need capacity and methods to address the potential absence of algorithmic transparency in an algorithm that is continually learning and changing, to evaluate data considerations including data privacy and security, and to assess the appropriateness of the data used to train and update the algorithm.11 Real-world evidence is important to the ongoing evaluation of continuous learning AI, which changes post-deployment using real-world data.11 Stakeholder dialogues are described as a critical component, and there may be potential for HTA agencies to act as conveners of stakeholder dialogue throughout the technology’s life cycle.34

While there are high expectations that AI will contribute value to health systems and rapidly diffuse across clinical areas and health care systems, some suggest that HTA agencies have an important role to play in assessing the true impact of AI technologies and containing the hype associated with them.11 The first assessments will set the stage for how these technologies are assessed, including how HTA agencies address the likely limited data on clinical effectiveness and economic impact typically required in HTA.11 Moreover, HTA agencies’ final recommendations about adoption are likely to signal to industry which technologies are most likely to be seen as adding value to health care systems. The absence of clearly defined regulatory processes (i.e., appropriate for and specific to each continuous learning AI-enabled device) may create challenges for HTA agencies.11,31

Modified HTA frameworks to address issues related to AI have been proposed. For example, the National Institute for Health and Care Excellence (NICE) published its Evidence Standards Framework for Digital Health Technologies in April 2021,36 which, it explicitly stated, was not designed for use with continuous learning AI (what NICE terms “adaptive algorithms”).36 An updated draft version of the framework was posted for public comment in April 2022 and includes Standard 16, which is specific to digital health technologies (including continuous learning AI) whose performance is expected to change over time.37 It specifies that the company and the evaluator should have an agreement on the post-deployment reporting of changes in performance, including how frequently performance will be assessed, which data and performance measures will be used, and across which populations.37 A final updated version of the framework is to be published in June 2022.
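
Purely as an illustration of what such an agreement might capture in structured form (all field names and values below are invented, not a NICE template):

```python
# A hypothetical, structured summary of a post-deployment performance-reporting
# agreement of the kind the draft Standard 16 describes. All values invented.
reporting_agreement = {
    "reporting_frequency": "quarterly",            # how often performance is reported
    "data_sources": ["site EHR extracts"],         # which data will be used
    "performance_measures": ["sensitivity", "specificity"],
    "subpopulations": ["by age band", "by sex", "by care site"],
    "escalation_rule": "notify the evaluator if any measure falls below baseline",
}
```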

France’s Haute Autorité de Santé (HAS) released specific guidance for reimbursement submissions for medical devices that are, or incorporate, AI.38 It includes specific guidance for continuous learning AI on justifying the representativeness of the learning data and specifying the update frequency.38 Finland released the Digi-HTA framework for digital health care services, including AI, but did not provide specific guidance on the assessment of continuous learning AI.33 In 2021, South Korea released an additional Assessment Guideline for National Health Insurance Coverage Eligibility of Innovative Medical Technology; however, it covers only AI used in imaging.39 In the scholarly literature, Alami et al. (2020) have taken the EUnetHTA core model and expanded it to illustrate how the context in which AI is to be implemented can be drawn into the assessment across domains.34 Further, the potential for AI to act as a lever for health systems change34 has been articulated as highlighting the need for real-world evaluation studies.

The need for real-world evaluation studies for AI has motivated the development of new methods and frameworks for their conduct. The Translational Evaluation of Healthcare AI (TEHAI) framework draws on translational research concepts and proposes 3 domains for evaluation: capability, adoption, and utility; however, it does not explicitly address continuous learning AI.40 Likewise, the Evaluate Commercial AI Solutions in Radiology (ECLAIR) framework, designed to help stakeholders decide whether to purchase an AI solution for radiology, is focused on locked AI.41 Evaluation principles for autonomous AI (AI that provides a diagnosis or recommendation without physician oversight) were developed from bioethical principles by Abramoff and colleagues, but they do not address issues unique to continuous learning AI.42 While existing evaluation frameworks and principles for AI may be helpful starting points, work that specifically addresses the challenges of continuous learning AI remains needed.

Ethical, Legal, and Social Dimensions

The ethical, legal, and social dimensions of AI in health and health care are a flourishing area of inquiry. This section provides an overview of key ethical, legal, and social issues related to the regulation, assessment, and evaluation of continuous learning AI.5,43

At a high level, continuous learning AI appears to share much of the ethical, legal, and social landscape related to digital health and AI more broadly (for example, considerations related to privacy and security of data, challenges to transparency and explainability, impact on clinical decision-making and clinical practice, and the ability to inflate or reproduce systemic biases),43 while at the same time potentially magnifying some of these issues because of the continuous learning aspect. Some suggest that continuous learning AI requires a distinctive approach to ethical considerations.5,35,44 Similar to what is occurring in the regulatory sphere, an emerging approach is to engage in early and sustained consideration of ethical issues across the development of a continuous learning AI. Several scholars have pointed to the need to consider ethical and social issues relating to ML, including health disparities, across the pipeline from development to post-implementation.44,45 This approach means that ethical and social issues can be raised at each stage of product development (problem selection, data collection, outcome selection, algorithm development, and post-deployment)45 and would likely extend to the processes of regulation and HTA.

The need for appropriate regulation of new AI technologies, including continuous learning AI, has itself been identified as an ethical issue: it is not just a practical matter but an ethical one that jurisdictions develop policies and processes for regulation that keep pace with technological developments.5,43,46 Moreover, deciding who is responsible for when and how continuous learning AI is evaluated is itself a normative question.35

Challenges with algorithmic transparency and explainability are exacerbated in the case of continuous learning AI, as the algorithm learns and changes over time in ways that may not be foreseen or planned by the developer. Similarly, while accountability and decision-making may shift somewhat from the clinician to the technology in all cases of AI, this shift is even more salient with continuous learning AI, where the performance of the clinical tool can vary independently over time.43 In the case of continuous learning AI, responsibility and oversight may also shift from the manufacturer to the health system user, signifying the importance of local or regional governance of the AI (which learns using local data).43 This shift in accountability and responsibility also raises questions about assigning liability and creates complicated legal terrain, which remains underexplored.43,44,47

The potential for model drift and unintended consequences with continuous learning AI highlights the need for ongoing assessment and further emphasizes the importance of addressing where responsibility for errors lies.45 The ability of AI to perpetuate or replicate existing societal biases embedded in the datasets used to train AI (known as algorithmic bias)10 may be further exacerbated by continuous learning. This could happen if changes made to continuous learning AI are based on health care data that themselves reflect and contain bias due to systematic disparities or discrimination in health care access and delivery. This raises challenges for HTA in considering how to understand and assess the potential of these technologies to perpetuate harm and discrimination.

Final Remarks

The regulation, assessment, and evaluation of continuous learning AI is a rapidly changing area. This area of technology poses new challenges related to ensuring effectiveness and safety and assessing value and potential impact on health care systems. Continuous learning AI may have a cascading effect at national and international levels, including reshaping and shifting the role of regulators and HTA agencies. The literature on regulation, HTA, and the ethical, social, and legal issues relating to continuous learning AI is still growing; as a result, changes and developments in this area are likely to continue.

References

1.IBM Cloud Education. What is Artificial Intelligence. 2020: https://www.ibm.com/cloud/learn/what-is-artificial-intelligence. Accessed 9 March 2022.

2.The 10 Best Examples Of How AI Is Already Used In Our Everyday Life. 2019: https://www.forbes.com/sites/bernardmarr/2019/12/16/the-10-best-examples-of-how-ai-is-already-used-in-our-everyday-life/?sh=2c7c4ad21171. Accessed 8 March 2022.

3.Building a Learning Health System for Canadians: Report of the Artificial Intelligence for Health Task Force. 2020: https://cifar.ca/wp-content/uploads/2020/11/AI4Health-report-ENG-10-F.pdf. Accessed 8 March 2022.

4.Canada Health Infoway. Module 1: An Introduction to AI in Health Care. 2021: https://www.infoway-inforoute.ca/en/component/edocman/3983-module-1-an-introduction-to-ai-in-health-care/view-document. Accessed 8 March 2022.

5.Future Advocacy. Ethical, social, and political challenges of artificial intelligence in health. 2018: https://wellcome.org/sites/default/files/ai-in-health-ethical-social-political-challenges.pdf. Accessed 8 March 2022.

6.Health Canada. Regulatory challenges of AI products A pre-market perspective. 2019: https://www.cadth.ca/sites/default/files/symp-2019/presentations/april15-2019/A3-presentation-tdumouchel.pdf. Accessed 8 March 2022.

7.International Medical Device Regulators Forum. Machine Learning-enabled Medical Devices—A subset of Artificial Intelligence-enabled Medical Devices: Key Terms and Definitions. 2021: https://www.imdrf.org/sites/default/files/2021-10/Machine%20Learning-enabled%20Medical%20Devices%20-%20A%20subset%20of%20Artificial%20Intelligence-enabled%20Medical%20Devices%20-%20Key%20Terms%20and%20Definitions.pdf. Accessed 13 May 2022.

8.Lee CS, Lee AY. Clinical applications of continual learning machine learning. Lancet Digit Health. 2020;2(6):e279-e281. PubMed

9.Canadian Institute of Health Research. Best Brains Exchange: Introduction of Artificial Intelligence and Machine Learning in Medical Devices. 2019: https://cihr-irsc.gc.ca/e/51459.html. Accessed 8 March 2022.

10.Canada Health Infoway. Module 2 Understanding Key Risks of AI in Health Care. 2022: https://www.infoway-inforoute.ca/en/component/edocman/3984-module-2-understanding-key-risks-of-ai-in-health-care/view-document. Accessed 10 February 2022.

11.Bélisle-Pipon J-C, Couture V, Roy M-C, Ganache I, Goetghebeur M, Cohen IG. What Makes Artificial Intelligence Exceptional in Health Technology Assessment? Front Artif Intell. 2021;4:736697-736697. PubMed

12.Health Canada. Guidance Document: Software as a Medical Device (SaMD): Definition and Classification. 2019: https://www.canada.ca/en/health-canada/services/drugs-health-products/medical-devices/application-information/guidance-documents/software-medical-device-guidance-document.html. Accessed 8 March 2022.

13.Mason J, Morrison A, Visintini S. An Overview of Clinical Applications of Artificial Intelligence. Ottawa: CADTH; 2018: https://www.cadth.ca/sites/default/files/pdf/eh0070_overview_clinical_applications_of_AI.pdf. Accessed 8 March 2022.

14.International Coalition of Medicines Regulatory Authorities (ICMRA). Horizon Scanning Assessment Report – Artificial Intelligence. 2021: https://www.icmra.info/drupal/sites/default/files/2021-08/horizon_scanning_report_artificial_intelligence.pdf. Accessed 8 March 2022.

15.NHS and Global Digital Health Partnership. AI for healthcare: Creating an international approach together. 2020: https://www.nhsx.nhs.uk/media/documents/GDHP_Creating_an_international_approach_together.pdf. Accessed 8 March 2022.

16.Health Canada. Forward Regulatory Plan 2022-2024: Advanced Therapeutic Products Pathway for Adaptive Machine Learning-enabled Medical Devices. 2022: https://www.canada.ca/en/health-canada/corporate/about-health-canada/legislation-guidelines/acts-regulations/forward-regulatory-plan/plan/advanced-therapeutic-products-pathway-adaptive-machine-learning-enabled-medical-devices.html. Accessed 26 April 2022.

17.Health Canada. Regulatory innovation for health products: Enabling advanced therapeutic products. 2022: https://www.canada.ca/en/health-canada/corporate/about-health-canada/activities-responsibilities/strategies-initiatives/health-products-food-regulatory-modernization/advanced-therapeutic-products.html. Accessed 22 April 2022.

18.Health Canada. Regulating advanced therapeutic products. 2022: https://www.canada.ca/en/health-canada/services/drug-health-product-review-approval/regulating-advanced-therapeutic-products.html#a8. Accessed 22 April 2022.

19.Health Canada. Good Machine Learning Practice for Medical Device Development: Guiding Principles. 2021: https://www.canada.ca/en/health-canada/services/drugs-health-products/medical-devices/good-machine-learning-practice-medical-device-development.html. Accessed 8 March 2022.

20.U.S. Food and Drug Administration. Good Machine Learning Practice for Medical Device Development: Guiding Principles. 2021: https://www.fda.gov/medical-devices/software-medical-device-samd/good-machine-learning-practice-medical-device-development-guiding-principles. Accessed 8 March 2022.

21.U.S. Food and Drug Administration. Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) - Discussion Paper and Request for Feedback. 2019: https://www.fda.gov/media/122535/download. Accessed 8 March 2022.

22.U.S. Food and Drug Administration. Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan 2021: https://www.fda.gov/media/145022/download. Accessed 8 March 2022.

23.U.S. Food and Drug Administration. Digital Health Software Precertification (Pre-Cert) Program. 2021: https://www.fda.gov/medical-devices/digital-health-center-excellence/digital-health-software-precertification-pre-cert-program. Accessed 8 March 2022.

24.Nakano S. Policies to Promote Development of AI-Based Medical Devices in Japan. 2021: https://globalforum.diaglobal.org/issue/november-2021/policies-to-promote-development-of-ai-based-medical-devices-in-japan/. Accessed 22 April 2022.

25.Kusakabe T. Regulatory Updates on Medical Devices in Japan - Amendment of Pharmaceuticals and Medical Devices Act (PMD Act). 2021: https://www.imdrf.org/sites/default/files/docs/imdrf/final/meetings/imdrf-meet-210316-korea-webconference-japan.pdf. Accessed 22 April 2022.

26.European Commission. Proposal for a Regulation laying down harmonised rules on artificial intelligence. 2021: https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence. Accessed 8 March 2022.

27.Vokinger KN, Gasser U. Regulating AI in medicine in the United States and Europe. Nature Machine Intelligence. 2021;3(9):738-739. PubMed

28.Medicines & Healthcare products Regulatory Agency (UK). Good Machine Learning Practice for Medical Device Development: Guiding Principles. 2021: https://www.gov.uk/government/publications/good-machine-learning-practice-for-medical-device-development-guiding-principles/good-machine-learning-practice-for-medical-device-development-guiding-principles. Accessed 8 March 2022.

29.Medicines & Healthcare products Regulatory Agency (UK). Guidance: Software and AI as a Medical Device Change Programme. 2021: https://www.gov.uk/government/publications/software-and-ai-as-a-medical-device-change-programme/software-and-ai-as-a-medical-device-change-programme. Accessed 8 March 2022.

30.International Organization for Standardization. Standards by ISO/IEC JTC 1/SC 42 - Artificial intelligence. 2022: https://www.iso.org/committee/6794475/x/catalogue/p/1/u/0/w/0/d/0. Accessed 8 March 2022.

31.Vervoort D, Tam DY, Wijeysundera HC. Health technology assessment for cardiovascular digital health technologies and artificial intelligence: why is it different? Canadian Journal of Cardiology. 2021. PubMed

32.Love-Koh J, Peel A, Rejon-Parrilla JC, et al. The future of precision medicine: potential impacts for health technology assessment. Pharmacoeconomics. 2018;36(12):1439-1451. PubMed

33.Haverinen J, Keränen N, Falkenbach P, Maijala A, Kolehmainen T, Reponen J. Digi-HTA: health technology assessment framework for digital healthcare services. Finnish Journal of eHealth and eWelfare. 2019;11(4):326-341.

34.Alami H, Lehoux P, Auclair Y, et al. Artificial intelligence and health technology assessment: anticipating a new level of complexity. J Med Internet Res. 2020;22(7):e17707. PubMed

35.Gerhards H, Weber K, Bittner U, Fangerau H. Machine Learning Healthcare Applications (ML-HCAs) are no stand-alone systems but part of an ecosystem–A broader ethical and health technology assessment approach is needed. The American Journal of Bioethics. 2020;20(11):46-48. PubMed

36.NICE. Evidence standards framework for digital health technologies. 2021: https://www.nice.org.uk/corporate/ecd7/resources/evidence-standards-framework-for-digital-health-technologies-pdf-1124017457605. Accessed 8 March 2022.

37.NICE. Evidence standards framework (ESF) for digital health technologies update - consultation. 2022: https://www.nice.org.uk/about/what-we-do/our-programmes/evidence-standards-framework-for-digital-health-technologies/esf-consultation. Accessed 13 April 2022.

38.Haute Autorité de Sante. LPPR: Dossier submission to the Medical Device and Health Technology Evaluation Committee (CNEDiMTS). 2020: https://www.has-sante.fr/upload/docs/application/pdf/2020-10/guide_dm_vf_english_publi.pdf. Accessed 9 March 2022.

39.ISPOR. South Korean Government Released an Additional ‘Assessment Guideline for National Health Insurance (NHI) Coverage Eligibility of Innovative Medical Technology’. 2021: https://press.ispor.org/asia/index.php/2021/02/05/south-korean-government-released-an-additional-assessment-guideline-for-national-health-insurance-nhi-coverage-eligibility-of-innovative-medical-technology/. Accessed 8 March 2022.

40.Reddy S, Rogers W, Makinen V-P, et al. Evaluation framework to guide implementation of AI systems into healthcare settings. BMJ Health & Care Informatics. 2021;28(1). PubMed

41.Omoumi P, Ducarouge A, Tournier A, et al. To buy or not to buy—evaluating commercial AI solutions in radiology (the ECLAIR guidelines). European Radiology. 2021;31(6):3786-3796. PubMed

42.Abràmoff MD, Tobey D, Char DS. Lessons learned about autonomous AI: finding a safe, efficacious, and ethical path through the development process. American Journal of Ophthalmology. 2020;214:134-142. PubMed

43.World Health Organization. Ethics and governance of artificial intelligence for health. 2021: https://www.who.int/publications/i/item/9789240029200. Accessed 8 March 2022.

44.Chen IY, Pierson E, Rose S, Joshi S, Ferryman K, Ghassemi M. Ethical machine learning in healthcare. Annual Review of Biomedical Data Science. 2021;4:123-144. PubMed

45.Char DS, Abràmoff MD, Feudtner C. Identifying ethical considerations for machine learning healthcare applications. The American Journal of Bioethics. 2020;20(11):7-17. PubMed

46.Morley J, Machado CC, Burr C, et al. The ethics of AI in health care: a mapping review. Social Science & Medicine. 2020;260:113172. PubMed

47.Da Silva M, et al. AI & health care: a fusion of law & science an introduction to the issues. 2021: https://cifar.ca/wp-content/uploads/2021/03/210218-ai-and-health-care-law-and-science-v8-AODA.pdf. Accessed 8 March 2022.