
Current challenges of implementing artificial intelligence in medical imaging

  • Shier Nee Saw (corresponding author)
    Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Universiti Malaya, 50603 Kuala Lumpur, Malaysia
  • Kwan Hoong Ng
    Department of Biomedical Imaging, Universiti Malaya, 50603 Kuala Lumpur, Malaysia

    Department of Medical Imaging and Radiological Sciences, College of Health Sciences, Kaohsiung Medical University, Kaohsiung, Taiwan

      Highlights

      • Challenges: data governance, algorithm robustness, stakeholder consensus and legal liability.
      • The General Data Protection Regulation has been published to ensure high-quality data governance.
      • Model transparency, robustness and fairness are important in AI development to increase trust.
      • The WHO and FDA have published ethical AI regulatory frameworks to ensure the safety of AI systems.

      Abstract

      The idea of using artificial intelligence (AI) in medical practice has gained vast interest due to its potential to revolutionise healthcare systems. However, only a few AI algorithms are actually utilised, owing to uncertainties in the systems and a never-ending list of ethical and legal concerns. This paper provides an overview of current AI challenges in medical imaging, with the ultimate aim of fostering better and more effective communication among stakeholders to encourage AI technology development. We identify four main challenges in implementing AI in medical imaging, illustrated with the consequences and past events that followed when these problems were not mitigated. Among them is the creation of robust AI algorithms that are fair, trustworthy and transparent. Another is data governance: best practices in data sharing must be established to promote trust and protect patients’ privacy. Next, stakeholders such as the government, technology companies and hospital management should reach a consensus in creating trustworthy AI policies and regulatory frameworks, the fourth challenge, to support, encourage and spur innovation in digital AI healthcare technology. Lastly, we discuss the efforts of various organisations, such as the World Health Organisation (WHO), the American College of Radiology (ACR), the European Society of Radiology (ESR) and the Radiological Society of North America (RSNA), that are already actively pursuing ethical development of AI. The efforts of these stakeholders will eventually overcome the hurdles, and the deployment of AI-driven healthcare applications in clinical practice will become a reality, leading to better healthcare services and outcomes.


      Introduction

      There is a veritable surge of interest in the development of artificial intelligence (AI) in medical imaging to help detect diseases and provide a prognosis [
      • Avanzo M.
      • Porzio M.
      • Lorenzon L.
      • Milan L.
      • Sghedoni R.
      • Russo G.
      • et al.
      Artificial intelligence applications in medical imaging: A review of the medical physics research in Italy.
      ]. A subset of AI known as “deep learning”, which uses convolutional neural networks, has shown outstanding performance in clinical diagnosis, improving hospital workflow efficiency and completing many administrative tasks [
      • Davenport T.
      • Kalakota R.
      The potential for artificial intelligence in healthcare.
      ]. Healthcare information technology (IT) systems with AI technology can reduce the personnel burden in healthcare institutions, free up clinicians’ time to provide better care, and allocate resources more efficiently. Research has also focused on improving image reconstruction from sparse data, improving image resolution at reduced radiation dose and reducing image acquisition time using deep learning [
      • Harvey H.
      • Topol E.J.
      More than meets the AI: refining image acquisition and resolution.
      ,
      • Wang S.
      • Cao G.
      • Wang Y.
      • Liao S.
      • Wang Q.
      • Shi J.
      • et al.
      Review and Prospect: Artificial Intelligence in Advanced Medical Imaging. Frontiers.
      ]. For example, one study showed that using a deep learning image reconstruction algorithm, an image reconstructed with 36.2% of the radiation dose had better quality than an image reconstructed with 50.0% of the dose [
      • Lee J.E.
      • Choi S.-Y.
      • Hwang J.A.
      • Lim S.
      • Lee M.H.
      • Yi B.H.
      • et al.
      The potential for reduced radiation dose from deep learning-based CT image reconstruction: A comparison with filtered back projection and hybrid iterative reconstruction using a phantom.
      ]. The public is enthralled by the capabilities of AI, with some models even outperforming clinicians/radiologists. However, a recent systematic review of 62 AI models developed to detect COVID-19 from X-ray and CT images showed that, unfortunately, none could be translated into clinical use due to methodological flaws [
      • Roberts M.
      • Driggs D.
      • Thorpe M.
      • Gilbey J.
      • Yeung M.
      • Ursprung S.
      • et al.
      Common pitfalls and recommendations for using machine learning to detect and prognosticate for COVID-19 using chest radiographs and CT scans.
      ].
      Despite the enormous potential shown by AI in research, its deployment in the real world is still limited. Transforming healthcare with AI is not a trivial task; it requires various stakeholders to come together and resolve ethical concerns before it reaches clinical settings. There has been active discussion about the ethical adoption of AI technology in healthcare applications [
      • Char D.S.
      • Shah N.H.
      • Magnus D.
      Implementing Machine Learning in Health Care - Addressing Ethical Challenges.
      ], including protocols for managing patient data during AI development, the integration of AI technology [

      Mudgal KS, Das N. The ethical adoption of artificial intelligence in radiology. BJR|Open. 2019;2:20190020.

      ], and legal and regulatory issues [
      • Schönberger D.
      Artificial intelligence in healthcare: a critical analysis of the legal and ethical implications.
      ]. Previous papers discuss ethical issues at a high level; in this paper, we aim to provide readers a more comprehensive understanding by deliberating on the pitfalls of current state-of-the-art AI algorithms in medical imaging, cybersecurity breaches in healthcare, various stakeholders’ views on AI adoption in healthcare, and guidelines published by various organisations to support AI technology development. We hope that, through this reading, all stakeholders, such as computer scientists, radiologists, policymakers and developers, gain knowledge about AI in medical imaging, thus fostering better and more effective communication for AI technology development. Implementing AI in clinical settings is a long-haul process: data acquisition, AI model design and development, model validation supported with clinical evidence, approval from regulatory bodies, and only then clinical implementation. Challenges exist at every stage. We have divided the paper into four sections, each representing a challenge that must be overcome before AI enters clinical settings. Fig. 1 shows the four challenges. The last section describes current guidelines supporting ethical AI development.
      Fig. 1. Challenges to overcome before AI systems may have clinical application in medical imaging.

      Data governance

      Current approaches in AI are notoriously data-hungry, requiring medical data and metadata to develop robust algorithms. Many AI studies adopt a retrospective approach to leverage large existing datasets, while a prospective approach is adopted to validate the developed model. Retrospective studies are typically conducted under a consent waiver, whereas patient consent is required for prospective studies. In 2014, it was suggested that an open medical database be implemented to facilitate AI technology development; however, the suggestion met strong rejection in Europe due to privacy concerns [
      • Forcier M.B.
      • Gallois H.
      • Mullan S.
      • Joly Y.
      Integrating artificial intelligence into health care through data access: can the GDPR act as a beacon for policymakers?.
      ]. The Royal Free NHS Foundation Trust was accused of breaching the UK Data Protection Act 1998 when it provided the data of 1.6 million patients to Google for its DeepMind project without consent [

      Royal Free - Google DeepMind trial failed to comply with data protection law. Information Commissioner's Office. Available at: https://ico.org.uk/about-the-ico/news-and-events/news-and-blogs/2017/07/royal-free-google-deepmind-trial-failed-to-comply-with-data-protection-law. Accessed July 3, 2021.

      ]. A study conducted in the UK showed that the public is more comfortable sharing their health data with the government and universities but reluctant to share them with commercial companies [
      • Aggarwal R.
      • Farag S.
      • Martin G.
      • Ashrafian H.
      • Darzi A.
      Patient Perceptions on Data Sharing and Applying Artificial Intelligence to Health Care Data: Cross-sectional Survey.
      ]. However, if the data are anonymised, participants are more willing to share them [
      • Jutzi T.B.
      • Krieghoff-Henning E.I.
      • Holland-Letz T.
      • Utikal J.S.
      • Hauschild A.
      • Schadendorf D.
      • et al.
      Artificial Intelligence in Skin Cancer Diagnostics: The Patients' Perspective.
      ]. The public is anxious because they do not know how their data are being used. Improperly sharing data with certain parties may cause major harm and result in unintended consequences. For example, insurance companies may misuse healthcare data to manipulate coverage, while pharma and biotech companies are known to pay to collect individuals’ genetic details [

      Véliz C. Wellcome Trust–Funded Monographs and Book Chapters. Medical privacy and big data: A further reason in favour of public universal healthcare coverage. In: de Campos TC, Herring J, Phillips AM, editors. Philosophical Foundations of Medical Law. Oxford (UK): Oxford University Press © Carissa Véliz 2019.; 2019.

      ].
      Cybersecurity in hospitals is weaker than in other industries because these institutions tend to invest very little in network security [
      • Kruse C.S.
      • Frederick B.
      • Jacobson T.
      • Monticone D.K.
      Cybersecurity in healthcare: A systematic review of modern threats and trends.
      ]. The incidence of cyberattacks in healthcare has risen by 67% in Europe [

      Holloway S. Irish cyberattack provides wake-up call for European imaging IT. Available at: https://www.auntminnieeurope.com/index.aspx?sec=sup&sub=pac&pag=dis&ItemID=620205. Accessed May 25, 2021: AuntMinnieEurope.

      ] and 55% in the US [

      Vaidya A. Report: Healthcare data breaches spiked 55% in 2020. Available at: https://medcitynews.com/2021/02/report-healthcare-data-breaches-spiked-55-in-2020/. Accessed February 17, 2021: MedCityNews.

      ] in 2020 compared with the year before. In 2018, Singapore’s healthcare system was hacked and the prime minister’s private details were compromised [

      BBCNews. Singapore personal data hack hits 1.5m, health authority says. Available at https://www.bbc.com/news/world-asia-44900507. Accessed at July 20, 2018.

      ]. Networks in Malaysia and Ireland have also been targeted by hackers [

      Holloway S. Irish cyberattack provides wake-up call for European imaging IT. Available at: https://www.auntminnieeurope.com/index.aspx?sec=sup&sub=pac&pag=dis&ItemID=620205. Accessed May 25, 2021: AuntMinnieEurope.

      ,

      McMillan MEaR. Cyberattacks Cost Hospitals Millions During Covid-19. Available at: https://www.wsj.com/articles/cyberattacks-cost-hospitals-millions-during-covid-19-11614346713. Accessed at February 26, 2021: The Wall Street Journal; 2021.

      ]. Recently, the UMass Memorial Health system was hit by ransomware and more than 200,000 patients’ information was stolen [

      Massachusetts Health Network Hacked; Patient Info Exposed. . SECURITYWEEK; 2021.

      ]. Such attacks are not only an invasion of patients’ privacy but also jeopardise hospital operations, placing patients’ health at risk. For instance, the WannaCry attack on the United Kingdom’s National Health Service delayed critical treatments for many patients because their medical records had been encrypted by ransomware [
      • Millard W.B.
      Where bits and bytes meet flesh and blood: Hospital responses to malware attacks.
      ]. Surprisingly, it was reported that human error is the main cause of data compromise, as demonstrated by the incident at the University Hospitals of Geneva (HUG) [

      Wagner S. The medical data of hundreds of HUG patients accessible on the internet. Available at: https://www.ictjournal.ch/news/2019-10-04/les-donnees-medicales-dune-centaines-de-patients-des-hug-accessibles-sur-internet. Accessed October 4, 2019: ICTjournal; 2019.

      ]. Conducting training to raise awareness among all healthcare users about basic digital-hygiene practices could reduce data privacy breach incidents [
      • Argaw S.T.
      • Troncoso-Pastoriza J.R.
      • Lacey D.
      • Florin M.-V.
      • Calcavecchia F.
      • Anderson D.
      • et al.
      Cybersecurity of Hospitals: discussing the challenges and working towards mitigating the risks.
      ].
      Data governance covers how data are stored and secured, how long they are kept, how their quality is maintained, who has access to the database, and what policies and protocols safeguard patients’ confidentiality. Regulations such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) have recently been introduced to regulate AI by ensuring that data are used appropriately. It is well known that the quality of images and labelling is vital in the development and validation of AI algorithms. The role of medical physicists is thus significant in ensuring that equipment and image acquisition protocols are appropriate, so that acquired images comply with standards for downstream AI model development. Moreover, radiologists’ and physicians’ expertise is necessary for accurate diagnosis and labelling. A rigorous data governance protocol is therefore crucial not only to protect patients’ privacy but also to guarantee the data quality needed to develop transparent, reliable and fair AI systems. Given this, we may also have to accept that the cost of AI development and implementation will be high, as laborious back-end work is involved.
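As a concrete illustration of one data-governance measure discussed above, the sketch below pseudonymises patient identifiers with a salted one-way hash before data are shared for AI development. This is a minimal, hypothetical example (the field names, record layout and salt handling are our assumptions, not a prescribed protocol); real de-identification of medical images must also cover DICOM metadata and comply with regulations such as the GDPR.

```python
import hashlib

def pseudonymise_record(record, salt):
    """Replace the patient identifier with a salted one-way hash and drop
    direct identifiers. `record` is a dict of patient metadata; `salt` is a
    secret kept separate from the shared dataset, so hashes cannot be
    reversed by simply re-hashing known patient IDs."""
    cleaned = dict(record)
    # One-way pseudonym: the same patient always maps to the same token,
    # so records stay linkable without exposing the real identity.
    token = hashlib.sha256((salt + record["patient_id"]).encode()).hexdigest()[:16]
    cleaned["patient_id"] = token
    # Direct identifiers are dropped entirely rather than hashed.
    for field in ("name", "address", "date_of_birth"):
        cleaned.pop(field, None)
    return cleaned

# Hypothetical record for illustration only.
record = {"patient_id": "UM-000123", "name": "Jane Doe",
          "date_of_birth": "1980-05-01", "diagnosis": "pneumonia"}
shared = pseudonymise_record(record, salt="keep-this-secret")
```

Because the salt is held back, a recipient of `shared` cannot recover `UM-000123` by brute-forcing plausible IDs, yet repeated exports of the same patient remain linkable for longitudinal studies.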

      Algorithm robustness

      Deep learning, a subset of AI algorithms, is widely used in medical imaging. A deep learning model consists of many layers, each performing mathematical operations such as convolutions, max-pooling and normalisation to extract features from data (radiology images/electronic medical records). Through these operations, useful features that aid prediction can be identified automatically. A comprehensive review of state-of-the-art deep learning algorithms in medical imaging applications can be found in [
      • Barragán-Montero A.
      • Javaid U.
      • Valdés G.
      • Nguyen D.
      • Desbordes P.
      • Macq B.
      • et al.
      Artificial intelligence and machine learning for medical imaging: A technology review.
      ,
      • Castiglioni I.
      • Rundo L.
      • Codari M.
      • Di Leo G.
      • Salvatore C.
      • Interlenghi M.
      • et al.
      AI applications to medical images: From machine learning to deep learning.
      ,
      • Manco L.
      • Maffei N.
      • Strolin S.
      • Vichi S.
      • Bottazzi L.
      • Strigari L.
      Basic of machine learning and deep learning in imaging for medical physicists.
      ]. Although many deep learning models have been proposed for disease screening/diagnosis using X-ray, CT or magnetic resonance images, very few have been adopted in clinical settings.
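To make the layer operations described above concrete, the sketch below applies a single convolution followed by 2×2 max-pooling to a tiny synthetic "image" using NumPy. It is an illustrative toy under our own assumptions (hand-picked kernel, 4×4 input), not a deep learning framework: real models stack many such layers and learn the kernel weights from data.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (stride 1, no padding). As in deep learning
    frameworks, this is technically cross-correlation: the kernel is not
    flipped before being slid over the image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max-pooling: keep the strongest response per window."""
    h, w = fmap.shape
    return np.array([[fmap[i:i + size, j:j + size].max()
                      for j in range(0, w - size + 1, size)]
                     for i in range(0, h - size + 1, size)])

# Toy image: dark left half, bright right half (a vertical edge).
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
edge_kernel = np.array([[-1.0, 1.0]])   # responds to left-to-right intensity jumps
features = max_pool(conv2d(image, edge_kernel))
```

The pooled feature map responds strongly wherever the edge appears, regardless of its exact pixel position: this combination of local filtering and pooling is what lets stacked layers build increasingly abstract features automatically.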
      Previous studies have shown that deep learning algorithms are vulnerable to distortions and may produce erroneous predictions [
      • Ma X.
      • Niu Y.
      • Gu L.
      • Wang Y.
      • Zhao Y.
      • Bailey J.
      • et al.
      Understanding adversarial attacks on deep learning based medical image analysis systems.
      ]. Distortion occurs when inputs (images/audio/text/other) are altered such that the ‘modified’ inputs provide misleading information to the deep learning algorithm and lead to inaccurate decisions [
      • Ma X.
      • Niu Y.
      • Gu L.
      • Wang Y.
      • Zhao Y.
      • Bailey J.
      • et al.
      Understanding adversarial attacks on deep learning based medical image analysis systems.
      ,

      Thys S, Van Ranst W, Goedemé T. Fooling automated surveillance cameras: adversarial patches to attack person detection. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops2019. p. 0-.

      ]. When random noise or patches are added to images, the AI model may predict the opposite class [
      • Akhtar N.
      • Mian A.
      Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey.
      ]. This scenario is known in computer science as an adversarial attack. One famous example is Microsoft’s Tay, a Twitter chatbot built for conversational purposes. When netizens flooded Tay with offensive words, Tay’s tone turned and it began generating offensive statements; Microsoft shut Tay down within 16 hours of launch [
      • Hiter S.
      What Is Adversarial Machine Learning?.
      ]. If such an adversarial attack occurred in the healthcare system, it could lead to catastrophic consequences that threaten patients’ lives. Furthermore, adversarial attacks could also make the fabrication of false medical billings, claims or other deceptive scenarios possible [
      • Finlayson S.G.
      • Bowers J.D.
      • Ito J.
      • Zittrain J.L.
      • Beam A.L.
      • Kohane I.S.
      Adversarial attacks on medical machine learning.
      ].
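The intuition behind such adversarial attacks can be shown with a deliberately simple model. In the sketch below (a toy linear classifier with made-up weights, following the fast-gradient-sign idea rather than any attack from the cited studies), a small perturbation aligned against the model's weights flips the predicted class even though the input barely changes.

```python
import numpy as np

# Toy linear classifier: predict class 1 if w . x > 0, else class 0.
w = np.array([1.0, -0.5])

def predict(x):
    return int(np.dot(w, x) > 0)

x = np.array([0.3, 0.2])          # original input, classified as class 1

# Fast-gradient-sign-style perturbation: for a linear model, the gradient
# of the score w.r.t. the input is simply w, so stepping against sign(w)
# lowers the score as fast as possible within a per-element budget eps.
eps = 0.3
x_adv = x - eps * np.sign(w)

# predict(x) == 1 but predict(x_adv) == 0: an eps-bounded change in every
# element is enough to flip the decision.
```

Deep networks are not linear, but the same mechanism applies: gradients of the loss with respect to the input pixels point to perturbations that are tiny per pixel yet drastically change the output, which is why medical AI systems need robustness testing before deployment.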
      According to a report by the Fair Isaac Corporation (FICO), a data analytics company, 65% of companies are unable to explain how their AI algorithms reach predictions or decisions [
      • Fico
      New Report from Corinium and FICO Finds that Lack of Urgency Around Responsible AI Use is Putting Most Companies at Risk.
      ]. Most deep learning algorithms have deep architectures whose features are expressed in high-dimensional, highly non-linear spaces, making their inner mechanisms difficult to dissect and understand [
      • Price W.N.
      Medical Malpractice and Black-Box Medicine.
      ]. In medicine, clinicians can provide evidence and explain how they arrive at a certain diagnosis. Unfortunately, such reasoning is lacking in deep learning algorithms. A recent study showed that a deep learning algorithm might learn not the genuine medical pathology but other confounding features, such as text in the image [
      • Zech J.R.
      • Badgeley M.A.
      • Liu M.
      • Costa A.B.
      • Titano J.J.
      • Oermann E.K.
      Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: A cross-sectional study.
      ]. The non-explainable nature of deep learning algorithms deters clinicians from adopting AI systems. Explainable AI (XAI), an emerging field in AI medicine, attempts to decipher the reasoning and logic behind the algorithms [
      • Gunning D.
      • Stefik M.
      • Choi J.
      • Miller T.
      • Stumpf S.
      • Yang G.Z.
      XAI-Explainable artificial intelligence. Science.
      ]. A common XAI technique is the attention/saliency map, which shows the regions deemed important for a prediction [
      • Lee H.
      • Yune S.
      • Mansouri M.
      • Kim M.
      • Tajmir S.H.
      • Guerrier C.E.
      • et al.
      An explainable deep-learning algorithm for the detection of acute intracranial haemorrhage from small datasets.
      ]. However, attention maps sometimes highlight illogical regions. For example, in COVID-19 radiographs, attention maps have highlighted regions outside the lungs as important for predicting COVID-19 [
      • DeGrave A.J.
      • Janizek J.D.
      • Lee S.-I.
      AI for radiographic COVID-19 detection selects shortcuts over signal.
      ].
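A saliency map of the kind described above can be sketched with finite differences: perturb each input element slightly and record how much the model's score changes. This toy example (NumPy, with a made-up linear scoring function standing in for a trained network, so the "map" is just a 4-element vector) highlights the inputs the model actually relies on; in real XAI work the gradient is obtained by backpropagation over the full image.

```python
import numpy as np

def model_score(x):
    """Stand-in for a trained network's output score (hypothetical).
    This toy model only 'looks at' x[1] and x[3]."""
    weights = np.array([0.0, 2.0, 0.0, -1.0])
    return float(np.dot(weights, x))

def saliency(x, eps=1e-4):
    """Finite-difference saliency: |d score / d x_i| for each input element.
    Large values mark the inputs that drive the prediction."""
    grad = np.zeros_like(x)
    for i in range(len(x)):
        bumped = x.copy()
        bumped[i] += eps
        grad[i] = (model_score(bumped) - model_score(x)) / eps
    return np.abs(grad)

x = np.array([0.5, 0.5, 0.5, 0.5])
sal = saliency(x)   # near-zero for ignored inputs, large for influential ones
```

If a map like this assigned high saliency to inputs a clinician knows are irrelevant (the analogue of regions outside the lungs), that is evidence the model is exploiting a shortcut rather than the pathology.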
      Real-world data intrinsically exhibit bias, and a model trained on such data may inevitably be biased towards certain population groups. This was shown in a commercial algorithm widely used to guide healthcare decisions in the US, which was found to discriminate against Black patients [
      • Obermeyer Z.
      • Powers B.
      • Vogeli C.
      • Mullainathan S.
      Dissecting racial bias in an algorithm used to manage the health of populations.
      ]. In fact, according to a global survey, model fairness is the most ubiquitous principle in developing AI systems [
      • Jobin A.
      • Ienca M.
      • Vayena E.
      The global landscape of AI ethics guidelines.
      ] to avoid unfair outcomes based on race and socioeconomic class [
      • Venkatraman V.
      Bias in the machines.
      ,
      • Parikh R.B.
      • Teeple S.
      • Navathe A.S.
      Addressing bias in artificial intelligence in health care.
      ,
      • Zou J.
      • Schiebinger L.
      AI can be sexist and racist - it's time to make it fair.
      ]. Furthermore, AI models were found to have poor generalisability [
      • Zech J.R.
      • Badgeley M.A.
      • Liu M.
      • Costa A.B.
      • Titano J.J.
      • Oermann E.K.
      Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: A cross-sectional study.
      ]. When input data from different devices/institutions/regions were used, model performance dropped significantly. An AI model developed by IBM Corp to recommend cancer treatments did not perform well in low- and middle-income regions, such as India and Southeast Asia, due to variation in medical practice between developing and developed nations [

      Lohr S. What Ever Happened to IBM’s Watson? : Available at: https://www.nytimes.com/2021/07/16/technology/what-happened-ibm-watson.html. Accessed July 17, 2021.

      ].
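One practical way to surface the kind of bias described above is a simple subgroup audit: compute the same performance metric separately for each population group and compare. The sketch below (pure Python, with made-up labels and group assignments chosen purely for illustration) reports per-group accuracy; a large gap between groups is a red flag that warrants investigation before deployment.

```python
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Return {group: accuracy} so disparities across groups become visible."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for yt, yp, g in zip(y_true, y_pred, groups):
        total[g] += 1
        correct[g] += int(yt == yp)
    return {g: correct[g] / total[g] for g in total}

# Made-up example: the model performs noticeably worse on group "B".
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
per_group = accuracy_by_group(y_true, y_pred, groups)
```

The same pattern extends to sensitivity, specificity or calibration per group; auditing only the aggregate metric would hide exactly the disparity this example exposes.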
      During AI development, data often require pre-processing before model training. Generally, the data are split into training and testing datasets, with the testing data used to evaluate model performance. It is critical to provide clear descriptions of the data source, data characteristics and data handling process to ensure sufficient transparency. Eighty-five percent of international AI challenges did not provide data source information [
      • Maier-Hein L.
      • Eisenmann M.
      • Reinke A.
      • Onogur S.
      • Stankovic M.
      • Scholz P.
      • et al.
      Why rankings of biomedical image analysis competitions should be interpreted with care.
      ], and improper splitting of the data can inflate reported performance by up to 41% [
      • Samala R.K.
      • Chan H.-P.
      • Hadjiiski L.
      • Koneru S.
      Hazards of data leakage in machine learning: a study on classification of breast cancer using deep neural networks.
      ].
      ]. Data source reporting is thus crucial to ensure model transparency and to enable evaluation of fairness and generalisability, which may assist in understanding the potential benefits and harms of AI systems. Recently, the Radiological Society of North America (RSNA) released the “Checklist for Artificial Intelligence in Medical Imaging” (CLAIM) to promote standardised reporting. This checklist is intended as a “best practice” for authors and reviewers in future journal publications to improve the quality of AI papers in medical imaging [

      Mongan J, Moy L, Kahn CE. Checklist for Artificial Intelligence in Medical Imaging (CLAIM): A Guide for Authors and Reviewers. Radiology: Artificial Intelligence. 2020;2:e200029.

      ]. Clear model transparency will also empower clinicians to assess the reliability of the results produced by the algorithms.
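The data-leakage hazard cited above often comes from preprocessing before splitting: if normalisation statistics (or feature selection) are computed on the full dataset, information about the test cases leaks into training and inflates reported performance. The sketch below contrasts the two orders on a toy one-dimensional dataset (synthetic numbers, chosen only for illustration); the safe version fits the scaler on the training split alone.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=100)   # toy 1-D "dataset"

# Split FIRST: the test set must play no part in any fitted statistic.
train, test = data[:80], data[80:]

# Leaky (wrong): statistics computed on all 100 samples, test set included,
# so the "held-out" evaluation is subtly contaminated.
leaky_mean, leaky_std = data.mean(), data.std()

# Safe (right): statistics computed on the 80 training samples only,
# then applied unchanged to the held-out test samples.
train_mean, train_std = train.mean(), train.std()
train_scaled = (train - train_mean) / train_std
test_scaled = (test - train_mean) / train_std
```

With one scalar mean the contamination is mild, but the same mistake applied to feature selection or patient-level splits (e.g. slices from one patient in both sets) produces exactly the over-optimistic performance the cited study quantifies.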

      Stakeholder consensus

      Understanding the concerns and needs of different stakeholders is important for driving value creation in the healthcare system and for society. Stakeholders include those who deal directly with patients, such as radiologists, clinicians and healthcare workers, and those who work “behind the scenes”, like hospital managers and policymakers (lawyers/ethicists/computer scientists/government).
      Sun and Medaglia published a report on the opinions of three main stakeholder groups in China (government policymakers, hospital management and IT firm managers) to elucidate the challenges of AI adoption in the healthcare system [
      • Sun T.Q.
      • Medaglia R.
      Mapping the challenges of Artificial Intelligence in the public sector: Evidence from public healthcare.
      ]. Their findings indicate that different stakeholders perceive AI challenges differently: IT firm managers worry about data being misused for other commercial purposes, while government policymakers and hospital management are concerned about the trustworthiness of AI algorithms [
      • Sun T.Q.
      • Medaglia R.
      Mapping the challenges of Artificial Intelligence in the public sector: Evidence from public healthcare.
      ]. In Europe, a short survey by Güngör showed that business entities gave positive feedback on AI deployment, while society perceived AI as bringing more risk than value [
      • Güngör H.
      Creating Value with Artificial Intelligence: A Multi-stakeholder Perspective.
      ]. Puaschunder surveyed a group of stakeholders from academia, business, healthcare, the public and the legal profession to investigate the use of AI, robotics and big data in healthcare, showing that all parties viewed the introduction of AI and big data positively and were neutral towards policy recommendations [

      Puaschunder JM. Stakeholder perspectives on Artificial Intelligence (AI), robotics and big data in healthcare: An empirical study. Stakeholder Perspectives on Artificial Intelligence (AI), Robotics and Big Data in Healthcare: An Empirical Study (December 3, 2019). 2019.

      ].
      Failures in implementing AI systems in hospitals have largely been due to a lack of socio-technical consideration rather than technical factors [
      • Lluch M.
      Healthcare professionals' organisational barriers to health information technologies-a literature review.
      ]. Clinicians are concerned about the loss of autonomy and of the capacity to make critical decisions, due to the lack of a legal and policy framework on data privacy and accountability [
      • Ford E.W.
      • Menachemi N.
      • Peterson L.T.
      • Huerta T.R.
      Resistance is futile: but it is slowing the pace of EHR adoption nonetheless.
      ]. Surveys showed that more than 50% of radiologists have limited knowledge of AI [
      • Collado-Mesa F.
      • Alvarez E.
      • Arheart K.
      The Role of Artificial Intelligence in Diagnostic Radiology: A Survey at a Single Radiology Residency Training Program.
      ,
      • Waymel Q.
      • Badr S.
      • Demondion X.
      • Cotten A.
      • Jacques T.
      Impact of the rise of artificial intelligence in radiology: What do radiologists think?.
      ]. Similar findings were reported by Yang et al., whose survey showed that most stakeholders expressed that education on AI is needed to enhance collaborative work in AI implementation [
      • Yang L.
      • Ene I.C.
      • Arabi Belaghi R.
      • Koff D.
      • Stein N.
      • Santaguida P.
      Stakeholders’ perspectives on the future of artificial intelligence in radiology: a scoping review.
      ]. Furthermore, some studies reported that healthcare workers are hesitant to accept new technologies for fear that altering their routines may increase their workload [
      • Neville R.
      • Marsden W.
      • McCowan C.
      • Pagliari C.
      • Mullen H.
      • Fannin A.
      A survey of GP attitudes to and experiences of email consultations.
      ,
      • Car J.
      • Sheikh A.
      Email consultations in health care: 1—scope and effectiveness.
      ,
      • Car J.
      • Sheikh A.
      Email consultations in health care: 2—acceptability and safe application.
      ,
      • Coppola F.
      • Faggioni L.
      • Regge D.
      • Giovagnoni A.
      • Golfieri R.
      • Bibbolino C.
      • et al.
      Artificial intelligence: radiologists’ expectations and opinions gleaned from a nationwide online survey.
      ]. At the same time, medical physicists express strong interest in improving their AI skills [
      • Diaz O.
      • Guidi G.
      • Ivashchenko O.
      • Colgan N.
      • Zanca F.
      Artificial intelligence in the medical physics community: An international survey.
      ]. The European Federation of Organisations for Medical Physics (EFOMP) recently published a paper discussing how medical physicists can equip themselves to become familiar with AI technology [
      • Kortesniemi M.
      • Tsapaki V.
      • Trianni A.
      • Russo P.
      • Maas A.
      • Källman H.-E.
      • et al.
      The European Federation of Organisations for Medical Physics (EFOMP) White Paper: Big data and deep learning in medical imaging and in relation to medical physics profession.
      ], and efforts have been made to introduce AI into the medical physics curriculum [
      • Ng K.H.
      • Wong J.H.D.
      A clarion call to introduce artificial intelligence (AI) in postgraduate medical physics curriculum.
      ].
      Technology is advancing faster than legal and regulatory frameworks can keep up. However, to drive AI technology into clinical settings, efforts from all stakeholders are indispensable to develop a holistic approach to designing proper AI regulations and policies that safeguard individual empowerment and equity. Recently, Lebcir et al. began gathering perspectives from different organisations on the factors affecting AI adoption in healthcare; this work is still ongoing [

      Lebcir R, Hill T, Atun R, Cubric M. Stakeholders' views on the organisational factors affecting application of artificial intelligence in healthcare: a scoping review protocol. BMJ Open. 2021;11:e044074-e.

      ]. The findings from their review may eventually help to identify organisational factors that may improve AI implementation in healthcare settings.

      Legal liability

      Traditionally, under tort law, patients may sue for monetary compensation if they are harmed or injured due to the negligence of medical practitioners. However, medical device companies are protected from such action by the “learned intermediary doctrine”. As such, the onus is on clinicians to fully understand the potential risks entailed by the use of medical devices, because they are best placed to weigh the risks and benefits of using those devices. Failure to properly assess the use of a medical device may become a basis for patients to take legal action for medical negligence [
      • Sullivan H.R.
      • Schweikart S.J.
      Are Current Tort Liability Doctrines Adequate for Addressing Injury Caused by AI?.
      ]. Note that clinicians who follow the accepted standard of care are unlikely to be held liable for patient outcomes.
      The potential of AI to revolutionise the healthcare system is enormous, and centuries-old legal doctrines and regulations must keep pace with this technological advancement. Given the non-explainability and non-transparency of AI models, it is difficult to identify where a fault originated when an algorithm makes erroneous decisions that adversely affect patients’ health. When patients file lawsuits, legal questions such as “Who will be responsible when there is medical malpractice due to defective AI technology?” have been raised.

      Current guidelines for ethical AI development

      Many organisations have published guidelines to support the ethical development of AI in healthcare settings. The World Health Organisation (WHO) has issued its first guidance on the use of AI in healthcare, entitled “Ethics and Governance of Artificial Intelligence for Health” [

      WHO. Ethics and governance of artificial intelligence for health: WHO guidance. Available at: https://www.who.int/publications/i/item/9789240029200. Accessed January 18, 2021.

      ]. The United States Food and Drug Administration (FDA) has likewise published guidelines entitled “Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) – Discussion Paper and Request for Feedback” to support the development of AI algorithms as medical devices [

      FDA. Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) - Discussion Paper and Request for Feedback. Available at: https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device. Accessed at Dec 01, 2021.

      ]. Medical societies in Europe and North America (the American College of Radiology, the European Society of Radiology, the Radiological Society of North America, the Society for Imaging Informatics in Medicine, the European Society of Medical Imaging Informatics, the Canadian Association of Radiologists and the American Association of Physicists in Medicine) have published a joint statement on AI ethics in radiology, in which they highlighted that AI should be used to help patients for the common good and not be driven by commercial purposes [
      • Geis J.R.
      • Brady A.
      • Wu C.C.
      • Spencer J.
      • Ranschaert E.
      • Jaremko J.L.
      • et al.
      Ethics of artificial intelligence in radiology: summary of the joint European and North American multisociety statement.
      ]. Australia [
      • Dawson D.
      • Schleiger E.
      • Horton J.
      • McLaughlin J.
      • Robinson C.
      • Quezada G.
      • et al.
      Artificial Intelligence: Australia’s Ethics Framework.
      ], Canada [

      Davidson K. CIFAR and the French National Centre for Scientific Research (CNRS) establish CAD $1M research agreement. Available at: https://cifar.ca/cifarnews/2021/05/12/cifar-and-the-french-national-centre-for-scientific-research-cnrs-establish-cad-1m-research-agreement/. Accessed May 12, 2021.

      ], France [

      Commission E. France AI Strategy Report. European Commission; 2018. Available at: https://knowledge4policy.ec.europa.eu/ai-watch/france-ai-strategy-report_en. Accessed June 4, 2021.

      ], and the UK [

      Centre for Data Ethics and Innovation Consultation. Available at: https://www.gov.uk/government/consultations/consultation-on-the-centre-for-data-ethics-and-innovation/centre-for-data-ethics-and-innovation-consultation. Accessed June 5, 2021.

      ] have also engaged in the development of a national AI ethics framework.
      The European Union has published a guideline for the ethical development of AI and summarised seven requirements [

      James V. AI systems should be accountable, explainable, and unbiased, says EU. Available at: https://www.theverge.com/2019/4/8/18300149/eu-artificial-intelligence-ai-ethical-guidelines-recommendations. Accessed Apr 8, 2019.

      ]. The requirements are:
      • (i)
        Human oversight, in which humans retain autonomy over the system and can intervene to monitor the decisions it makes;
      • (ii)
        A robust and reliable algorithm, in which the AI system can withstand adversarial attacks;
      • (iii)
        Data privacy and good governance;
      • (iv)
        Model transparency, in which the data and algorithms used to create an AI system are transparent, and the decisions made by the system are justifiable;
      • (v)
        Fair output and results, in which all decisions made by an AI system must be unbiased and reflect diverse populations;
      • (vi)
        Environmental sustainability, in which outputs take ecological and societal well-being into account to enhance positive social change; and
      • (vii)
        Model accountability, in which AI systems should be auditable.

      Conclusion

      We have seen many breathtaking AI applications that show encouraging results in diagnosis, prognosis and the facilitation of clinical workflow. However, we must be cognisant that AI in medical imaging has to overcome challenges, including but not limited to those outlined in this paper, before it can be implemented in clinical practice. With the contributions of researchers, novel, robust, secure and safe AI innovations will continue to emerge.
      Medical physicists apply the knowledge of physics in medicine and have expertise in computing. They have contributed to computerisation and to advancing digital techniques in imaging and therapy. In AI, they too could contribute to the design and development of relevant and appropriate AI models, helping to ensure data quality, model fairness and model transparency, and could advise and collaborate with clinicians on AI. Furthermore, the proposal to include AI in the medical physics curriculum will better prepare the next generation of medical physicists to design and develop clinically relevant AI models that create impact.
      Various organisations, such as the WHO and FDA, are also making proactive efforts to develop trustworthy AI policies and regulatory frameworks that foster trust in, and adoption of, AI in society. There are already 29 FDA-approved AI-based medical technologies [
      • Benjamens S.
      • Dhunnoo P.
      • Meskó B.
      The state of artificial intelligence-based FDA-approved medical devices and algorithms: an online database.
      ]. Undoubtedly, we will see more AI inventions in the near future, and when AI algorithms are applied in clinics, there will be a paradigm shift in healthcare management. AI-driven healthcare technology will certainly empower radiologists and clinicians to be more efficient, and it has the potential to improve human well-being.

      Funding

      This research was supported by Dr. Ranjeet Bhagwan Singh Grant (GA017, PI: Saw).

      Declaration of Competing Interest

      The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

      References

        • Avanzo M.
        • Porzio M.
        • Lorenzon L.
        • Milan L.
        • Sghedoni R.
        • Russo G.
        • et al.
        Artificial intelligence applications in medical imaging: A review of the medical physics research in Italy.
        Physica Medica: European Journal of Medical Physics. 2021; 83: 221-241
        • Davenport T.
        • Kalakota R.
        The potential for artificial intelligence in healthcare.
        Future Healthc J. 2019; 6: 94-98
        • Harvey H.
        • Topol E.J.
        More than meets the AI: refining image acquisition and resolution.
        Lancet. 2020; 396: 1479
        • Wang S.
        • Cao G.
        • Wang Y.
        • Liao S.
        • Wang Q.
        • Shi J.
        • et al.
        Review and Prospect: Artificial Intelligence in Advanced Medical Imaging.
        Frontiers in Radiology. 2021; 1
        • Lee J.E.
        • Choi S.-Y.
        • Hwang J.A.
        • Lim S.
        • Lee M.H.
        • Yi B.H.
        • et al.
        The potential for reduced radiation dose from deep learning-based CT image reconstruction: A comparison with filtered back projection and hybrid iterative reconstruction using a phantom.
        Medicine (Baltimore). 2021; 100: e25814
        • Roberts M.
        • Driggs D.
        • Thorpe M.
        • Gilbey J.
        • Yeung M.
        • Ursprung S.
        • et al.
        Common pitfalls and recommendations for using machine learning to detect and prognosticate for COVID-19 using chest radiographs and CT scans.
        Nature Machine Intelligence. 2021; 3: 199-217
        • Char D.S.
        • Shah N.H.
        • Magnus D.
        Implementing Machine Learning in Health Care - Addressing Ethical Challenges.
        N Engl J Med. 2018; 378: 981-983
      1. Mudgal KS, Das N. The ethical adoption of artificial intelligence in radiology. BJR|Open. 2019;2:20190020.

        • Schönberger D.
        Artificial intelligence in healthcare: a critical analysis of the legal and ethical implications.
        International Journal of Law and Information Technology. 2019; 27: 171-203
        • Forcier M.B.
        • Gallois H.
        • Mullan S.
        • Joly Y.
        Integrating artificial intelligence into health care through data access: can the GDPR act as a beacon for policymakers?.
        J Law Biosci. 2019; 6: 317-335
      2. Royal Free - Google DeepMind trial failed to comply with data protection law. Information Commissioner's Office. Available at: https://ico.org.uk/about-the-ico/news-and-events/news-and-blogs/2017/07/royal-free-google-deepmind-trial-failed-to-comply-with-data-protection-law. Accessed July 3, 2021.

        • Aggarwal R.
        • Farag S.
        • Martin G.
        • Ashrafian H.
        • Darzi A.
        Patient Perceptions on Data Sharing and Applying Artificial Intelligence to Health Care Data: Cross-sectional Survey.
        Journal of medical Internet research. 2021; 23: e26162
        • Jutzi T.B.
        • Krieghoff-Henning E.I.
        • Holland-Letz T.
        • Utikal J.S.
        • Hauschild A.
        • Schadendorf D.
        • et al.
        Artificial Intelligence in Skin Cancer Diagnostics: The Patients' Perspective.
        Front Med (Lausanne). 2020; 7: 233
      3. Véliz C. Medical privacy and big data: A further reason in favour of public universal healthcare coverage. In: de Campos TC, Herring J, Phillips AM, editors. Philosophical Foundations of Medical Law. Oxford (UK): Oxford University Press; 2019.

        • Kruse C.S.
        • Frederick B.
        • Jacobson T.
        • Monticone D.K.
        Cybersecurity in healthcare: A systematic review of modern threats and trends.
        Technol Health Care. 2017; 25: 1-10
      4. Holloway S. Irish cyberattack provides wake-up call for European imaging IT. Available at: https://www.auntminnieeurope.com/index.aspx?sec=sup&sub=pac&pag=dis&ItemID=620205. Accessed May 25, 2021: AuntMinnieEurope.

      5. Vaidya A. Report: Healthcare data breaches spiked 55% in 2020. Available at: https://medcitynews.com/2021/02/report-healthcare-data-breaches-spiked-55-in-2020/. Accessed February 17, 2021: MedCityNews.

      6. BBCNews. Singapore personal data hack hits 1.5m, health authority says. Available at https://www.bbc.com/news/world-asia-44900507. Accessed at July 20, 2018.

      7. Evans M, McMillan R. Cyberattacks Cost Hospitals Millions During Covid-19. The Wall Street Journal; 2021. Available at: https://www.wsj.com/articles/cyberattacks-cost-hospitals-millions-during-covid-19-11614346713. Accessed February 26, 2021.

      8. Massachusetts Health Network Hacked; Patient Info Exposed. SecurityWeek; 2021.

        • Millard W.B.
        Where bits and bytes meet flesh and blood: Hospital responses to malware attacks.
        Ann Emerg Med. 2017; 70: A17-A21
      9. Wagner S. The medical data of hundreds of HUG patients accessible on the internet. ICTjournal; 2019. Available at: https://www.ictjournal.ch/news/2019-10-04/les-donnees-medicales-dune-centaines-de-patients-des-hug-accessibles-sur-internet. Accessed October 4, 2019.

        • Argaw S.T.
        • Troncoso-Pastoriza J.R.
        • Lacey D.
        • Florin M.-V.
        • Calcavecchia F.
        • Anderson D.
        • et al.
        Cybersecurity of Hospitals: discussing the challenges and working towards mitigating the risks.
        BMC Med Inf Decis Making. 2020; 20: 146
        • Barragán-Montero A.
        • Javaid U.
        • Valdés G.
        • Nguyen D.
        • Desbordes P.
        • Macq B.
        • et al.
        Artificial intelligence and machine learning for medical imaging: A technology review.
        Physica Medica: European Journal of Medical Physics. 2021; 83: 242-256
        • Castiglioni I.
        • Rundo L.
        • Codari M.
        • Di Leo G.
        • Salvatore C.
        • Interlenghi M.
        • et al.
        AI applications to medical images: From machine learning to deep learning.
        Physica Medica: European Journal of Medical Physics. 2021; 83: 9-24
        • Manco L.
        • Maffei N.
        • Strolin S.
        • Vichi S.
        • Bottazzi L.
        • Strigari L.
        Basic of machine learning and deep learning in imaging for medical physicists.
        Physica Medica: European Journal of Medical Physics. 2021; 83: 194-205
        • Ma X.
        • Niu Y.
        • Gu L.
        • Wang Y.
        • Zhao Y.
        • Bailey J.
        • et al.
        Understanding adversarial attacks on deep learning based medical image analysis systems.
        Pattern Recogn. 2021; 110: 107332
      10. Thys S, Van Ranst W, Goedemé T. Fooling automated surveillance cameras: adversarial patches to attack person detection. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops; 2019.

        • Akhtar N.
        • Mian A.
        Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey.
        IEEE Access. 2018; 6: 14410-14430
        • Hiter S.
        What Is Adversarial Machine Learning?.
        CIOInsight. 2021
        • Finlayson S.G.
        • Bowers J.D.
        • Ito J.
        • Zittrain J.L.
        • Beam A.L.
        • Kohane I.S.
        Adversarial attacks on medical machine learning.
        Science. 2019; 363: 1287-1289
        • Fico
        New Report from Corinium and FICO Finds that Lack of Urgency Around Responsible AI Use is Putting Most Companies at Risk.
        CISION PR Newswire. 2021
        • Price W.N.
        Medical Malpractice and Black-Box Medicine.
        in: Vayena E. Lynch H.F. Cohen I.G. Gasser U. Big Data, Health Law, and Bioethics. Cambridge University Press, Cambridge2018: 295-306
        • Zech J.R.
        • Badgeley M.A.
        • Liu M.
        • Costa A.B.
        • Titano J.J.
        • Oermann E.K.
        Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: A cross-sectional study.
        PLoS Med. 2018; 15: e1002683
        • Gunning D.
        • Stefik M.
        • Choi J.
        • Miller T.
        • Stumpf S.
        • Yang G.Z.
        XAI-Explainable artificial intelligence.
        Science Robotics. 2019; 4
        • Lee H.
        • Yune S.
        • Mansouri M.
        • Kim M.
        • Tajmir S.H.
        • Guerrier C.E.
        • et al.
        An explainable deep-learning algorithm for the detection of acute intracranial haemorrhage from small datasets.
        Nat Biomed Eng. 2019; 3: 173-182
        • DeGrave A.J.
        • Janizek J.D.
        • Lee S.-I.
        AI for radiographic COVID-19 detection selects shortcuts over signal.
        Nature Machine Intelligence. 2021; 3: 610-619
        • Obermeyer Z.
        • Powers B.
        • Vogeli C.
        • Mullainathan S.
        Dissecting racial bias in an algorithm used to manage the health of populations.
        Science. 2019; 366: 447-453
        • Jobin A.
        • Ienca M.
        • Vayena E.
        The global landscape of AI ethics guidelines.
        Nature Machine Intelligence. 2019; 1: 389-399
        • Venkatraman V.
        Bias in the machines.
        New Sci. 2020; 247: 30
        • Parikh R.B.
        • Teeple S.
        • Navathe A.S.
        Addressing bias in artificial intelligence in health care.
        JAMA. 2019; 322: 2377-2378
        • Zou J.
        • Schiebinger L.
        AI can be sexist and racist - it's time to make it fair.
        Nature. 2018; 559: 324-326
      11. Lohr S. What Ever Happened to IBM’s Watson? Available at: https://www.nytimes.com/2021/07/16/technology/what-happened-ibm-watson.html. Accessed July 17, 2021.

        • Maier-Hein L.
        • Eisenmann M.
        • Reinke A.
        • Onogur S.
        • Stankovic M.
        • Scholz P.
        • et al.
        Why rankings of biomedical image analysis competitions should be interpreted with care.
        Nat Commun. 2018; 9: 5217
        • Samala R.K.
        • Chan H.-P.
        • Hadjiiski L.
        • Koneru S.
        Hazards of data leakage in machine learning: a study on classification of breast cancer using deep neural networks.
        in: Medical Imaging 2020: Computer-Aided Diagnosis: International Society for Optics and Photonics. 2020: 1131416
      12. Mongan J, Moy L, Kahn CE. Checklist for Artificial Intelligence in Medical Imaging (CLAIM): A Guide for Authors and Reviewers. Radiology: Artificial Intelligence. 2020;2:e200029.

        • Sun T.Q.
        • Medaglia R.
        Mapping the challenges of Artificial Intelligence in the public sector: Evidence from public healthcare.
        Government Information Quarterly. 2019; 36: 368-383
        • Güngör H.
        Creating Value with Artificial Intelligence: A Multi-stakeholder Perspective.
        Journal of Creating Value. 2020; 6: 72-85
      13. Puaschunder JM. Stakeholder perspectives on Artificial Intelligence (AI), robotics and big data in healthcare: An empirical study. Stakeholder Perspectives on Artificial Intelligence (AI), Robotics and Big Data in Healthcare: An Empirical Study (December 3, 2019). 2019.

        • Lluch M.
        Healthcare professionals' organisational barriers to health information technologies-a literature review.
        Int J Med Inf. 2011; 80: 849-862
        • Ford E.W.
        • Menachemi N.
        • Peterson L.T.
        • Huerta T.R.
        Resistance is futile: but it is slowing the pace of EHR adoption nonetheless.
        J Am Med Inform Assoc. 2009; 16: 274-281
        • Collado-Mesa F.
        • Alvarez E.
        • Arheart K.
        The Role of Artificial Intelligence in Diagnostic Radiology: A Survey at a Single Radiology Residency Training Program.
        Journal of the American College of Radiology. 2018; 15: 1753-1757
        • Waymel Q.
        • Badr S.
        • Demondion X.
        • Cotten A.
        • Jacques T.
        Impact of the rise of artificial intelligence in radiology: What do radiologists think?.
        Diagn Interventional Imaging. 2019; 100: 327-336
        • Yang L.
        • Ene I.C.
        • Arabi Belaghi R.
        • Koff D.
        • Stein N.
        • Santaguida P.
        Stakeholders’ perspectives on the future of artificial intelligence in radiology: a scoping review.
        Eur Radiol. 2021;
        • Neville R.
        • Marsden W.
        • McCowan C.
        • Pagliari C.
        • Mullen H.
        • Fannin A.
        A survey of GP attitudes to and experiences of email consultations.
        Journal of Innovation in Health Informatics. 2004; 12: 201-205
        • Car J.
        • Sheikh A.
        Email consultations in health care: 1—scope and effectiveness.
        BMJ. 2004; 329: 435-438
        • Car J.
        • Sheikh A.
        Email consultations in health care: 2—acceptability and safe application.
        BMJ. 2004; 329: 439-442
        • Coppola F.
        • Faggioni L.
        • Regge D.
        • Giovagnoni A.
        • Golfieri R.
        • Bibbolino C.
        • et al.
        Artificial intelligence: radiologists’ expectations and opinions gleaned from a nationwide online survey.
        Radiol Med (Torino). 2021; 126: 63-71
        • Diaz O.
        • Guidi G.
        • Ivashchenko O.
        • Colgan N.
        • Zanca F.
        Artificial intelligence in the medical physics community: An international survey.
        Physica Medica: European Journal of Medical Physics. 2021; 81: 141-146
        • Kortesniemi M.
        • Tsapaki V.
        • Trianni A.
        • Russo P.
        • Maas A.
        • Källman H.-E.
        • et al.
        The European Federation of Organisations for Medical Physics (EFOMP) White Paper: Big data and deep learning in medical imaging and in relation to medical physics profession.
        Physica Medica: European Journal of Medical Physics. 2018; 56: 90-93
        • Ng K.H.
        • Wong J.H.D.
        A clarion call to introduce artificial intelligence (AI) in postgraduate medical physics curriculum.
        Phys Eng Sci Med. 2022;
      14. Lebcir R, Hill T, Atun R, Cubric M. Stakeholders' views on the organisational factors affecting application of artificial intelligence in healthcare: a scoping review protocol. BMJ Open. 2021;11:e044074.

        • Sullivan H.R.
        • Schweikart S.J.
        Are Current Tort Liability Doctrines Adequate for Addressing Injury Caused by AI?.
        AMA J Ethics. 2019; 21: E160-6
      15. WHO. Ethics and governance of artificial intelligence for health: WHO guidance. Available at: https://www.who.int/publications/i/item/9789240029200. Accessed January 18, 2021.

      16. FDA. Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) - Discussion Paper and Request for Feedback. Available at: https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device. Accessed at Dec 01, 2021.

        • Geis J.R.
        • Brady A.
        • Wu C.C.
        • Spencer J.
        • Ranschaert E.
        • Jaremko J.L.
        • et al.
        Ethics of artificial intelligence in radiology: summary of the joint European and North American multisociety statement.
        Insights Imaging. 2019; 10: 101
        • Dawson D.
        • Schleiger E.
        • Horton J.
        • McLaughlin J.
        • Robinson C.
        • Quezada G.
        • et al.
        Artificial Intelligence: Australia’s Ethics Framework.
        Commonwealth Scientific and Industrial Research Organisation. 2019;
      17. Davidson K. CIFAR and the French National Centre for Scientific Research (CNRS) establish CAD $1M research agreement. Available at: https://cifar.ca/cifarnews/2021/05/12/cifar-and-the-french-national-centre-for-scientific-research-cnrs-establish-cad-1m-research-agreement/. Accessed May 12, 2021.

      18. Commission E. France AI Strategy Report. European Commission; 2018. Available at: https://knowledge4policy.ec.europa.eu/ai-watch/france-ai-strategy-report_en. Accessed June 4, 2021.

      19. Centre for Data Ethics and Innovation Consultation. Available at: https://www.gov.uk/government/consultations/consultation-on-the-centre-for-data-ethics-and-innovation/centre-for-data-ethics-and-innovation-consultation. Accessed June 5, 2021.

      20. James V. AI systems should be accountable, explainable, and unbiased, says EU. Available at: https://www.theverge.com/2019/4/8/18300149/eu-artificial-intelligence-ai-ethical-guidelines-recommendations. Accessed Apr 8, 2019.

        • Benjamens S.
        • Dhunnoo P.
        • Meskó B.
        The state of artificial intelligence-based FDA-approved medical devices and algorithms: an online database.
        npj Digital Med. 2020; 3: 118