Topic Overview: AIDA


Committee on Artificial Intelligence in a Digital Age (AIDA)

Recent advances in AI systems in medicine and healthcare present extraordinary opportunities in many areas of social interest, together with significant questions and drawbacks, calling for close consideration of their implementation. What stance and/or steps should the EU take on near-future applications of AI in this particular sector?

Chairperson: Nida Abraityte (LT)

INTRODUCTION 

Devices with Artificial Intelligence (AI)-based software are becoming increasingly embedded in healthcare and are poised to transform lives on a remarkable scale: receiving and processing more data helps both physicians and patients make better decisions, leading to better health outcomes. At the same time, access to data as private and personal as health information is a complex and sensitive matter that inevitably exposes regulatory gaps and legal uncertainties. If poorly regulated, it could erode public trust in AI, infringe privacy and data protection laws, and cause discrimination.

The healthcare sector is already one of the most heavily regulated, from doctors requiring licenses to equipment standards and rigorous clinical trials for new drugs. Problematic use of health information and bias built into algorithms should therefore be treated with the same caution and held to the same standards. Regulating AI within healthcare will undoubtedly increase the burden of legal compliance, but it will also provide much-needed clarity and give AI developers the confidence they need to innovate and leverage AI in the healthcare sector to its full extent.

KEY TERMS

  • Artificial intelligence (AI) is defined as the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages. AI technologies may potentially optimise the performance of devices continuously and autonomously, evolving in real time and adapting to their data input.
  • AI in healthcare refers to AI applications used to detect disease, monitor its progression, model pharmacokinetics, and optimise treatment regimens, among other tasks. It is usually based on the processing of vast amounts of biometric data through deep-learning algorithms. Deep learning is a type of machine learning that imitates the way humans gain certain types of knowledge: whereas many traditional machine-learning algorithms map inputs to outputs in a single, shallow step, deep-learning algorithms stack layers in a hierarchy of increasing complexity and abstraction (see the sketch after this list).
  • Software as a Medical Device (SaMD) is defined by the International Medical Device Regulators Forum (IMDRF) as “software intended to be used for performing functions without being part of a hardware medical device”.
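To make the “hierarchy of increasing complexity and abstraction” concrete, below is a minimal sketch of a deep network's forward pass in Python. It is purely illustrative: the layer sizes, random weights and the framing as a patient “risk score” are hypothetical, not any real medical model.

```python
# A minimal sketch of a deep network's forward pass: each stacked layer
# transforms the previous layer's output, building increasingly abstract
# features. Sizes, weights and the "risk score" framing are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def dense(x, n_out, relu=True):
    """One fully connected layer with random weights; ReLU unless final."""
    w = rng.normal(size=(x.shape[0], n_out))
    z = x @ w
    return np.maximum(0.0, z) if relu else z

x = rng.normal(size=8)               # e.g. eight biometric measurements
h1 = dense(x, 16)                    # low-level feature combinations
h2 = dense(h1, 8)                    # more abstract intermediate features
logit = dense(h2, 1, relu=False)     # final linear layer
risk = 1.0 / (1.0 + np.exp(-logit))  # sigmoid maps the output to a 0-1 score
print(risk[0])
```

A real system would learn the weights from labelled patient data rather than drawing them at random, but the stacked structure, each layer feeding the next, is the defining feature of deep learning.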

MAIN ACTORS

  • Biotech-infotech start-ups are public or private entities, led by scientists and engineers, that create AI-based technology. They merge the competences of biotechnologists with those of data and computer scientists to create machine-learning-based services or products.
  • Companies and healthcare institutions are the main adopters and beneficiaries of the newest technologies. However, just like producers, users of AI products are responsible for ensuring compliance with the fundamental values and rights of their patients and clients.
  • National governments are made up of decision-makers who determine the conditions under which AI start-ups are launched and operated in each country, and who set the regulations that companies, healthcare institutions and other entities have to comply with in order to adopt AI-based technologies.
  • Citizens, namely patients who may be subject to AI services applied in healthcare, have a right to know to what extent their data is used and stored, and what implications the outcomes of health-information analysis have on their day-to-day lives. They can steer the direction of the field by voting on policies concerning AI and advocating for their rights.
  • The European Commission, in this field primarily through the Directorate-General for Communications Networks, Content and Technology, proposes new EU laws and policies, monitors their implementation, and sets the framework under which other actors integrate ethical rules from the very first stages of development through to the implementation of AI-operated devices.

MEASURES IN PLACE

The Commission kicked off its AI rule-making in 2018 with the publication of a European Strategy on AI, which committed €1.5 billion to research. Then in 2019, the High-Level Expert Group on AI set out ethical guidelines for trustworthy AI. This was followed in 2020 by the White Paper on AI, a regulatory framework outlining the approaches Europe needs to take to become a global leader in AI innovation and its applications. In April 2021, the European Commission published the Artificial Intelligence Act (AIA), a proposed legal framework on AI that aims to accelerate and align AI policy priorities and investments across Europe. It should also allow Europe to develop human-centric, sustainable, secure, inclusive and trustworthy AI, while boosting technological development and respecting privacy laws. Under this Proposal for a Regulation harmonising rules on Artificial Intelligence, healthcare AI applications would generally fall into the high-risk category and would need to fulfil the following criteria to achieve regulatory approval: adequate risk mitigation systems, high-quality datasets feeding the system, logging of activity to ensure traceability of results, transparency for compliance assessment by authorities, understandability to the user, appropriate human oversight, robustness, security and accuracy.

Under the proposed regulation, companies from within or outside the bloc could be fined up to EUR 30,000,000 or up to 6% of total worldwide annual turnover, whichever is higher, for infringements related to non-compliance. To indicate conformity with the proposed regulation before hitting the market, high-risk AI systems would need to obtain a specific CE marking after a conformity assessment procedure, carried out either by a notified body designated under the proposed regulation or by the manufacturer itself on the basis of an EU declaration of conformity. However, apart from regulating and constraining, the EU wants to promote the development and uptake of AI in healthcare; the European Commission is therefore calling for more Research and Development (R&D), including AI “excellence and testing centres”.
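For a sense of how that fine ceiling scales with company size, here is a small illustrative calculation in Python; the turnover figures are hypothetical, and this deliberately simplifies the proposal's tiered fine structure down to the single highest ceiling.

```python
# Illustrative only: the upper bound of a fine for the most serious
# infringements under the proposed AIA is EUR 30 million or 6% of total
# worldwide annual turnover, whichever is higher. Turnovers are hypothetical.
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    return max(30_000_000.0, 0.06 * worldwide_annual_turnover_eur)

print(f"EUR {max_fine_eur(2_000_000_000):,.0f}")  # EUR 2 bn turnover -> EUR 120,000,000
print(f"EUR {max_fine_eur(100_000_000):,.0f}")    # EUR 100 m turnover -> EUR 30,000,000 floor
```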

To ensure that the regulations can adapt to emerging applications, the proposal would empower the Commission to expand the list of high-risk areas without going through a legislative process. The AIA proposal follows the ordinary legislative procedure and will now be debated in parallel in the European Parliament (EP) and the Council well into 2022. The AIA will apply to providers of AI systems (e.g. biotech-infotech start-ups) as well as to users of such AI systems (e.g. clinics providing analysis of biometric data). Moreover, the rules carve out an exception allowing authorities to use biometric data when fighting serious crime, for example applying facial recognition technology to CCTV footage or searching genetic databases to find sex offenders.

The AIA also proposes creating a European Artificial Intelligence Board, comprising one representative per EU country, the EU’s data protection authority, and a European Commission representative. The board would supervise the law’s application and share best practices.

KEY CONFLICTS

According to a 2020 survey by McKinsey, fewer than half of companies are aware of and compliant with the new regulations, which is alarming considering the instances in which uncontrolled AI has gone awry, part of the reason why Europe’s General Data Protection Regulation (GDPR) exists. Technological advancement and active risk mitigation therefore have to be pursued simultaneously. It is argued that, to ensure a legal framework that is innovation-friendly, future-proof and resilient to disruption, national competent authorities of the Member States ought to establish AI regulatory sandboxing schemes. “Sandboxes” allow innovators to trial products, services and business models in a safe space, confirming their compliance with existing regulation before implementation in the wider sector; such schemes would facilitate the development and testing of innovative AI systems under strict regulatory oversight before they are put into service.

Bias occurs when unrepresentative training data is given to an AI algorithm to help it learn, and it can have especially detrimental outcomes: the data may be skewed towards a particular group, or omit certain demographic or socio-economic groups. However, at the moment there are no official regulatory guidelines on the data used to train AI in healthcare. It is therefore argued that a far more transparent process is needed, similar to clinical trials, in which manufacturers engage in full and frank disclosure of the attributes of their training data, rather than merely being asked to keep data “relevant, representative, free of errors and complete”, as the act puts it, because data never represents reality perfectly. A simple representation check of the kind sketched below illustrates what such disclosure might start from.
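The following Python sketch is illustrative, not a regulatory tool: it reports how each demographic group is represented in a training set compared with the population the system is meant to serve. The dataset, the age bands and the reference shares are all hypothetical, as is the 80% under-representation threshold.

```python
# A minimal sketch of a training-data disclosure: compare each group's share
# of the data to a reference share for the target population. All figures
# below are hypothetical.
from collections import Counter

def representation_report(groups, reference_shares):
    """Flag groups whose share of the data falls well below the reference."""
    counts = Counter(groups)
    total = len(groups)
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        flag = "UNDER-REPRESENTED" if observed < 0.8 * expected else "ok"
        print(f"{group:>6}: {observed:6.1%} of data vs {expected:6.1%} expected  {flag}")

# Hypothetical patient records labelled by age band.
training_groups = ["18-40"] * 700 + ["41-65"] * 250 + ["65+"] * 50
representation_report(training_groups, {"18-40": 0.35, "41-65": 0.40, "65+": 0.25})
```

Run on this toy dataset, the report flags the 41-65 and 65+ bands as under-represented, exactly the kind of attribute a clinical-trial-style disclosure could make visible before a system is deployed.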

Another problem in the field is that salaries in AI have become astronomical elsewhere in recent years, so that only large firms, not start-ups, are able to retain talent, which impedes knowledge transfer and the training of new AI researchers in Europe. The challenge for the EU, which is home to only six of the top 100 AI start-ups worldwide, will be to develop adaptable regulations that facilitate rather than impede positive innovation and the uptake of AI. Policymakers will need to enact appropriate safeguards without stifling the innovation that will enable a myriad of future public benefits.

FOOD FOR THOUGHT

Even as the EU attempts to catch up with the development of AI, its regulations still have gaps and open questions to keep in mind moving forward. It is crucial to get everyone on board with the most recent regulations, both the producers and the users of AI in healthcare. It is also important to consider that divergent legal frameworks regulating AI and data in healthcare are barriers to realising the potential of AI. MedTech Europe has calculated that clearing those barriers could save 400,000 lives a year and free up labour equal to the working time of 500,000 healthcare professionals, so removing them would make the bloc’s manoeuvres in the AI field far more efficient. Although the EU is not home to the largest AI companies worldwide, it has the potential to lead in setting the standard for trustworthy AI for the rest of the world, as it did with the GDPR.

  • While holding its AI creations and applications to the highest standard, how can Europe, apart from investing more money into the sector, promote the development and up-scaling of AI in the region?
  • How can the EU ensure that Member States share healthcare data to catalyse the development of AI-based healthcare solutions while upholding data protection?
  • How can the EU make sure that by the end of 2022, when the latest regulations are expected to come into force, all stakeholders will be aware of the changes and ready to implement them in their operations?
  • What should the EC do to ensure more transparency about the data AI is trained on, so as to guarantee fairness and the absence of discrimination?

LINKS FOR FURTHER RESEARCH