LIBE 2

MOTION FOR A RESOLUTION BY THE COMMITTEE ON CIVIL LIBERTIES, JUSTICE AND HOME AFFAIRS 2

A major concern with machine learning algorithms is that they might perpetuate bias already present in the data used to train them. As such, to what extent should the EU intervene to ensure the fair and equal treatment of all its citizens, whilst considering the complex and often ambiguous nature of many of these algorithms?

Submitted by: Dure Afroz (NL), Amélie Beenhakkers (NL), Laura Dominicy (LU), Danielle Kok (NL), Juliëtte Kok (NL), Áron van der Meer (NL), Finn Russell (NL), Hayat Solmaz (TR), Raphael Tsiamis (Chairperson, GR)

The European Youth Parliament,

  1. Convinced of the connection between AI bias1 and existing social prejudices due to the skewed representation of socio-economic groups in AI training data,
  2. Noting the direct correlation between the limited representation of minorities in AI training data and the lack of diversity in the field of AI development and applications,
  3. Emphasising the significant social repercussions of biased AI systems on the inclusivity and proper functioning of public services such as healthcare and public administration,
  4. Alarmed by the limited human oversight of the output of automated decision-making by actors developing and implementing AI,
  5. Recognising the lack of transparency in the output of automated decision-making as a result of the complexity of AI algorithms,
  6. Aware of the policy challenge of regulating AI technologies due to the rapid development and intricacy of machine-learning processes,
  7. Taking into account Member States’ desire to facilitate AI innovation through a preference for soft law2 measures on ethical AI over direct regulation,
  8. Noting with concern that the trade secrecy policies among AI companies regarding their algorithms result in:
    1. lack of transparency,
    2. unwillingness to cooperate,
  9. Concerned by the limited action of companies developing and applying AI regarding its ethical implementation and potential discrimination,
  10. Disappointed by the limited investment in ethical AI systems due to the perception of socially responsible practices as unprofitable;


Bias in AI Development

  1. Encourages companies developing AI technologies to actively combat bias in algorithms by:
    1. working towards a more equal representation of socio-economic groups in data sets used in the development of AI algorithms,
    2. testing the implementation of AI products on a more diverse range of test groups before releasing them on the market,
    3. setting up departments specifically tasked with monitoring the ethical implementation of their AI algorithms and researching potential misrepresentation of minorities,
    4. providing data to surveys on AI bias by international organisations researching ethical AI;
  2. Suggests that Member States promote the engagement of minority students in the development and implementation of AI through:
    1. scholarships funded by Erasmus+3,
    2. public awareness campaigns about the need for diversity in AI,
    3. educational programmes in schools developed by National Ministries; 
  3. Designates the Directorate-General for Communications Networks, Content and Technology (DG CONNECT4) to expand the responsibilities of the High-Level Expert Group on AI (AI HLEG5) to include:
    1. auditing and approving datasets used in artificial intelligence projects under a non-disclosure agreement6,
    2. supporting European AI companies in creating more diverse and representative datasets for AI projects,
    3. ensuring the compliance of AI projects with the Ethics Guidelines for Trustworthy AI7,
    4. proposing policy updates to the European AI strategy on a biannual basis,
    5. assisting European AI companies in detecting vulnerabilities and bias in their AI systems;

Supervision of AI

  1. Instructs Member States to reduce their AI dependence in areas identified as ‘high-risk’8 by requiring human supervision of any automated decision-making;
  2. Encourages Member States to continue supporting AI innovation in areas not covered by the shared EU competences through the national promotion of the Ethics Guidelines for Trustworthy AI;

Ethical responsibility of companies

  1. Recommends that the Directorate-General for Justice and Consumers9 promote socially responsible company policies for ethical AI by:
    1. subsidising European companies developing AI in accordance with the Ethics Guidelines for Trustworthy AI,
    2. funding workplace training on ethical AI,
    3. issuing a European certification label for companies adhering to principles of ethical AI;
  2. Asks DG CONNECT to increase the transparency and reduce the vulnerabilities of AI systems by funding research into explainable artificial intelligence10.

Footnotes:

  1. AI bias or algorithm bias is a phenomenon that occurs when an algorithm produces systemically prejudiced results due to erroneous assumptions or data biases in the machine learning process.
  2. The term ‘soft law’ refers to non-binding legal instruments, such as subsidies, which aim to incentivise stakeholders towards a set goal instead of regulating their actions through specific measures.
  3. Erasmus+ is the EU’s programme for supporting growth, employment, and social inclusion in Europe through a focus on education and training for young people.
  4. The Directorate-General for Communications Networks, Content and Technology (DG CONNECT) is the department of the Commission responsible for developing and implementing policies to make Europe fit for the digital age.
  5. The High-Level Expert Group on AI (AI HLEG) is a group of 52 AI experts working under DG CONNECT to advise on the implementation of the European AI Strategy.
  6. A non-disclosure agreement is a legal contract that restricts the dissemination of specified confidential information shared between two or more parties.
  7. The document ‘Ethics Guidelines for Trustworthy AI’ was prepared by the AI HLEG to improve the quality, safety, and trustworthiness of AI.
  8. The ‘high-risk’ areas in AI are healthcare, transport, police, recruitment, and the legal system, as considered in the position paper submitted in 2020 by 14 Member States as a response to legislative initiatives on AI by the European Commission.
  9. The Directorate-General for Justice and Consumers is the department of the European Commission responsible for European policy on justice and consumer rights.
  10. Explainable AI refers to methods and techniques in the application of AI that enable human experts to understand the output of automated decision-making.

Executive Summary

Resulting from skewed input data, flawed target variables, or biased training procedures built into an algorithm’s design, AI bias embeds real-life discrimination and human rights violations into Artificial Intelligence, damaging consumers’ trust as well as the future of innovation. To strengthen the EU’s ethical technological development, it is critical to build human-centric AI frameworks aligned with the EU’s humanitarian values. While AI legislation has been developing rapidly, it faces significant points of contention over technological innovation and transparency, as well as the private sector’s reluctance to accept binding regulation. Ultimately, with biased AI often reinforcing existing social problems, it is important for European legislators and companies alike to critically examine and rectify the potential social repercussions of automated decision-making.

Core Concepts

Big data, data mining, machine learning (ML), deep learning, artificial intelligence (AI), bias (implicit, data, AI), black-box metaphor, explainability.

Introduction

‘To become a global leader in innovation in the data economy and its applications’ stands out as the primary target of the European data strategy. As a key point of the EU’s digital future, AI is attracting significant attention due not only to its promising technological contributions, but also to its uncontrolled development. Despite the multidisciplinary benefits of AI, several experts and human rights advocates are concerned about instances of discriminatory and biased behaviour by AI systems against persons of a particular gender, race, socio-economic background or sexual orientation, with allegations targeting local enterprises and leading brands alike over the years. For instance, Google Cloud Vision reportedly labelled images differently depending on skin colour, identifying a hand-held device as an ‘electronic device’ when held by a light-skinned hand but as a ‘gun’ when held by a dark-skinned one.

With similar allegations against Amazon’s recruiting system for discrimination against women, and even against the US healthcare system for favouring white over black patients, AI bias is seen to magnify existing discrimination in society, further disadvantaging underrepresented groups. Thus, the EU needs to follow a human-centric approach focused on protecting human rights, in order to defend social equality and restore citizens’ trust in AI systems.

Stakeholders and Legal Competences

The European Commission (EC) is the executive arm of the EU, responsible for proposing legislation and managing EU policy. For matters related to AI and digital technologies in general, the EC acts through its Directorate-General for Communications Networks, Content and Technology (DG CONNECT), the department responsible for the EU’s Digital Agenda and the Digital Single Market.

As a relatively new and rapidly growing policy area, AI, along with its socio-economic implications, lies at the intersection of various shared EU competence areas, i.e., research and development (R&D), consumer protection, justice and security, and the single market. This means that both the EC and Member States can initiate legislation, with EU law taking precedence. It is especially important for Member States to create their own legislation in areas adjacent to AI bias but not covered by the shared EU competences, such as national health systems.

The European Parliament (EP) is the legislative branch of the EU, responsible for debating the legislative proposals of the EC, particularly through its IMCO, ITRE, JURI and LIBE committees, which regularly discuss AI and its implications, alongside the recently established Special Committee on Artificial Intelligence in a Digital Age (AIDA). With its own-initiative reports, the EP can also formally request that the EC bring forward legislative proposals on topics of shared EU competence.

The European Economic and Social Committee (EESC) is an EU advisory body that unites various economic interest groups of the Internal Market. In its advisory capacity, the EESC offers opinions on the EC’s policy frameworks and puts forward proposals regarding the Digital Single Market and its stakeholders.

The High-Level Expert Group on Artificial Intelligence (AI HLEG) is a team of 52 experts appointed by the EC and tasked with supporting the implementation of the European AI Strategy through ethics guidelines and policy recommendations.

The Council of Europe (CoE) stands out as one of the leading international organisations for human rights. Its Ad Hoc Committee on Artificial Intelligence (CAHAI) is responsible for addressing prominent AI-related concerns and drafting a humane framework regarding the development of AI. 

Non-Governmental Organisations (NGOs) and human rights researchers play a crucial role in monitoring the implementation of AI regulations and policies regarding ethical and social issues, as well as in influencing the legislative process through bodies such as the EU Agency for Fundamental Rights (FRA) and networks such as the Human Rights and Democracy Network (HRDN).

Legal Framework

Key Conflicts


The Human Factor

As captured in the words of EC President Ursula von der Leyen, ‘the algorithm is as smart as the data you feed it,’ meaning that the rapid development of AI does not free it from the implicit biases of its human creators. AI bias mirrors the opinions and prejudices in society, inheriting existing biased practices through poorly selected, unconsciously or intentionally biased training data. With human input being a necessary part of AI research, unbiased algorithms need unbiased data sets that address real-life discrimination. 

Algorithm experts suggest that this issue runs even deeper than simple human bias, with groups facing AI discrimination often being significantly underrepresented in certain key environments, such as women in technological research groups, and largely overrepresented in others, such as people of colour in the American prison system. With the quantity and diversity of data playing a large role in the eventual workings of an AI algorithm, these systems end up being adversely influenced by extraneous factors, such as the subject’s race and socio-economic status, thus reinforcing existing social inequalities. This self-reinforcing effect is built into the way AI and ML work, thus posing a continuous challenge to ethical AI usage.
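To make this mechanism concrete, the sketch below is a minimal, hypothetical illustration (not drawn from the resolution; the groups, threshold, and numbers are invented assumptions): when historical decisions demanded a higher bar from one group, a model fitted to those labels reproduces the disparity even though both groups are equally qualified.

```python
import random

random.seed(0)

def make_example():
    # Protected attribute plus an identically distributed qualification score,
    # so any outcome gap between the groups is injected by the labels alone.
    group = random.choice(["A", "B"])
    score = random.gauss(0.0, 1.0)
    # Historical (biased) labelling: group B needed a far higher score to be hired.
    threshold = 0.0 if group == "A" else 1.0
    return group, score, score > threshold

data = [make_example() for _ in range(10_000)]

# The simplest possible "model": per-group hire rates, which is exactly the
# pattern a naive learner extracts when the protected attribute (or a proxy
# for it) leaks into the training features.
for g in ("A", "B"):
    outcomes = [hired for group, _, hired in data if group == g]
    print(f"group {g}: historical hire rate = {sum(outcomes) / len(outcomes):.2f}")
```

Because the disparity already sits in the labels (roughly 50% versus 16% in this toy run), any system trained to reproduce those labels inherits it, which is precisely the self-reinforcing effect described above.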

Transparency and Accountability vs. Trade Secrecy and AI Innovation

While the human factor is critical in the development of AI, the algorithms themselves are also major deciding factors, with some being more bias-prone than others. Thus, the transparency of AI systems and their training datasets is vital for detecting human rights violations. While advocates of AI transparency insist that publishing the source code and training data of critical AI algorithms need not harm companies and may be essential for identifying high-risk issues, multiple AI developers contend that increased transparency requirements may facilitate replication, thus reducing innovation and investment incentives.

Evidently, corporate secrecy laws, along with the pre-existing black-box intricacy of AI algorithms, render systems unaccountable and obstruct the assessment of bias and the correction of errors, and are thus seen as stymying progress. On the other hand, opponents of explainability argue that it requires a decrease in complexity, sacrificing efficacy and innovation, despite the notable progress made by ethical AI actors. Since AI algorithms rely on highly complex models and data points that evolve over time, even waiving trade secrecy and fully auditing these systems might not yield a complete solution.
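As a small, hypothetical illustration of what explainability buys an auditor (the feature names and weights below are invented, not an actual audited system): in a linear scoring model, every feature’s contribution to a single decision can be read off exactly, which is precisely what the black-box metaphor says complex models withhold.

```python
# Invented linear "recruitment" score; weights and applicant values are
# illustrative assumptions only.
weights = {"experience_years": 0.8, "test_score": 1.2, "postcode_risk": -1.5}
applicant = {"experience_years": 3.0, "test_score": 0.9, "postcode_risk": 1.0}

# For a linear model the decision decomposes exactly into weight * value
# per feature, so an auditor can attribute the outcome to individual inputs.
contributions = {name: weights[name] * applicant[name] for name in weights}

for name, value in sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"{name:17s} contributed {value:+.2f}")
print(f"total score: {sum(contributions.values()):+.2f}")
```

An auditor can see at a glance that ‘postcode_risk’, a plausible proxy for socio-economic status, pulls the score down; research into explainable artificial intelligence (footnote 10) aims to offer comparable decompositions for far more complex models.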

Self-Regulation vs. Government Intervention

At the same time, the internal opaqueness of AI is amplified by a lack of credible external evaluation: consumers and users of AI systems are often unable to tell beforehand when and to what extent an algorithm’s decision is biased, despite efforts at voluntary labelling. Ultimately, the shifting balance between transparency and competitive advantage for businesses remains a prominent challenge in today’s AI technology, raising intricate and interconnected issues.

With competitive market forces in AI primarily driving the search for new applications and markets rather than incentivising non-discrimination, experts are increasingly concerned that self-regulation may not be enough to tackle the ethical challenges posed by the development of AI and to ensure accountability. Thus, various NGOs have called for granting increased regulatory and supervisory powers to government agencies, with the World Economic Forum issuing guidelines for such audits, especially relating to facial recognition AI. Even then, the challenge would shift to creating national or EU-wide AI safety guidelines and certification models that meet nuanced sectoral expertise requirements while still enabling innovation, a task that has historically troubled lawmakers.

Policy Options ahead

The White Paper on Artificial Intelligence by the EC offers policy options for a future EU regulatory framework for relevant AI actors, with a particular focus on high-risk applications. 

Following the EC report on human-centric AI, the EESC proposed an AI trustworthiness certification scheme, to be issued and supervised by an independent EU agency. The CoE also recently recommended a certification mechanism for AI tools used in the justice and judiciary system.

Fourteen Member States have indicated a strong preference for ‘soft law’ solutions and self-regulation for most AI systems, with the exception of technologies considered ‘high-risk,’ such as facial recognition and healthcare.

Finally, the EC’s EU Mutual Learning Programme in AI Gender Equality, alongside the EESC, recommended human resource training and knowledge-sharing to help organisations understand the extent of their AI usage and develop best practices against discrimination.

Food for Thought

While the European Union strives to become a global leader in innovation, it needs a consistently human-centric approach to preserve its own values within the ever-growing AI race. With AI bias damaging consumers’ trust and undermining the fundamental right to non-discrimination, the EU needs to further develop its strategies on the ethical aspects of Artificial Intelligence. In examining the various dimensions of these strategies, you can consider the following Key Questions:

Taking into account the ‘black-box metaphor’ for the intricacy of AI systems, how can the EU accurately evaluate the effectiveness of AI algorithms for minority users and measure an increase or decrease in AI bias?
Considering the complexity of AI algorithms, can the EU create regulatory frameworks for more diverse and representative AI training data sets, and if so, should it?
How can the EU balance transparency frameworks for AI algorithms with freedom for innovation and investment?
In light of the current discriminatory nature of various AI systems, should the EU consider limiting their contribution to important decision-making processes, such as health policy, law enforcement, and the labour market?  
How can the EU promote collaboration across stakeholders on technological development as well as human rights, connecting equality monitoring bodies with the actors designing and utilising AI?

Essential Reading

‘AI rules: what the European Parliament wants’, Press Release (text) by the European Parliament (2020). Link. An exposition of recent EU legislation on AI with a focus on future directions and links to related fields.

‘Doing AI, the EU way: Coded Bias Trailer’, Short film (video) by the EU Agency for Fundamental Rights (2020). Link. A short film on AI bias and the EU’s fight to combat it through AI regulation.

‘Parliament leads the way on first set of EU rules for Artificial Intelligence’, Press Release (text) by the JURI committee of the European Parliament (2020). Link and Full Text. An exposition of the legislative initiative of the European Parliament requesting a new legal framework outlining ethical principles and legal obligations for AI.

‘EU struggles to go from talk to action on artificial intelligence’, Opinion piece (text) by Science|Business (2020). Link. A critical perspective on the EU’s policy on AI discussing the challenge of balancing human rights and innovation, as well as future directions for the EU.

‘How I’m fighting bias in algorithms’, TEDx Talk (video) by Joy Buolamwini, 2018. Link. A talk by an expert in Computer Science and AI discussing her experience with AI bias and the fight against it.

‘EU to unveil proposed regulations for artificial intelligence’, News Report (video) by Al Jazeera News, 2020. Link. A summary of recent EU press releases and strategy drafts on regulating AI and limiting algorithm bias.

Further Reading

Here is a curated Mix collection with articles, research papers, and legal documents about EU legislation and current debates on AI bias.

Here is a YouTube playlist with videos and podcasts offering various perspectives on AI and how it may propagate data biases.