Communication on Artificial Intelligence
The EC outlines EU policy on AI for the first time.
Resulting from skewed input data, flawed target variables, or biased training practices built into an algorithm’s design, AI bias embeds real-life discrimination and human rights violations into Artificial Intelligence, damaging both consumers’ trust and the future of innovation. To strengthen the EU’s ethical technological development, it is critical to develop human-centric AI frameworks aligned with the EU’s humanitarian values. While AI legislation has been developing rapidly, it faces significant points of contention around technological innovation and transparency, as well as the private sector’s reluctance to accept binding regulation. Ultimately, with biased AI often reinforcing existing social problems, it is important for European legislators and companies alike to critically examine and rectify the potential social repercussions of automated decision-making.
‘To become a global leader in innovation in the data economy and its applications’ stands out as the primary target of the European data strategy. As a key point of the EU’s digital future, AI is attracting significant attention due not only to its promising technological contributions, but also to its uncontrolled development. Despite the multidisciplinary benefits of AI, several experts and human rights advocates are concerned about instances of discriminatory and biased behaviour by AI systems against persons of a particular gender, race, socio-economic background or sexual orientation, with allegations targeting local enterprises and leading brands alike over the years. For instance, Google Cloud Vision reportedly discriminated against people of colour by labelling the same handheld object differently depending on skin colour: as an ‘electronic device’ when held by a light-skinned hand, but as a ‘gun’ when held by a dark-skinned one.
With similar allegations against Amazon’s recruiting system for discrimination against women, and even against the US healthcare system for favouring white over black patients, AI bias is seen to magnify existing discrimination in society, further disadvantaging underrepresented groups. Thus, the EU needs to follow a human-centric approach focused on protecting human rights, in order to defend social equality and restore citizens’ trust in AI systems.
The European Commission (EC) is the executive arm of the EU, responsible for proposing legislation and managing EU policy. For matters related to AI and digital technologies in general, the EC acts through its Directorate-General for Communications Networks, Content and Technology (DG CONNECT), the department responsible for the EU’s Digital Agenda.
As a relatively new and rapidly growing policy area, AI, along with its socio-economic implications, lies at the intersection of various shared EU competence areas, i.e., research and development (R&D), consumer protection, justice and security, and the single market. This means that the EC and Member States can both initiate legislation, with EU laws taking precedence. It is especially important for Member States to create their own legislation in areas adjacent to AI bias but not covered by the shared EU competences, such as national health systems.
The European Parliament (EP) is the legislative branch of the EU, responsible for debating the legislative proposals of the EC, particularly through its IMCO, ITRE, JURI and LIBE committees, which regularly discuss AI and its implications, alongside the recently established Special Committee on Artificial Intelligence in a Digital Age (AIDA). With its own-initiative reports, the EP can also formally request that the EC bring forward legislative proposals on topics of shared EU competence.
The European Economic and Social Committee (EESC) is an EU advisory body that unites various economic interest groups of the Internal Market. In its advisory capacity, the EESC offers opinions on the EC’s policy frameworks and puts forward proposals regarding the Digital Single Market and its stakeholders.
The High-Level Expert Group on Artificial Intelligence (AI HLEG) is a team of 52 experts appointed by the EC and tasked with implementing AI-related strategic plans and policy recommendations.
The Council of Europe (CoE) stands out as one of the leading international organisations for human rights. Its Ad Hoc Committee on Artificial Intelligence (CAHAI) is responsible for addressing prominent AI-related concerns and drafting a humane framework regarding the development of AI.
Non-Governmental Organisations (NGOs) and researchers on human rights play a crucial role in monitoring the implementation of AI regulations and policies regarding ethical and social issues, as well as influencing the legislative process through networks such as the EU Agency for Fundamental Rights (FRA) and the Human Rights and Democracy Network (HRDN).
The EC encourages the development and innovation of AI systems.
Soon after, some Member States launched their own AI strategies, with Germany and Finland outlining measures for AI ethics, whereas France prioritised explainability and social acceptance.
A legislative package launched by the EC that calls for placing European values at the heart of trustworthy AI design and implementation, as well as closer monitoring of sustainability and human rights requirements in AI.
One of the two reports, alongside Policy and Investment Recommendations, presented by AI HLEG regarding human rights in the development and implementation of lawful, trustworthy AI systems.
A legislative framework by the European Parliament Research Service that elaborates on ethical rules for the development and usage of AI systems in the EU. It also recommends further EU legislation on AI ethics.
A policy draft by the EC, following the recommendations of AI HLEG. This White Paper outlines the European approach to trustworthy AI by proposing measures that will streamline research, foster collaboration between Member States, and increase AI investment.
A policy framework released by the EC in collaboration with the EESC. Its aim is to make the EU a leader in a data-driven society. In the realm of AI, it focuses on the open-source sharing of algorithms and training data to foster innovation.
Using its Own Initiative Report mechanism, the EP requested EU action to produce a legal framework for ethical AI and a civil liability regime for AI. Together, these legislative initiatives aim to motivate the development of ethical AI and hold public and private stakeholders accountable for AI-related discrimination.
As captured in the words of EC President Ursula von der Leyen, ‘the algorithm is as smart as the data you feed it,’ meaning that the rapid development of AI does not free it from the implicit biases of its human creators. AI bias mirrors the opinions and prejudices in society, inheriting existing biased practices through poorly selected, unconsciously or intentionally biased training data. With human input being a necessary part of AI research, unbiased algorithms need unbiased data sets that address real-life discrimination.
Algorithm experts suggest that this issue runs even deeper than simple human bias, with groups facing AI discrimination often being significantly underrepresented in certain key environments, such as women in technological research groups, and largely overrepresented in others, such as people of colour in the American prison system. With the quantity and diversity of data playing a large role in the eventual workings of an AI algorithm, these systems end up being adversely influenced by extraneous factors, such as the subject’s race and socio-economic status, thus reinforcing existing social inequalities. This self-reinforcing effect is built into the way AI and ML work, thus posing a continuous challenge to ethical AI usage.
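The mechanism described above can be illustrated with a minimal sketch in plain Python, using hypothetical, hand-built hiring records (all names and numbers below are invented for illustration): a naive model that simply memorises historical hire rates per group will faithfully reproduce the bias in its training labels.

```python
from collections import Counter

# Hypothetical historical hiring records: qualification rates are assumed
# equal across groups, but past hire decisions favoured group "A" --
# exactly the kind of skewed training data described above.
history = (
    [("A", "hired")] * 40 + [("A", "rejected")] * 10 +   # 80% hired
    [("B", "hired")] * 20 + [("B", "rejected")] * 30     # 40% hired
)

def train(data):
    """'Train' a naive model: memorise the observed hire rate per group."""
    counts = Counter(data)
    groups = {group for group, _ in data}
    return {
        g: counts[(g, "hired")] / (counts[(g, "hired")] + counts[(g, "rejected")])
        for g in groups
    }

def predict(model, group, threshold=0.5):
    """Recommend hiring whenever the learned group rate clears the bar."""
    return model[group] >= threshold

model = train(history)
print(model)                  # learned rates: A ~0.8, B ~0.4
print(predict(model, "A"))    # True  -- the historical skew, reproduced
print(predict(model, "B"))    # False -- group "B" is penalised wholesale
```

The learned rates (0.8 vs 0.4) fail the ‘four-fifths rule’ often used as a rough disparate-impact screen (0.4 / 0.8 = 0.5, well below 0.8): the model discriminates purely because its training labels did, with no malicious design step anywhere.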
Alongside the human factor being critical in the development of AI, the algorithms themselves are also major deciding factors, with some being more bias-prone than others. Thus, the transparency of AI systems and their training datasets is vital in detecting human rights violations. While advocates of AI transparency insist that publishing the source code and training data of critical AI algorithms need not harm companies and may be essential for identifying high-risk issues, multiple AI developers contend that increased transparency requirements may facilitate replication, thus reducing innovation and investment incentives.
Corporate secrecy laws, combined with the pre-existing intricacy of AI algorithms, render systems unaccountable and obstruct the assessment of bias and the correction of errors, and are therefore seen as stymying progress. On the other hand, opponents of mandatory transparency argue that it would require a decrease in model complexity, sacrificing efficacy and innovation, despite the notable progress made by ethical AI actors. Since AI algorithms rely on highly complex models and data points that evolve over time, even waiving trade secrecy and fully auditing these systems might not yield a complete solution.
At the same time, the internal opaqueness of AI is amplified by a lack of credible external evaluation: consumers and users of AI systems are often unable to tell beforehand when, and to what extent, an algorithm’s decision is biased. Ultimately, the shifting balance between transparency and businesses’ competitive advantage remains a prominent challenge in today’s AI technology, creating highly branched and complex issues.
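External evaluation of the kind discussed above need not require access to source code or training data: an auditor can probe a deployed system purely through its outputs. A minimal sketch, assuming a hypothetical log of (group, decision) pairs collected from such a system, of a demographic-parity check:

```python
# Hypothetical audit log of automated decisions, gathered by observing a
# deployed system from the outside -- no source code access needed.
decisions = (
    [("men", True)] * 70 + [("men", False)] * 30 +
    [("women", True)] * 45 + [("women", False)] * 55
)

def selection_rate(log, group):
    """Share of positive decisions received by one group."""
    outcomes = [decision for g, decision in log if g == group]
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(log, group_a, group_b):
    """Absolute difference in positive-decision rates between two groups."""
    return abs(selection_rate(log, group_a) - selection_rate(log, group_b))

gap = demographic_parity_gap(decisions, "men", "women")
print(f"men={selection_rate(decisions, 'men'):.2f}, "
      f"women={selection_rate(decisions, 'women'):.2f}, gap={gap:.2f}")
# A large gap (here 0.25) would flag the system for closer review.
```

Demographic parity is only one of several competing fairness criteria, and a gap alone does not prove discrimination; but black-box checks like this are the kind of credible external evaluation the paragraph above finds lacking.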
With competitive market forces in the AI sector primarily driving the search for new applications and markets, rather than incentivising non-discrimination, experts are increasingly concerned that self-regulation may not be enough to tackle the ethical challenges posed by the development of AI and ensure accountability. Thus, various NGOs have called for granting increased regulatory and supervisory powers to government agencies, with the World Economic Forum issuing guidelines for such audits, especially relating to facial recognition AI. Even then, the challenge would shift to creating national or EU-wide AI safety guidelines and certification models that meet nuanced sectoral expertise requirements while enabling innovation, a task that has historically troubled lawmakers.
The White Paper on Artificial Intelligence by the EC offers policy options for a future EU regulatory framework for relevant AI actors, with a particular focus on high-risk applications.
Following the EC report on human-centric AI, the EESC proposed an AI trustworthiness certification scheme, to be issued and supervised by an independent EU agency. The CoE also recently recommended a certification mechanism for AI tools used in the justice and judiciary system.
More than 14 Member States have indicated a strong preference for soft-law solutions and self-regulation for most AI systems, with the exception of technologies considered ‘high-risk,’ such as facial recognition and healthcare.
Finally, the EU Mutual Learning Programme in AI Gender Equality of the EC, alongside the EESC, recommended human resource training and knowledge-sharing to understand the extent of AI usage and develop best practices against discrimination.
While the European Union strives to become a global leader in innovation, it needs an equally strong human-centric approach to preserve its own values within the ever-growing AI race. With AI bias damaging consumers’ trust and eroding the fundamental right to non-discrimination, the EU needs to further develop its strategies on the ethical aspects of Artificial Intelligence. In examining the various dimensions of these strategies, you can consider the following Key Questions:
‘AI rules: what the European Parliament wants’, Press Release (text) by the European Parliament (2020). Link. An exposition of recent EU legislation on AI, with a focus on future directions and links to related fields.
‘Doing AI, the EU way: Coded Bias Trailer’, Short film (video) by the EU Agency for Fundamental Rights (2020). Link. A short film on AI bias and the EU’s fight to combat it through AI regulation.
‘Parliament leads the way on first set of EU rules for Artificial Intelligence’, Press Release (text) by the JURI committee of the European Parliament (2020). Link and Full Text. An exposition of the legislative initiative of the European Parliament requesting a new legal framework outlining ethical principles and legal obligations for AI.
‘EU struggles to go from talk to action on artificial intelligence’, Opinion piece (text) by Science|Business (2020). Link. A critical perspective on the EU’s policy on AI discussing the challenge of balancing human rights and innovation, as well as future directions for the EU.
‘How I am fighting bias in algorithms’, TEDx Talk (video) by Joy Buolamwini (2018). Link. A talk by an expert in Computer Science and AI discussing their experience with AI bias and the fight against it.