Stephanie Kelley

How the EU Plans to Set the Gold Standard for AI Regulation, and What It Means for Organizations

Updated: Feb 21, 2020

Yesterday (February 19th, 2020) the European Commission released a White Paper on Artificial Intelligence, setting out its plan to set the global standard for AI regulation. I provide a short recap of the document and discuss what it means for organizations using AI.

 

First things first: the White Paper is not new EU regulation. It is, however, a first step toward future regulation, which by all accounts won't be coming any sooner than 2021. It provides an update on the regulatory direction, among several other AI-related topics, sneaking in under the 100-day deadline of March 9th for the "proposed AI regulation" promised by the Commission's President, Ursula von der Leyen.


The paper makes several broad claims about the EU's objectives for AI, including hopes for upward regulatory convergence, access to key resources including data, and a vision of a level playing field for artificial intelligence across its members. The EU hopes to use AI to capitalize on existing strengths in industrial and professional markets, and to lead the next data wave expected with edge and quantum computing. It plans a two-pronged approach, building both an ecosystem of excellence and an ecosystem of trust for AI in Europe, backed by a significant increase in investment to the tune of €100 million.


The excellence ecosystem rests on several key pillars: member state consultation, growing the research and innovation community, up-skilling, focusing on SMEs, partnering with the private sector, promoting AI adoption by the public sector, securing access to data and computing infrastructure, and working with key international groups like the OECD and the G20.


The trust ecosystem is single-minded: it pushes the development of a regulatory framework for AI, based on the ethical guidelines put forward by the EU High-Level Expert Group on Artificial Intelligence in 2019:

  • Human agency and oversight

  • Technical robustness and safety

  • Privacy and data governance

  • Transparency

  • Diversity, non-discrimination and fairness

  • Societal and environmental well-being

  • Accountability


The White Paper even suggests this list of ethical guidelines may be transformed into a kind of AI ethics "curriculum" for developers, which could be made available for use by various training organizations. Over the last year, these same guidelines have been tested by over 350 organizations, which used the assessment list within the document and then provided feedback to the EU High-Level Expert Group. Finalized ethical guidelines are due back from the group in June 2020, with further investment promised (€2.5 million) to finance a project promoting the finalized EU AI ethical guidelines and ensuring common principles and operational frameworks are adopted across member states and like-minded partners.


In addition to the revised guidelines, the European Commission has clearly stated its intention to build a European AI regulatory framework in order to increase technological adoption and build consumer and business trust in AI. As with the General Data Protection Regulation (GDPR), the Commission has proposed that the forthcoming AI regulation apply to all AI-enabled products and services offered in the EU, regardless of whether the provider is established there, meaning most major global organizations would be subject to the legislation. What is undecided is whether existing legislation can be enforced as is, adapted, or whether entirely new legislation is required. The White Paper does not clarify the timeline for the legislative review, but I expect something will appear in Q1 2021 based on working timelines and comments from Commission members. Until then, existing regulations apply (as they have since the technology was first introduced), including the GDPR, the various non-discrimination directives, and product safety laws.


Note that several researchers in this space have highlighted the potential inadequacies of these existing laws when applied to algorithmic decision-making, so expect to see at least some changes to existing legislation in the coming year or two. The Commission itself notes several unique properties of AI that may drive legislative reform: lack of transparency, the changing functionality of AI, accountability of AI systems, cyber-security risks, and product safety challenges (discussed in detail in the accompanying Report on Safety and Liability Implications of AI).


The claim with the biggest potential impact on organizations is the stance on stronger regulation for "high-risk" applications. An application is "high-risk" when it meets two conditions: it is used in a high-risk sector (i.e. healthcare, transport, energy, and parts of the public sector like migration and border control), and it is used in a manner that could generate significant risk. For example, an appointment-managing algorithm may be used in a "high-risk" hospital setting but not in a manner that could generate risk, whereas a treatment-triaging algorithm likely would, and would therefore be considered a "high-risk" application. The proposed regulations include stronger controls on training data, data governance and explainability, reporting, robustness and accuracy, and human oversight, with additional regulation proposed for remote biometric identification (i.e. identification of people at a distance using biometric identifiers such as iris, facial image, or vascular patterns). The stronger regulation would likely be enforced through inspection or certification (potentially similar to Denmark's prototype Data Ethics Seal). In other words, organizations using "high-risk" applications may be held to higher ethical standards and restricted in their use of inscrutable algorithms; a strong, ethically minded stance from the EU.
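To make the two-part test concrete, here is a minimal sketch in Python. The sector list, function name, and boolean risk flag are my own illustrative assumptions for this post; the White Paper itself defines no such code or canonical sector set.

# A minimal sketch of the White Paper's two-part "high-risk" test.
# The sector set and names below are illustrative assumptions only.
HIGH_RISK_SECTORS = {"healthcare", "transport", "energy", "migration and border control"}

def is_high_risk(sector: str, risky_manner: bool) -> bool:
    """An application is "high-risk" only when BOTH conditions hold:
    it operates in a high-risk sector AND it is used in a manner
    that could generate significant risk."""
    return sector in HIGH_RISK_SECTORS and risky_manner

# Both examples sit in the same high-risk sector (a hospital), but only
# the second is used in a risk-generating manner:
print(is_high_risk("healthcare", risky_manner=False))  # appointment managing -> False
print(is_high_risk("healthcare", risky_manner=True))   # treatment triaging -> True

Note that both conditions must hold: a risky use in a low-risk sector, or a benign use in a high-risk sector, would not (under the White Paper's proposal) trigger the stronger regulation.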


A voluntary labelling scheme has been proposed for applications not deemed "high-risk" (potentially in line with Malta's voluntary certification system for AI), which would allow applications to be labelled "trustworthy" if they adhere to the stronger regulations. While the labelling itself would be voluntary, the same enforcement mechanisms would be used to ensure compliance.


 

Organizational Action Item: Think about your current applications of AI and determine whether they could fall under the "high-risk" definition. Make sure your internal AI ethical policies are in line with the stronger regulation proposed by the EU.

 

An added bonus in the paper: the EU is one of the few groups to put its foot down and provide a definition of artificial intelligence (on pg. 16 of the White Paper, reproduced below), an important first step in regulatory reform. The definition is a bit lengthy for practical use by organizations, but the Commission has likely tried to cover as much legal ground as possible given the rapid evolution of the technology.


"Artificial intelligence (AI) systems are software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal. AI systems can either use symbolic rules or learn a numeric model, and they can also adapt their behaviour by analysing how the environment is affected by their previous actions.”


My personal working definition tends toward the shorter and more practically minded side:


Artificial Intelligence: The ability of a machine to perform tasks commonly associated with intelligent human behaviour, including, but not limited to, learning from and acting on information.



