I'm thinking about the concept of overlap in the governance of AI and automated decision-making (ADM). I've been reading Europe's newly proposed AI regulation (let's call it the Proposed Act), and also delving into the deliberative process of policy-making that led up to it.
What I want to do here, with no particular instrumental goal in mind, is loosely map the concept of overlap onto the new proposed regulation and its interactions with other bodies of law and governance.
One manifestation of overlap in the Proposed Act is the way overlapping regulatory schemes constrain powers granted by regulation. The Proposed Act interacts with the GDPR, the EU Law Enforcement Directive and the EU Charter of Fundamental Rights to constrain the use in law enforcement of real-time biometric ID systems in public spaces. The Proposed Act prohibits this technology, but with carve-outs, allowing surveillance for various kinds of urgent law enforcement or security purposes, such as finding missing persons, preventing terrorist attacks, and locating suspects of serious crimes. The carve-out contemplates a field of permissible use, and the various other regulatory instruments further narrow that field. The different instruments thus act as overlapping constraints, narrowing the field of state power.
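One way to picture this narrowing is as set intersection: each instrument admits some set of uses, and only a use inside every set survives. Here is a toy sketch of that idea; the class, predicate names and tests are all invented for illustration and don't track any real statutory language.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    purpose: str                       # e.g. "missing person", "advertising"
    judicially_authorised: bool        # stand-in for prior-authorisation requirements
    necessary_and_proportionate: bool  # stand-in for proportionality review

def within_act_carve_out(u: UseCase) -> bool:
    # The Proposed Act's carve-out: a field of permissible law-enforcement uses.
    return u.purpose in {"missing person", "terrorism prevention", "serious crime suspect"}

def within_gdpr_and_led(u: UseCase) -> bool:
    # Stand-in for GDPR / Law Enforcement Directive conditions on processing.
    return u.judicially_authorised

def within_charter(u: UseCase) -> bool:
    # Stand-in for Charter of Fundamental Rights proportionality review.
    return u.necessary_and_proportionate

def permissible(u: UseCase) -> bool:
    # The permissible field is the intersection of the overlapping regimes:
    # each instrument can only narrow, never widen, what the others allow.
    return within_act_carve_out(u) and within_gdpr_and_led(u) and within_charter(u)

print(permissible(UseCase("missing person", True, True)))   # True
print(permissible(UseCase("missing person", False, True)))  # False: narrowed by the overlap
```

The point of the sketch is just that the conjunction is monotone: adding another overlapping instrument can only shrink the permissible field.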
Then there are the overlaps that arise from the EU's federal character. The Proposed Act has both direct and indirect effects on member states. If and when it becomes law, member states are expected to implement it, giving effect to its provisions in domestic law. But the Proposed Act also gives overarching supervisory powers to EU bodies to complement national initiatives, particularly regarding enforcement: a European Artificial Intelligence Board will be tasked with coordinating national supervisory authorities.
Participants in the AI supply chain have overlapping responsibilities. The main subjects of regulation are providers of high-risk AI systems, but for many such systems, the obligations that apply to providers also apply to manufacturers of the products in which those systems are embedded. Importers take on the conformity-assessment responsibilities of providers. And distributors are treated as providers where they place high-risk systems on the market under their own name.
The Proposed Act contemplates a set of overlapping measures to manage risks arising from AI, ranging from conformity assessment and risk-management protocols to data-governance obligations and post-market surveillance of the safety impacts of AI systems.
The effect seems to be to reduce the surface area of risk, and the Proposed Act then contemplates the management of 'residual risk'.
The risk-based character of the Proposed Act creates a kind of cascade of control: prohibition of the highest-risk systems, followed by a tier of mandatory risk management for high-risk systems, and then lighter measures like notification and acceptability assessment for 'residual risks'.
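For what it's worth, the cascade can be sketched as a tiered dispatch. This is a minimal sketch under my own assumptions: the tier names and control lists loosely paraphrase the categories above and are not the Act's actual legal tests.

```python
from enum import Enum, auto

class RiskTier(Enum):
    UNACCEPTABLE = auto()  # prohibited outright
    HIGH = auto()          # mandatory risk-management regime
    RESIDUAL = auto()      # lighter-touch measures for what remains

def required_controls(tier: RiskTier) -> list[str]:
    # Prohibition pre-empts management; management is meant to leave only
    # residual risk behind, which then gets the lighter-touch measures.
    if tier is RiskTier.UNACCEPTABLE:
        return ["prohibition"]
    if tier is RiskTier.HIGH:
        return ["conformity assessment", "risk-management protocol",
                "data governance", "post-market surveillance"]
    return ["notification", "acceptability assessment"]
```

The cascade structure is the point: each tier's controls apply instead of, not on top of, the tier above it.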
It is not as though there is no further recourse for the residual risks that remain after all compliance obligations in the Proposed Act have been fulfilled. The information and practice generated by the Act's controls facilitate the application of other bodies of law, and other remedies for victims of harm.
The European Commission's Report on the Safety and Liability Implications of AI (and the connected White Paper) conceives of AI safety regulation as a tool for bridging gaps in existing, already interacting legislation. As the report notes, Product Liability, Product Safety and Civil Liability regimes already operate in concert, along multiple dimensions, to reduce risks of harm (and also to create remedies for victims of harm):
At Union level, product safety and product liability provisions are two complementary mechanisms to pursue the same policy goal of a functioning single market for goods that ensures high levels of safety, i.e. minimise the risk of harm to users and provides for compensation for damages resulting from defective goods.
At national level, non-harmonised civil liability frameworks complement these Union rules by ensuring compensation for damages from various causes (such as products and services) and by addressing different liable persons (such as owners, operators or service providers).
While optimising Union safety rules for AI can help avoiding accidents, they may nevertheless happen. This is when civil liability intervenes. Civil liability rules play a double role in our society: on the one hand, they ensure that victims of a damage caused by others get compensation and, on the other hand, they provide economic incentives for the liable party to avoid causing such damage. Liability rules always have to strike a balance between protecting citizens from harm while enabling businesses to innovate.
Record-keeping and data-governance measures in the Proposed Act not only open the door to the application of Product Liability and Product Safety laws, but also to:
administrative and judicial review of automated or partly automated decisions by agencies of the state;
the development of standards to facilitate implementation (and which may, in the longer run, be incorporated by reference into compliance obligations);
private suits in negligence, nuisance, equity, and contract, with the wide array of penalties and remedies each offers;
market-based governance through, for example, the advocacy of environmental, social and governance (ESG) investment funds, and reporting and rating bodies like MSCI;
even simpler kinds of governance by competition, so that, for example, data portability rights become meaningful as individuals find themselves better informed and readier to vote with their feet;
the development of insurance markets that efficiently allocate the costs of risk management and harm compensation; and
regulatory intervention by agencies empowered under consumer law, discrimination law and other public interest doctrines.
All this overlap might raise the question: why do we need to fall back on other regimes? Can't we just regulate AI end to end? Why leave residual risks unregulated at all?
The answer is that end-to-end regulation is too costly and too cumbersome. More interesting to me is the fact that different regulatory mechanisms and governance tools (like those in the list above) represent a plurality of values and interests. They are therefore open to dealing with risks and opportunities in a way that a one-dimensional safety-based regulation is not.
Product-safety compliance and risk-management governance tools like the Proposed Act have the advantage of creating certainty and predictability when it comes to the highest-risk activities. But the relationship of those doing the risky things is primarily with regulatory authorities, and only secondarily with other participants in the AI supply chain. The mandated risk-management tools, and especially the mandated transparency mechanisms (conformity assessments, reports, etc.), may not be readily accessible or comprehensible to individuals. Affected individuals don't have much voice. There is accountability to the state and to business, but less responsiveness to the immediate concerns of individuals.
Civil liability doctrines respond to different values. They have risk-management and safety-enhancement applications, since they internalise externalities, bringing home the cost of harm to the person who caused it. But they also concern themselves with compensation, which entails both retribution against wrongdoers and consolation for victims. The shortcoming of civil liability is the cost and complexity of identifying wrongdoers, establishing causation, and bringing a contested action.
Product liability regulation simplifies questions of causation, reducing the cost of action, while various insurance tools like mandatory insurance or no-fault insurance take the enforcement burden off individuals. Of course, no-fault insurance skews toward compensation as a value over, say, retribution.
Other ways of reducing the cost of civil liability actions include pro bono and civil-society-led public interest litigation initiatives, and class-action regulations and practices.
Even then, civil liability and related doctrines are in some respects individualistic, and they make assumptions about individuals' capacities to enforce their own legal interests that might not be borne out in real life. Most people don't have the wherewithal to sue, or even to threaten action against, powerful technology providers whose technologies harm them.
This is where competition and consumer law come in. These doctrines are less specifically concerned with safety (although consumer guarantees serve the interest of safety); their point of difference is in seeking systemic impact. They are concerned with asymmetries of power and their effects. At the systemic level, competition law, and its enforcement by competition (or 'antitrust') regulators, seeks to redress asymmetries of market power. Consumer law is more granular, targeting individual consumer interactions by providing recourse against unfair contracts, misleading conduct, and unfair advantages taken by those with asymmetrical knowledge.
Each regulatory framework has gaps, and even taken together they no doubt still leave serious gaps. But there is surely something to be said for ensuring that plural conceptions of justice persist in the governance of AI.