Contents
Class 19: AI & the Markets
The Financial and Product Markets
- §§ 1 & 8 (pages 1-7 & 24-26) of Sara Fish, Yannai A. Gonczarowski & Ran Shorrer, Algorithmic Collusion by Large Language Models (Nov. 27, 2024), arXiv:2404.00806v2 [econ.GN]
- 1-6 (lines 1-123) & 9-14 (lines & 184-296) of Philipp Winder, Christian Hildebrand & Jochen Hartmann, Biased_Echos: Generative AI Models Reinforce Investment Biases and Increase Portfolio Risks of Private Investors (Nov. 8, 2024).
- Pages 1754-1587 (top 2 lines) & 1590-1592 (top six lines) & 1596-1602 & 1607-1612 (top five lines) of Tejas N. Narechania, Machine Learning as Natural Monopoly, 107 Iowa L. Rev. 1543 (2022).
- Parts 2.2-2.5 (pages 8-14), Parts 4-6 (pages 17-30), and the chart on page 32, of Jon Danielsson & Andreas Uthemann, On the use of artificial intelligence in financial regulations and the impact on financial stability (Feb. 2024).
- Sections 3 & 4 (Pages 9-27) of (UK) Financial Stability Board, The Financial Stability Implications of Artificial Intelligence (Nov. 14, 2024).
- Margrethe Vestager, Executive Vice-President and Competition Commissioner, European Commission Sarah Cardell, Chief Executive Officer, U.K. Competition and Markets Authority Jonathan Kanter, Assistant Attorney General, U.S. Department of Justice Lina M. Khan, Chair, U.S. Federal Trade Commission, Joint Statement on Competition in Generative AI Foundation Models and AI Products, Joint Statement on Competition in Generative AI Foundation Models and AI Products (Jul 23, 2024).
The Labor Market
- They Will Displace Us–Or Just Boss Us?
- Rob Thubron, Company cuts costs by replacing 60-strong writing team with AI (June 25, 2024). Note the chart!
- Pages 1-8 Madeline C. Elish, (Dis)Placed Workers: A Study in the Disruptive Potential of Robotics and AI, WeRobot (2018 Working Draft).
- Drew Harwell, Contract lawyers face a growing invasion of surveillance programs that monitor their work, Wash Post (Nov. 11, 2021)
- Wait! Maybe They Are Not Coming to Take (All) Our Jobs?
- Will Knight, Robots Won’t Close the Warehouse Worker Gap Anytime Soon, Wired (Nov. 26, 2021).
- Miho Inada, Humanoid Robot Keeps Getting Fired From His Jobs, Wall St. J. (July 13, 2021).
- Will there be a resistance?
- Robert Wells, Robots, AI Not as Welcomed in Nations Where Income Inequity is High, UCF Today (Aug. 24, 2022).
Optional Readings
General / Regulatory
- CFTC Staff Advisory, Use of Artificial Intelligence in CFTC-Regulated Markets (Dec. 5, 2024).
- Read the rest of Jon Danielsson & Andreas Uthemann, On the use of artificial intelligence in financial regulations and the impact on financial stability (Feb. 2024):
- Artificial intelligence (AI) can undermine financial stability because of malicious use, misaligned AI engines and since financial crises are infrequent and unique, frustrating machine learning. Even if the authorities prefer a conservative approach to AI adoption, it will likely become widely used by stealth, taking over increasingly high-level functions, driven by significant cost efficiencies and its superior performance on specific tasks. We propose six criteria against which to judge the suitability of AI use by the private sector for financial regulation and crisis resolution and identify the primary channels through which AI can destabilise the system.
- Contrast Pages 2-5 & 35-36 of Emilio Calvano, et al., Artificial Intelligence, Algorithmic Pricing and Collusion (Dec. 11, 2019), with Cento Veljanovski, What Do We Now Know About ‘Machine Collusion’, 13 J. European Competition L. & Prac. (2022).
- (*) Daniel Schwarcz, Tom Baker & Kyle D. Logue, Regulating Robo-advisors in an Age of Generative Artificial Intelligence, Washington and Lee Law Review (forthcoming 2025):
- New generative Artificial Intelligence (AI) tools can increasingly engage in personalized, sustained and natural conversations with users. This technology has the capacity to reshape the financial services industry, making customized expert financial advice broadly available to consumers. However, AI’s ability to convincingly mimic human financial advisors also creates significant risks of large-scale financial misconduct. Which of these possibilities becomes reality will depend largely on the legal and regulatory rules governing “robo-advisors” that supply fully automated financial advice to consumers. This Article consequently critically examines this evolving regulatory landscape, arguing that current U.S. rules fail to adequately limit the risk that robo-advisors powered by generative AI will convince large numbers of consumers to purchase costly and inappropriate financial products and services. Drawing on general principles of consumer financial regulation and the EU’s recently enacted AI Act, the Article proposes addressing this deficiency through a dual regulatory approach: a licensing requirement for robo-advisors that use generative AI to help match consumers with financial products or services, and heightened ex post duties of care and loyalty for all robo-advisors. This framework seeks to appropriately balance the transformative potential of generative AI to deliver accessible financial advice with the risk that this emerging technology may significantly amplify the provision of conflicted or inaccurate advice.
- Brandon Vigliarolo, Investment advisors pay the price for selling what looked a lot like AI fairy tales, The Register (Mar. 18, 2024):
- Two investment advisors have reached settlements with the US Securities and Exchange Commission for allegedly exaggerating their use of AI, which in both cases were purported to be cornerstones of their offerings.
Canada-based Delphia and San Francisco-headquartered Global Predictions will cough up $225,000 an $175,000 respectively for telling clients that their products used AI to improve forecasts. The financial watchdog said both were engaging in “AI washing,” a term used to describe the embellishment of machine-learning capabilities.
- Two investment advisors have reached settlements with the US Securities and Exchange Commission for allegedly exaggerating their use of AI, which in both cases were purported to be cornerstones of their offerings.
- Carla R. Reyes, Autonomous Corporate Personhood, 96 Wash. L. Rev. 1453 (2021):
- “Several states have recently changed their business organization law to accommodate autonomous businesses—businesses operated entirely through computer code. A variety of international civil society groups are also actively developing new frameworks and a model law—for enabling decentralized, autonomous businesses to achieve a corporate or corporate-like status that bestows legal personhood. Meanwhile, various jurisdictions, including the European Union, have considered whether and to what extent artificial intelligence (AI) more broadly should be endowed with personhood to respond to AI’s increasing presence in society. Despite the fairly obvious overlap between the two sets of inquiries, the legal and policy discussions between the two only rarely overlap. As a result of this failure to communicate, both areas of personhood theory fail to account for the important role that socio-technical and socio-legal context plays in law and policy development. This Article fills the gap by investigating the limits of artificial rights at the intersection of corporations and artificial intelligence. Specifically, this Article argues that building a comprehensive legal approach to artificial rights—rights enjoyed by artificial people, whether corporate entity, machine, or otherwise—requires approaching the issue through a systems lens to ensure that the legal system adequately considers the varied socio-technical contexts in which artificial people exist.”
- (*) Seth C. Oranburg, Machines and Contractual Intent (Draft. Jan. 2022):
- “Machines are making contracts—law is not ready. This paper describes why machine-made contracts do not fit easily into the common law of contracts or the Uniform Commercial Code for Sales. It discusses three ways to fit machine-made contracts into common law and discusses the challenges with each approach. Then it presents a new UCC Sales provision that uses Web3 concepts”.
- (*) Daniel Kiat Boon Seng & Cheng Han Tan, Artificial Intelligence and Agents (Oct. 2021):
- “With the increasing sophistication of AI and machine learning as implemented in electronic agents, arguments have been made to ascribe to such agents personality rights so that they may be treated as agents in the law. The recent decision by the Australian Federal Court in Thaler to characterize the artificial neural network system DABUS as an inventor represents a possible shift in judicial thinking that electronic agents are not just automatic but also autonomous. In addition, this legal recognition has been urged on the grounds that it is only by constituting the electronic agents as legal agents that their human principals may be bound by the agent’s actions and activities, and that a proper foundation of legal liability may be mounted against the human principal for the agent’s misfeasance. This paper argues otherwise. It contends that no matter how sophisticated current electronic agents may be, they are still examples of Weak AI, exhibit no true autonomy, and cannot be constituted as legal personalities. In addition, their characterization as legal agents is unnecessary ….”
Finance / Price Theory
- (*) Wojtek Buczynski et al., Future Themes in Regulating Artificial Intelligence in Investment Management, 56 Comp. L & Sec. Rev. 106111 (2025):
- We are witnessing the emergence of the “first generation” of AI and AI-adjacent soft and hard laws such as the EU AI Act or South Korea’s Basic Act on AI. In parallel, existing industry regulations, such as GDPR, MIFID II or SM&CR, are being “retrofitted” and reinterpreted from the perspective of AI. In this paper we identify and analyze ten novel, “second generation” themes which are likely to become regulatory considerations in the near future: non-personal data, managerial accountability, robo-advisory, generative AI, privacy enhancing techniques (PETs), profiling, emergent behaviours, smart contracts, ESG and algorithm management. The themes have been identified on the basis of ongoing developments in AI, existing regulations and industry discussions. Prior to making any new regulatory recommendations we explore whether novel issues can be solved by existing regulations. The contribution of this paper is a comprehensive picture of emerging regulatory considerations for AI in investment management, as well as broader financial services, and the ways they might be addressed by regulations – future or existing ones.
- Dirk A. Zetzsche, Douglas W. Arner, Ross P. Buckley & Brian Tang, Artificial Intelligence in Finance: Putting the Human in the Loop, 43 Syndey L. Rev. 43 (2021):
- “We argue that the most effective regulatory approaches to addressing the role of AI in finance bring humans into the loop through personal responsibility regimes, thus eliminating the black box argument as a defence to responsibility and legal liability for AI operations and decision.”
- Hans-Tho Normann & Martin Sternberg, Hybrid Collusion: Algorithmic Pricing in Human-Computer Laboratory Markets (May 2021):
- “We investigate collusive pricing in laboratory markets when human players interact with an algorithm. We compare the degree of (tacit) collusion when exclusively humans interact to the case of one firm in the market delegating its decisions to an algorithm. We further vary whether participants know about the presence of the algorithm. We find that threefirm markets involving an algorithmic player are significantly more collusive than human-only markets. Firms employing an algorithm earn significantly less profit than their rivals. For four-firm markets, we find no significant differences. (Un)certainty about the actual presence of an algorithm does not significantly affect collusion.”
- Daniel W. Slemmer, Artificial Intelligence & Artificial Prices: Safeguarding Securities Markets from Manipulation by Non-Human Actors, 14 Brook. J. Corp. Fin. & Com. L. (2020):
- “Problematically, the current securities laws prohibiting manipulation of securities prices rest liability for violations on a trader’s intent. In order to prepare for A.I. market participants, both courts and regulators need to accept that human concepts of decision-making will be inadequate in regulating A.I. behavior. Industry regulators should … require A.I. users to harness the power of their machines to provide meaningful feedback in order to both detect potential manipulations and create evidentiary records in the event that allegations of A.I. manipulation arise.”
- Gary Gensler and Lily Bailey, Deep Learning and Financial Stability (Working Paper, Nov. 1, 2020):
- This paper maps deep learning’s key characteristics across five possible transmission pathways exploring how, as it moves to a mature stage of broad adoption, it may lead to financial system fragility and economy-wide risks. Existing financial sector regulatory regimes – built in an earlier era of data analytics technology – are likely to fall short in addressing the systemic risks posed by broad adoption of deep learning in finance. The authors close by considering policy tools that might mitigate these systemic risks.
- Pascale Chapdelaine, Algorithmic Personalized Pricing, 17 NYU Journal of Law & Business,1 (2020):
- “This article provides parameters to delineate when algorithmic personalized pricing should be banned as a form of unfair commercial practice. This ban would address the substantive issues that algorithmic personalized pricing raises. Resorting to mandatory disclosure requirements of algorithmic personalized pricing would address some of the concerns at a procedural level only, and for this reason is not the preferred regulatory approach. As such, our judgment on the (un)acceptability of algorithmic personalized pricing as a commercial practice is a litmus test for how we should regulate the indiscriminate extraction and use of consumer personal data in the future.”
- (*) Anton Korinekand & Joseph E. Stiglitz, Artificial Intelligence, Globalization, and Strategies for Economic Development, Inst. for New Econ. Thinking Working Paper No. 146 (Feb. 4, 2021):
- “Progress in artificial intelligence and related forms of automation technologies threatens to reverse the gains that developing countries and emerging markets have experienced from integrating into the world economy over the past half century, aggravating poverty and inequality. The new technologies have the tendency to be labor-saving, resource-saving, and to give rise to winner-takes-all dynamics that advantage developed countries. We analyze the economic forces behind these developments and describe economic policies that would mitigate the adverse effects on developing and emerging economies while leveraging the potential gains from technological advances. We also describe reforms to our global system of economic governance that would share the benefits of AI more widely with developing countries
- Megan Ji, Are Robots Good Fiduciaries? Regulating Robo-Advisors Under The Investment Advisers Act Of 1940, 117 Columb. L. Rev. 1543 (2017):
- “In the past decade, robo-advisors—online platforms providing investment advice driven by algorithms—have emerged as a low-cost alternative to traditional, human investment advisers. This presents a regulatory wrinkle for the Investment Advisers Act, the primary federal statute governing investment advice. Enacted in 1940, the Advisers Act was devised with human behavior in mind. Regulators now must determine how an automated alternative fits into the Act’s framework.
“A popular narrative, driven by investment advice professionals and the popular press, argues that robo-advisors are inherently structurally incapable of exercising enough care to meet Advisers Act standards. This Note draws upon common law principles and interpretations of the Advisers Act to argue against this narrative. It then finds that regulators should instead focus on robo-advisor duty of loyalty issues because algorithms can be programmed to reflect a firm’s existing conflicts of interest. The Note concludes by arguing for a shift in regulatory focus and proposing a two-part heightened disclosure rule that would make robo-advisor conflicts of interest more transparent.”
- “In the past decade, robo-advisors—online platforms providing investment advice driven by algorithms—have emerged as a low-cost alternative to traditional, human investment advisers. This presents a regulatory wrinkle for the Investment Advisers Act, the primary federal statute governing investment advice. Enacted in 1940, the Advisers Act was devised with human behavior in mind. Regulators now must determine how an automated alternative fits into the Act’s framework.
Taxation
- Robert Kovacev, A Taxing Dilemma: Robot Taxes and the Challenges of Effective Taxation of AI, Automation and Robotics in the Fourth Industrial Revolution, 16 Ohio St. Tech. L.J. 182 (2020):
- Technological change promises major dislocations in the economy, including potentially massive displacement of human workers. At the same time, government revenues dependent on the taxation of human employment will diminish at the very time displaced workers will increasingly demand social services. It is undeniable that drastic changes will have to be made, but until recently there has been little appetite among policymakers for addressing the situation.
One potential solution to this dilemma has emerged in the public discourse over the past few years: the “robot tax.” This proposal is driven by the idea that if robots (and AI and automation) are displacing human workers, and thereby reducing tax revenues from labor-based taxes, then the robots themselves should be taxed […] [Author argues it is a bad idea for many reasons, including we can’t define what is a “robot”. Also argues tax would need to be global to be effective and not providing advantages to those encouraging automation.]
- Technological change promises major dislocations in the economy, including potentially massive displacement of human workers. At the same time, government revenues dependent on the taxation of human employment will diminish at the very time displaced workers will increasingly demand social services. It is undeniable that drastic changes will have to be made, but until recently there has been little appetite among policymakers for addressing the situation.
- Benjamin Alarie, AI and the Future of Tax Avoidance, 181 Tax Notes Fed. 1808 (Dec. 4, 2023):
- I predict that the influence of AI in tax avoidance will be deeply transformative for our tax and legal systems, demarcating a shift to algorithms capable of interpreting the intricacies of tax legislation worldwide, spotting and exploiting trends in legislation and adjudication, and recommending tax minimization strategies to taxpayers and legislative patches to lawmakers. […]
AI can exploit gaps between different tax regimes, necessitating comprehensive responses. These systems, rich in data and analytics, will predict legislative changes and socio-economic impacts, shaping tax law application and planning.
The essay calls for immediate action in rethinking tax policy principles in the AI era. It highlights the importance of dialogue among policymakers, tax practitioners, and technology experts to ensure AI’s integration into tax planning is beneficial, maintains legal integrity, and supports fiscal fairness. The decisions made today regarding AI in tax avoidance will dictate whether the future of tax planning becomes more equitable or further widens the divide between taxpayers and authorities.”
- I predict that the influence of AI in tax avoidance will be deeply transformative for our tax and legal systems, demarcating a shift to algorithms capable of interpreting the intricacies of tax legislation worldwide, spotting and exploiting trends in legislation and adjudication, and recommending tax minimization strategies to taxpayers and legislative patches to lawmakers. […]
AI can exploit gaps between different tax regimes, necessitating comprehensive responses. These systems, rich in data and analytics, will predict legislative changes and socio-economic impacts, shaping tax law application and planning.
- (*) Ryan Abbott & Bret Bogenschneider, Should Robots Pay Taxes? Tax Policy in the Age of Automation, 12 Harv. L. & Pol. Rev. 145 (2018):
- “The tax system incentivizes automation even in cases where it is not otherwise efficient. This is because the vast majority of tax revenues are now derived from labor income, so firms avoid taxes by eliminating employees. Also, when a machine replaces a person, the government loses a substantial amount of tax revenue—potentially hundreds of billions of dol- lars a year in the aggregate. All of this is the unintended result of a system designed to tax labor rather than capital. Such a system no longer works once the labor is capital. Robots are not good taxpayers.
“We argue that existing tax policies must be changed. The system should be at least “neutral” as between robot and human workers, and automation should not be allowed to reduce tax revenue. This could be achieved through some combination of disallowing corporate tax deductions for automated workers, creating an “automation tax” which mirrors existing unemployment schemes, granting offsetting tax preferences for human workers, levying a corporate self-employment tax, and increasing the corporate tax rate.”
- “The tax system incentivizes automation even in cases where it is not otherwise efficient. This is because the vast majority of tax revenues are now derived from labor income, so firms avoid taxes by eliminating employees. Also, when a machine replaces a person, the government loses a substantial amount of tax revenue—potentially hundreds of billions of dol- lars a year in the aggregate. All of this is the unintended result of a system designed to tax labor rather than capital. Such a system no longer works once the labor is capital. Robots are not good taxpayers.
- Robert D. Atkinson, The Case Against Taxing Robots, Information Technology and Innovation Foundation (April 8, 2019):
- Robot taxers make three main arguments in support of their position:
1. If we do not tax robots, then government revenues will decline, because few people will be working;
2. If we do not tax robots, then income inequality will grow, because the share of national income going to labor will fall; and
3. Taxing robots would make the economy more efficient, because governments already tax labor, so not taxing robots at the same rate would reduce allocation efficiency.
As this paper will show, all three arguments are wrong. - (FWIW I think that the third issue is a real one.)
- Robot taxers make three main arguments in support of their position:
Competition Law (Anti-trust)
- (*) Satya Marar, Artificial Intelligence and Antitrust Law: A Primer (Mar. 2, 2024):
- Artificial intelligence (AI) embodies rapidly evolving technologies with great potential for improving human life and economic outcomes. However, these technologies pose a challenge for antitrust enforcers and policymakers. Shrewd antitrust policies and enforcement based on a cost-benefit analysis support a thriving pro-innovation economy that facilitates AI development while mitigating its potential harms. Misguided policies or enforcement can stymie innovation, undermine vigorous economic competition, and deter research investment. This primer is a guide for policymakers and legal scholars that begins by explaining key concepts in AI technology, including foundation models, semiconductor chips, cloud computing, data strategies and others. The next section provides an overview of US antitrust laws, the agencies that enforce them, and their powers. Following that is a brief history of US antitrust law and enforcement with a focus on the consumer welfare standard, its basis and benefits, and the flaws underlying recent calls by the Neo-Brandeisian movement to abandon it. Finally, the primer outlines the law and a procompetitive, pro-innovation policy framework for approaching the intersection between AI technologies and evaluating horizontal and vertical mergers, policing anticompetitive monopolization practices, price fixing and algorithmic collusion, and consumer protection issues.
- (*) Robin Feldman & Caroline A. Yuen, AI and Antitrust: “The Algorithm Made Me Do It” (Oct. 24, 2024):
- As the dawn of AI rises rapidly, competition authorities may wish to contemplate the potential for hazy days ahead. Undoubtedly, AI offers exciting possibilities for society-from sparking innovation, to enhancing efficiency, to easing life’s burdens, to leveling the playing field for non-native speakers. Nevertheless, the future of AI may bring more than the opportunity to bask in its glow. As AI becomes a more accurate and skillful tool, it could conceivably lead to more anticompetitive hub-and-spokes arrangements that current competition laws may not be fully equipped to evaluate. Using the pharmaceutical supply chain as an example of an industry with concentrated intermediaries, this article discusses how such structures are tacit collusion, and how AI is likely to exacerbate the issue.
- Samuel Weinstein, Pricing Algorithms—What Role For Regulation?, Competition Policy International Antitrust Chronicle (Feb. 2024):
- The rapid spread of pricing algorithms in e-commerce markets has raised alarms about their potential for anticompetitive abuse. Enforcers and policy-makers have been concerned for some time about the possibility of widespread algorithmic price-fixing and dominant firms’ use of algorithms to damage rivals. These harms are in theory redressable under the antitrust laws. But evidence is mounting that pricing algorithms will raise prices to consumers in ways that do not violate the antitrust laws. Tacit algorithmic collusion and price increases due to competition among pricing algorithms will make many online goods and services more expensive. Consumers currently have no effective way to fight back against these higher prices. Market-based solutions, like consumer-friendly algorithms that steer buyers to the best prices, can help. But considering the scope and scale of the ongoing revolution in pricing technology, protecting consumers is likely to require a regulatory response. Regulations that limit when and how firms set prices could restrict algorithms’ ability to raise prices above the competitive level. While not costless, this approach might be necessary to prevent a significant transfer of wealth from consumers to sellers.
- Cary Coglianese & Alicia Lai, Algorithms and Competition in the Digital Economy, e-Competitions, Special issue Algorithms & Competition, (Oct. 4, 2023):
- The global economy is increasingly a digital economy driven by algorithms. This shift to a digital or algorithmic economy poses some distinct implications for how antitrust and consumer protection law evolves in the future. With this Foreword to a special issue published by Concurrences , we highlight major antitrust-related legal developments occurring around the world in response to the rapidly emerging environment of algorithmic-driven commerce. Without necessarily endorsing nor rejecting any of the various policies or proposals that have occurred in recent years, we organize and describe key antitrust-related legal developments that have arisen in response to the growth of the digital economy. In Part I, we detail some of the major legal changes or proposed changes that have targeted digital technology firms. Although many of these targeted firms deploy services that use algorithmic tools, competition authorities have not yet begun to do as much to regulate algorithmic services themselves as to target the firms that make use of them. And even though the specifics of some of the regulatory actions targeting digital firms can be said to be distinctive in their focus on online and other digital businesses, many of the concerns underlying regulatory actions or proposals have been, to date, similar to those that have long applied to general business activity. In Part II, we highlight an aspect of antitrust that might become truly novel in an increasingly algorithmic economy: the targeting of antitrust law and principles to business actions driven by algorithms themselves. As algorithmic tools come to automate economic transactions and autonomously make business decisions, the object of governmental oversight may well shift from the traditional focus on human managers to machine ones—or perhaps to the human designers of machine-learning “managers.” This is an emerging possibility which to date can be most saliently seen in the context of self-preferencing algorithms. Although antitrust enforcers appear thus far to target types of self-preferencing behaviors that have emanated from human decisions rather than fully independent algorithmic ones, it is not hard to conceive a future in which AI autonomously drives business decisions in problematic, anticompetitive directions or that operate on their own to charge supracompetitive prices. Finally, in Part III, in the face of a future that seems likely to be dominated by algorithmic transformations throughout the economy, antitrust regulators can expect to face a growing need themselves to develop and rely upon artificial intelligence and other algorithmic tools. The transition to an algorithmic economy, in the end, not only raises new sources of concern about competition and consumer protection, but it may also provide government with new opportunities to use digital tools to advance the goals of fair and efficient economic competition.
Labor Markets
- (*) Mauro Cazzaniga et al., International Monetary Fund, Gen-AI: Artificial Intelligence and the Future of Work (Jan. 2024):
- Artificial intelligence (AI) is set to profoundly change the global economy, with some commentators seeing it as akin to a new industrial revolution. Its consequences for economies and societies remain hard to foresee. This is especially evident in the context of labor markets, where AI promises to increase productivity while threatening to replace humans in some jobs and to complement them in others.Almost 40 percent of global employment is exposed to AI, with advanced economies at greater risk but also better poised to exploit AI benefits than emerging market and developing economies. In advanced economies, about 60 percent of jobs are exposed to AI, due to prevalence of cognitive-task-oriented jobs. …AI will affect income and wealth inequality. Unlike previous waves of automation, which had the strongest effect on middle-skilled workers, AI displacement risks extend to higher-wage earners.[…] Owing to capital deepening and a productivity surge, AI adoption is expected to boost total income. [….]Younger workers who are adaptable and familiar with new technologies may also be better able to leverage the new opportunities.[…] To harness AI’s potential fully, priorities depend on countries’ development levels. A novel AI preparedness index shows that advanced and more developed emerging market economies should invest in AI innovation and integration, while advancing adequate regulatory frameworks to optimize benefits from increased AI use. For less prepared emerging market and developing economies, foundational infrastructural development and building a digitally skilled labor force are paramount. For all economies, social safety nets and retraining for AI-susceptible workers are crucial to ensure inclusivity.
- (*) United Nations, Office of the Secretary-General’s Envoy on Technology & International Labor Organization, Mind the AI Divide: Shaping a Global Perspective (2024):
- There is a pronounced “AI divide” emerging, where high income nations disproportionately benefit from AI advancements, while low- and medium-income countries, particularly in Africa, lag behind. Worse, this divide will grow unless concerted action is taken to foster international cooperation in support of developing countries. The absence of such policies will not only widen global inequalities, but risks squandering the potential of AI to serve as a catalyst for widespread social and economic progress. While AI will potentially affect many aspects of our daily lives, its impact is likely to be most acute in the workplace. Wealthier countries are more exposed to the potential automating effects of AI in the world of work, but they are also better positioned to realize the productivity gains it offers. Developing countries, on the other hand, may be temporarily buffered because of a lack of digital infrastructure, but this buffer risks turning into a bottleneck for productivity growth, and more importantly,
- (*) DATA
- Kunal Handa et al., Anthropic, Which Economic Tasks are Performed with AI? Evidence from Millions of Claude Conversations (Feb 16, 2025):
- Despite widespread speculation about artificial intelligence’s impact on the future of work, we lack systematic empirical evidence about how these systems are actually being used for different tasks. Here, we present a novel framework for measuring AI usage patterns across the economy. We leverage a recent privacy-preserving system [Tamkin et al., 2024] to analyze over four million Claude.ai conversations through the lens of tasks and occupations in the U.S. Department of Labor’s O*NET Database. Our analysis reveals that AI usage primarily concentrates in software development and writing tasks, which together account for nearly half of all total usage. However, usage of AI extends more broadly across the economy, with ~ 36% of occupations using AI for at least a quarter of their associated tasks. We also analyze how AI is being used for tasks, finding 57% of usage suggests augmentation of human capabilities (e.g., learning or iterating on an output) while 43% suggests automation (e.g., fulfilling a request with minimal human involvement). While our data and methods face important limitations and only paint a picture of AI usage on a single platform, they provide an automated, granular approach for tracking AI’s evolving role in the economy and identifying leading indicators of future impact as these technologies continue to advance.
- Xiao Ni, et al, Generative AI in Action: Field Experimental Evidence on Worker Performance in E-Commerce Customer Service Operations (Nov 9, 2024):
- In collaboration with Alibaba, this study leverages a large-scale field experiment to quantify the impact of a generative AI (gen AI) assistant on worker performance in an e-commerce after-sales service setting, where human agents provide customer support through digital chat. Agents were randomly assigned to either a control or treatment group, with the latter having access to a gen AI assistant that offers two automated functions as text messages: 1) diagnosis of customer order issues in real time and 2) solution suggestions. Agents exhibited varied gen AI usage behavior, choosing to use, modify, or disregard AI suggested messages. We employ two empirical approaches: 1) an intention-to-treat (ITT) analysis to estimate the average treatment effect of gen AI access, and 2) a local average treatment effect (LATE) analysis to estimate the causal impact of gen AI usage. Results show that the gen AI assistant significantly enhanced both service speed and service quality. Interestingly, gen AI automation did not lead agents to reduce effort; rather, it increased their engagement, evidenced by a higher message volume and agent-to-customer message ratio. Analysis by agents’ pretreatment performance reveals that low performers experienced greater improvements in speed and quality, narrowing the performance gap, while high performers saw a decline in service quality, likely because gen AI suggestions fell below their expertise. These findings underscore the potential of gen AI to improve operational efficiency and service quality while highlighting the need for tailored deployment strategies to support workers with varying skill levels.
- James Wright, Inside Japan’s long experiment in automating elder care, MIT Tech Rev. (Jan 9, 2023)
- “A growing body of evidence is finding that robots tend to end up creating more work for caregivers.”
- Francesco Filippucci, Peter Gai & Matthias Schief, CEPR, Miracle or myth: Assessing the macroeconomic productivity gains from artificial intelligence (Dec. 8, 2024):
- Artificial intelligence has been shown to deliver large performance gains in selected economic activities, but its aggregate impact remains debated. This column discusses a micro-to-macro framework to assess the aggregate productivity gains from AI under different scenarios. AI could contribute between 0.25 and 0.6 percentage points to annual total factor productivity growth in the US over the next decade. However, highly uneven sectoral productivity gains could reduce aggregate growth, and large aggregate gains will require a productive integration of AI in a wide range of economic activities.
- Kunal Handa et al., Anthropic, Which Economic Tasks are Performed with AI? Evidence from Millions of Claude Conversations (Feb 16, 2025):
- Josh Dzieza, How Hard Will the Robots Make Us Work?, The Verge (Feb. 27, 2020).
- Christine Walley, Robots as Symbols and Anxiety over Work Loss (2020).
- (*) Pegah Moradi & Karen Levy, The Future of Work in the Age of AI: Displacement or Risk-Shifting? in Oxford Handbook of Ethics of AI 271 (Markus Dubber, Frank Pasquale, and Sunit Das, eds, 2020):
- This chapter examines the effects of artificial intelligence (AI) on work and workers. As AI-driven technologies are increasingly integrated into workplaces and labor processes, many have expressed worry about the widespread displacement of human workers. The chapter presents a more nuanced view of the common rhetoric that robots will take over people’s jobs. We contend that economic forecasts of massive AI-induced job loss are of limited practical utility, as they tend to focus solely on technical aspects of task execution, while neglecting broader contextual inquiry about the social components of work, organizational structures, and cross-industry effects. The chapter then considers how AI might impact workers through modes other than displacement. We highlight four mechanisms through which firms are beginning to use AI-driven tools to reallocate risks from themselves to workers: algorithmic scheduling, task redefinition, loss and fraud prediction, and incentivization of productivity. We then explore potential policy responses to both displacement and risk-shifting concerns.\
- Xian Hui, Oren Reshef & Luofeng Zhou, The Short-Term Effects of Generative Artificial Intelligence on Employment: Evidence from an Online Labor Market, CESinfo Working paper (August. 2023):
- Generative Artificial Intelligence (AI) holds the potential to either complement knowledge workers by increasing their productivity or substitute them entirely. We examine the short-term effects of the recent release of the large language model (LLM), ChatGPT, on the employment outcomes of freelancers on a large online platform. We find that freelancers in highly affected occupations suffer from the introduction of generative AI, experiencing reductions in both employment and earnings. We find similar effects studying the release of other image-based, generative AI models. Exploring the heterogeneity by freelancers’ employment history, we do not find evidence that high-quality service, measured by their past performance and employment, moderates the adverse effects on employment. In fact, we find suggestive evidence that top freelancers are disproportionately affected by AI. These results suggest that in the short term generative AI reduces overall demand for knowledge workers of all types, and may have the potential to narrow gaps among workers.
- Jay Stanley, ACLU, Amazon Drivers Placed Under Robot Surveillance Microscope (March 23, 2021):
- “AI monitoring will soon move beyond [factory workers], starting with less powerful people across our society — who, like Amazon’s nonmanagerial workforce, are disproportionately people of color and are likely to continue to bear the brunt of that surveillance. And ultimately, in one form or another, such monitoring is likely to affect everyone — and in the process, further tilt power toward those who already have it.”
- Leslie Beslie,My Time on the Assembly Line Working Alongside the Robots That Would Replace Us, The Billfold (May 6, 2014).
- Karen Hao, A new generation of AI-powered robots is taking over warehouses, MIT Tech Rev. (Aug 6, 2021).
- Maybe the problem will be … not enough robots? So argues Wilson da Silva, My Robot Friend_ The Promise of Social Robots, Medium (Apr 5, 2020).
- (*) Tough mathy paper: Daron Acemoglu, Claire LeLarge, Pascual Restrepo, Competing with Robots: Firm-Level Evidence from France (NBER Working Paper No. 26738), February 2020, suggests that workers most hurt by robots are firms that compete with the ones using the robots.
- MIT Work of the Future Task Force, The Work of the Future: Building Better Jobs in an Age of Intelligent Machines (2020).
- News reports (anecdotes?) about robots (many run by an AI) taking jobs:
- Factories (of course), Steven Borowiec, Fearing lawsuits, factories rush to replace humans with robots in South Korea, restofworld.org (June 6, 2022) .
- Hotels: JD Shadel, Robots are disinfecting hotels during the pandemic. It’s the tip of a hospitality revolution, Wash. Post (Jan 27. 2021).
- Restaurants:
- Janet Morissey, Desperate for Workers, Restaurants Turn to Robots, N.Y. Times (Oct. 19, 2021).
- Lauren Saria, This Restaurant Is Run Entirely By Robots, SF Eater (Aug. 17, 2022). (Does this sound attractive?)
- Stores: Mike Oitzman,Restocking robots deployed in 300 Japanese convenience stores, The Robot Report (Aug. 12, 2022) (Note: “Next, Telexistence wants to target the 150,000 convenience stores in the US.”)
Optional related PR video:
- Farms: Farmer George, Tiny Weed-Killing Robots Could Make Pesticides Obsolete, OneZero (July 1, 2020). Alt link
- Hospitals: Andrew Gegory, Robot successfully performs keyhole surgery on pigs without human help, The Guardian (Jan 26, 2022).
Insurance
- (*) International Association of Insurance Supervisors (IAIS), DRAFT Application Paper on the supervision of artificial intelligence (Nov. 2024):
- The adoption of artificial intelligence (AI) systems is accelerating globally. For insurers, these developments offer substantial commercial benefits across the insurance value chain, for example by enhancing policyholder retention through personalised engagement, achieving significant cost reductions via increased efficiency in policy administration and claims management, or applying AI capabilities to improve risk selection and pricing.2. However, with these advancements come notable risks that could detrimentally impact the financial soundness of insurers […] and consumers as well). For example, left unchecked, AI systems can reinforce historic societal biases or discrimination and, for individuals, can increase concerns around data privacy. For insurers, the opaque and complex nature of some AI systems can lead to accountability issues, where it becomes difficult to trace decisions or actions back to human operators, and uncertainty of outcomes (particularly in a changing external environment). Addressing such concerns is paramount to maintaining trust and fairness in the industry.[…] 4. This Application Paper reinforces the importance of the [Insurance Core Principles ICPs,} outlining how existing expectations around governance and conduct remain essential considerations for supervisors and insurers using AI. Furthermore, noting that AI can amplify existing risks, this paper emphasises the importance of continued Board and senior manager education in order to establish robust risk and governance frameworks to ensure good consumer outcomes. Additionally, this paper notes that increasing application of AI can heighten the role of third parties like AI model vendors. Consistent with existing ICPs, this paper reaffirms that insurers remain responsible for understanding and managing these systems and their outcomes
- (*) Anat Lior, Insuring AI: The Role of Insurance in Artificial Intelligence Regulation, 35 Harv. J.L. & Tech. 467 (2022):
- “Insurance has the power to better handle AI-inflicted damages, serving both a preventive and compensatory function. This Article offers a framework for stakeholders and scholars working on AI regulation to take advantage of the current robust insurance system. It will discuss the type of insurance policy that should be purchased and the identity of the policyholder. The utilization of insurance as a regulatory mechanism will alleviate the risks associated with the emerging technology of AI while providing increased security to AI companies and AI users. This will allow different stakeholders to continue to unlock the power of AI and its value to society.”
- Brad Templeton, What Happens To Car Insurance Rates After Self-Driving Cars?, Forbes (Sep 21, 2020).
Other Economic Applications
- (*) Elizabeth Blankespoor, Ed deHaan & Qianqian Li, Generative AI in Financial Reporting (Oct 14, 2024):
- Generative Artificial Intelligence (GAI) such as ChatGPT will likely alter many aspects of the financial reporting process and spawn a deep stream of academic research. We take an early step by examining the extent to which firms have begun using GAI in one important part of the reporting process: writing disclosures. The pre-registration phase of our study evaluates a commercial tool’s ability to detect GAI-modified language in firms’ 10Ks, earnings press releases, and conference call prepared remarks. We find that the tool, GPTZero, is impressively powerful; for example, it reliably identifies GAI in realistic samples when we use GAI to modify as few as 2.5% of sentences in 2.5% of firms’ reports; i.e., when just 0.0625% of text is modified. The postregistration phase examines firms’ actual use of GAI in reports through 2024 and whether GAI usage measurably affects linguistic properties such as disclosure readability — an important factor affecting investor information processing and market outcomes. Our study provides early insights into the use of GAI in financial reporting and motivates future research in this evolving area.
- Modupe James, Auditing AI-Generated Financial Statements (Dec. 20, 2023):
- Unlocking the potential of artificial intelligence (AI) has revolutionized numerous industries, and the world of finance is no exception. As businesses strive to streamline their processes and gain a competitive edge, AI-generated financial statements have emerged as a game-changer. These automated systems can swiftly generate accurate reports, saving auditors valuable time and effort. However, with this incredible innovation comes unique challenges for auditors who must ensure accuracy and compliance in an ever-evolving landscape. In this article, we will explore these challenges and provide essential guidelines for auditors auditing AI-generated financial statements. We will now delve into the fascinating world where cutting-edge technology meets meticulous scrutiny!
AI-generated financial statements are a product of the increasing integration of artificial intelligence in various industries, including finance. These statements are created using algorithms and machine learning techniques to analyze vast amounts of data and generate accurate financial reports.
- Unlocking the potential of artificial intelligence (AI) has revolutionized numerous industries, and the world of finance is no exception. As businesses strive to streamline their processes and gain a competitive edge, AI-generated financial statements have emerged as a game-changer. These automated systems can swiftly generate accurate reports, saving auditors valuable time and effort. However, with this incredible innovation comes unique challenges for auditors who must ensure accuracy and compliance in an ever-evolving landscape. In this article, we will explore these challenges and provide essential guidelines for auditors auditing AI-generated financial statements. We will now delve into the fascinating world where cutting-edge technology meets meticulous scrutiny!
- (*) Przemysław Pałka & Agnieszka Jabłonowska, Consumer Law and Artificial Intelligence in Research Handbook on Law and Artificial Intelligence (Woodrow Barfield & Ugo Pagallo (eds.) (2024):
- The chapter examines developments in artificial intelligence from the point of view of consumer law. First, it offers an overview of various problems consumers might face as a result of a business’s use of AI and explores the ability of existing regulations to combat such threats. Second, it looks at situations where AI is sold as (an element of) a consumer product and points to the relevant legislation. Third, the potential of AI to empower consumers and their organisations is discussed. The goal of the chapter is to provide a broad overview of research directions and literature. It hopes to enable consumer law scholars with a developing interest in AI, as well as AI policy scholars venturing into consumer law, to engage with the most pressing problems at the frontier of research in this area.
- Interesting to compare to articles above and below (and to Weinstein on Pricing Algorithms) …. Michal Gal & Amit Zac, Is Generative AI the Algorithmic Consumer We are Waiting for?, Network L. Rev. (March 2024):
- Most studies on the competitive effects of Generative AI focus on the supply side. Interest in consumers is generally restricted to their roles as users of the technology, as well as indirect trainers of Generative AI models through their prompt engineering. In this short article, we focus instead on the potential effects of Generative AI on competition that arise from an active use of Generative AI on the demand side, by consumers seeking goods and services. In particular, we explore the possibility that Generative AI Large Language Models (LLMs) can act as truncated algorithmic consumers, assisting consumers in deciding which products and services to purchase, thereby potentially reducing consumers’ information costs and increasing competition. We explore how LLMs’ unique characteristics – mainly their conversational use and the provision of an authoritative single answer, as well as spillover trust effects from their other uses – might motivate consumers to use them to search for products and services. We then analyze some of the limitations and competition concerns that might result from the use of Generative AI by consumers. In particular, we show how LLMs’ modus operandi – trained to seek the most plausible next word – lead to outcome homogenization and increase entry barriers for new competitors in product markets. We also explore the potential of manipulation and gaming of LLM models. As elaborated, a combination of an LLM model with a dataset on consumers’ digital profiles might potentially create a strong nudging mechanism, recreating consumer choice architecture to optimize commercial goals and exploiting consumer’s behavioral biases in novel ways not envisioned.
- Horst Eidenmülle, The Advent of the AI Negotiator: Negotiation Dynamics in the Age of Smart Algorithms, 20 J. Bus. Tech. L. (2025):
- Artificial Intelligence (AI) applications are increasingly used in negotiations. In this essay, I investigate the impact of such applications on negotiation dynamics. A key variable in negotiations is information. Smart algorithms will drastically reduce information and transaction costs, improve the efficiency of negotiation processes, and identify optimal value creation options. The expected net welfare benefit for negotiators and societies at large is huge. At the same time, asymmetric information will assist the algorithmic negotiator, allowing them to claim the biggest share of the pie. The greatest beneficiary of this information power play could be BigTech and big businesses more generally. These negotiators will increasingly deploy specialized negotiation algorithms at scale, exploiting information asymmetries and executing value claiming tactics with precision. In contrast, smaller businesses and consumers will likely have to settle for generic tools like the free version of ChatGPT. However, who will ultimately be the big winners in AI-powered negotiations depends crucially on the laws that regulate the market for AI applications.
- Barak Orbach & Eli Orbach,
- The US Is Not Prepared for the AI Electricity Demand Shock (Oct 24, 2024):
- The United States power grid is increasingly strained by the surging electricity demand driven by the AI boom. Efforts to modernize the power infrastructure are unlikely to keep pace with the rising demand in the coming years. We explore why competition in AI markets may create an electricity demand shock, examine the associated social costs, and offer several policy recommendations.
Notes & Questions
- Narechania argued that ML tends to natural monopoly–due to high barriers to entry, both in hardware and human capital, and due to ‘feedback’ from users that resembles network effects. He offered GPT-3 as an example, albeit two years ago (p. 1583), although that’s a generative model.
- Is the argument equally good for standard ML and for generative AI?
- Has more recent history suggested that maybe one or both don’t fit the natural monopoly story?
- Is there any reason to believe that recent history is atypical, and the ML / GenAI supplier market(s) will soon ‘settle down’ something monopolistic or oligopolistic? Any reason to doubt that?
- Assuming that ML and/or GenAI are natural monopolies in general, or in specific industries, is ordinary anti-trust law sufficient to deal with the problem? If not, what new rules do we need?
- This recent video got a lot of attention
- But in fact there is a back story. Key points: the video was a demo of a special ability programmed into the two chatbots programmed to respond in this fashion if they detected they were conversing with another chatbot. Plus, the AIs didn’t invent the language on their own.
- There is, however, some evidence that chatbots built to cooperate or negotiate might develop at least a shorthand in the wild. The video resonates when one considers the risks of AI cooperation in ways that violate or circumvent legal rules.
- This recent video got a lot of attention
- Is AI really a threat to securities and other markets as we know them?
- If so, what is the solution?
- To the extent you envision an institutional regulatory solution would it be better to
- Put the regulatory authority in an agency whose focus was AI (and might have more technical experts on AI)?
- Put the regulatory authority in one or more existing (or new?) agencies dedicated to regulation of financial markets (e.g. SEC, CFTC, Treasury and/or CFPB)?
- Put the regulatory authority in the FTC which (with the Justice Department) regulates monopolies?
- What substantive rules would we need to head off the risks of
- fraud
- market manipulation (e.g. pump and dump)
- collusion
… to securities and other markets?
- Recall Lynn Lopucki’s argument (Class 8) that by using various corporate devices, an AI could own itself and thus become a legal person to the extent, at least, that we allow corporations to be considered legal persons. Does this alter your view of the economic or regulatory issues discussed in this section?
- Historically, new tech kills some old jobs, but creates as least as many new ones.
- What if anything should we do as a society, or as policy-makers, owe to the losers? Especially if job loss is not their “fault” as the industry changes.
- If the ‘something’ we do involves payments to workers or retraining facilities is this
- a general social responsibility
- or something that should fall particularly on the beneficiaries (makers, sellers, users) of the new technology?
- If this involves training,
- what do we do if the losers (e.g truckers) are not easily trained to do the new tasks (e.g. coding) whether due to educational background, temperament, or age?
- Or is, as Walley suggests, the identification between education level and “skills” largely a myth.
- Or is Andrew Yang today’s visionary and the rise of the robots will force us into a Universal Basic Income?
- Most tech revolutions come with lots of people saying “this time is different”.
- What reasons, if any, do we have for the claim that “this time is different” and thus there could be a net loss of jobs due to robots?
- Note that there are a lot of truckers (estimates range from 1 million to 3.5 million combining long-haul, short haul, and tractor-trailers) and also a lot of retail service jobs (c. 10 million if we include first-level floor supervisors) that might be at risk.
- This revolution might also hit professions:
- Insurance agents,
- Doctors,
- And, yes, lawyers.
- Do we think professionals as a class might be more ‘retrainable’ or movable to new jobs given they usually have more education than truckers, warehoused employees, or assembly line workers? Of course it could mean a pay cut, but that is better than unemployment…
- What reasons, if any, do we have for the claim that “this time is different” and thus there could be a net loss of jobs due to robots?
- What does co-working with robots (sometimes, not so often, called-cobots — co-robotics is a more common term) do to the nature/quality of work? Does it “turn people into robots”? [See optional reading – Leslie Beslie, My Time on the Assembly Line Working Alongside the Robots That Would Replace Us, The Billfold (May 6, 2014)]
- Call center employees are increasingly being replaced by robotic menus. If you do get a human, that person is asked to ‘stick to a script’ and often is dis-empowered from being able to escalate problems or fix any unusual ones. This discourages complaining customers and saves money (firms tend to see call center operations as a cost center, especially after sale, not a branding opportunity).
- Do we say, “that’s capitalism” and move on, or say/do something else?
- If “something else” is our primary concern the worker, or something else?
- Originally the hope was that robots might do the most demanding and dangerous jobs, and that has proved somewhat true if we define “demanding” as “finding and lifting heavy stuff” or “disinfecting hospitals and hotel rooms for COVID”. It’s less true, at least at Amazon, when it comes to more complex sorting tasks. Is this something to worry about, or will it fix itself as robots get better/smarter/more dexterous?
- Modern scheduling algorithms often can predict demand for labor very precisely – but not very far in advance (yet). A consequence is that firms demand workers be available for long periods of the week, but at the last minute may or may not ask them to come into work; no work, no pay. But workers can’t take a second job that might fill those missing hours, because they can’t ever promise to be available in the blocks of time they’ve promised to the first employer. These so-called “zero hours” contracts are the subject of intense debate in the UK and Europe and are partly blamed for the “Great Resignation” in the US. Should they be allowed?
- Call center employees are increasingly being replaced by robotic menus. If you do get a human, that person is asked to ‘stick to a script’ and often is dis-empowered from being able to escalate problems or fix any unusual ones. This discourages complaining customers and saves money (firms tend to see call center operations as a cost center, especially after sale, not a branding opportunity).
- Just because surveys show that people living in Nordic countries are happier and live longer than most other places, is that any reason to look to them as a model for how to structure the employment and public benefits relationships?
- Or can we avoid that horrible fate with a AI/robot tax? (See the Kovacev article in the optional reading if this question interests you.)
- Even if we could in theory go Nordic, if job losses due to automation are as massive as the scariest (if perhaps over-alarmist) estimates suggest, how otherwise will we pay for it? Does the AI/robot tax then become not a way of avoiding ‘going Nordic’ but still paying for it?
- Regardless of the overall political goals, if our tax goal is an efficient “Pigouvian tax” i.e. one equal to the externalities caused by the AI/robot, what do we do if the AI/robot is actually good for society, and has a positive externality? (Do we subsidize it?)
- If we have one, does a AI/robot tax have to be global to be effective? Would national suffice? Would a state-level tax suffice in many states?
Class 20: Ethics Issues
- Brian Patrick Green, Artificial Intelligence and Ethics, Markkula Center for Applied Ethics (Nov. 21, 2017).
- Annette Zimmermann and Bendert Zevenbergen, AI Ethics: Seven Traps, Freedom to Tinker Blog (Mar. 25, 2019).
- Religious Approaches
- Ethics & Religious Liberty Commission of the Southern Baptist Convention, Artificial Intelligence: An Evangelical Statement of Principles (April 11, 2019).
- Christian Today, Bishop issues ’10 commandments of Artificial Intelligence’ (Feb. 28, 2018).
- Rome Call for AI Ethics (Feb 28, 2022).
- Professional Approaches
- ACM, Statement on Principles for Responsible Algorithmic Systems (Oct. 26, 2022).
- International Approaches
- United Nations General Assembly, Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development (Adopted on March 21, 2024).
- Lawyers’ Professional Ethics Duties (Review from class 18)
- Florida Bar Ethics Opinion 24-1 (Jan. 19, 2024).
- American Bar Association, Formal Opinion 512 (July 29, 2024)
- Court of International Trade, Order on Artificial Intelligence (June 8, 2023).
- Critique
- Abeba Birhane & Jelle van Dijk,Robot Rights? Let’s Talk about Human Welfare Instead (Jan 14, 2020).
- Daniel Schiff et al, AI Ethics in the Public, Private, and NGO Sectors: A Review of a Global Document Collection, 2 IEEE Trans. Tech. & Soc. 31 (2021).
Optional
In General
- (*) Shumiao Ouyang, Hayong Yun & Xingjian Zheng, How Ethical Should AI Be? How AI Alignment Shapes the Risk Preferences of LLMs (Feb 1, 2025):
- This study examines the risk preferences of Large Language Models (LLMs) and how aligning them with human ethical standards affects their economic decision-making. Testing 50 LLMs across self-reported and simulated investment tasks, we find wide variation in risk attitudes. Notably, models scoring higher on safety metrics tend to exhibit greater risk aversion. Through a direct alignment exercise, we establish that embedding human values—harmlessness, helpfulness, and honesty—causally shifts LLMs toward more cautious decision-making. While moderate alignment improved financial forecasting, excessive alignment led to overcautious decisions that hurt predictive accuracy. This trade-off underscores the need for AI governance that balances ethical safeguards with domain-specific risk-taking, ensuring alignment mechanisms don’t overly hinder AI-driven decision-making in finance and other economic domains.
- (*) Morten Bay, AI Ethics and Policymaking: Rawlsian Approaches to Democratic Participation, Transparency, Accountability, and Prediction (May 31, 2023):
- “The AI ethics field is seeing an increase in explorations of theoretical ethics in addition to applied ethics, and this has spawned a renewed interest in John Rawls’ theory of justice as fairness and how it may apply to AI. But how may these new, Rawlsian contributions inform regulatory policies for AI? This article takes a Rawlsian approach to four key policy criteria in AI regulation: Democratic participation, transparency, accountability, and the epistemological value of prediction. Rawlsian, democratic participation in the light of AI is explored through a critique of Ashrafian’s proposed approach to Rawlsian AI ethics, which is found to contradict other aspects of Rawls’ theories. A turn toward Gabriel’s foundational theoretical work on Rawlsian justice in AI follows, extending his explication of Rawls’ Publicity criterion to an exploration of how the latter can be applied to real-world AI regulation and policy. Finally, a discussion of a key AI feature, prediction, demonstrates how AI-driven, long-term, large-scale predictions of human behavior violate Rawls’ justice as fairness principles. It is argued that applications of this kind are expressions of the type of utilitarianism Rawls vehemently opposes, and therefore cannot be allowed in Rawls-inspired policymaking.”
- Salla Westerstrand, Reconstructing AI Ethics Principles: Rawlsian Ethics of Artificial Intelligence, 30 Sci. & Eng. Ethics 46 (2024):
- The popularisation of Artificial Intelligence (AI) technologies has sparked discussion about their ethical implications. This development has forced governmental organisations, NGOs, and private companies to react and draft ethics guidelines for future development of ethical AI systems. Whereas many ethics guidelines address values familiar to ethicists, they seem to lack in ethical justifications. Furthermore, most tend to neglect the impact of AI on democracy, governance, and public deliberation. Existing research suggest, however, that AI can threaten key elements of western democracies that are ethically relevant. In this paper, Rawls’s theory of justice is applied to draft a set of guidelines for organisations and policy-makers to guide AI development towards a more ethical direction. The goal is to contribute to the broadening of the discussion on AI ethics by exploring the possibility of constructing AI ethics guidelines that are philosophically justified and take a broader perspective of societal justice. The paper discusses how Rawls’s theory of justice as fairness and its key concepts relate to the ongoing developments in AI ethics and gives a proposition of how principles that offer a foundation for operationalising AI ethics in practice could look like if aligned with Rawls’s theory of justice as fairness.
- European Parliamentary Research Service, European framework on ethical aspects of artificial intelligence, robotics and related technologies (Sept. 2020)
- European Parliamentary Research Service, The ethics of artificial intelligence: Issues and initiatives (March 2020).
- Meta, AI Alliance Launches as an International Community of Leading Technology Developers, Researchers, and Adopters Collaborating Together to Advance Open, Safe, Responsible AI December 4, 2023.
- Stanford Encyclopaedia of Philosophy, Ethics of Artificial Intelligence and Robotics (Apr. 30, 2020).
- ISO, Building a responsible AI: How to manage the AI ethics debate (Jan. 2025) :
- Responsible AI is the practice of developing and using AI systems in a way that benefits society while minimizing the risk of negative consequences. It’s about creating AI technologies that not only advance our capabilities, but also address ethical concerns – particularly with regard to bias, transparency and privacy. This includes tackling issues such as the misuse of personal data, biased algorithms, and the potential for AI to perpetuate or exacerbate existing inequalities. The goal is to build trustworthy AI systems that are, all at once, reliable, fair and aligned with human values.
- Thilo Hagendorff, The Ethics of AI Ethics: An Evaluation of Guidelines, 30 Minds and Machines 99 (2020)
- Brent Mittelstadt, Principles Alone Cannot Guarantee Ethical AI, Nature Machine Intelligence (Nov. 5, 2019).
- Rodrigo Ochigame, The Invention of “Ethical AI”: How Big Tech Manipulates Academia to Avoid Regulation, The Intercept (Dec. 20, 2019).
Specific Perspectives
- (*) Chinmayi Sharma, AI’s Hippocratic Oath, __ Wash U. L. Rev __ (Forthcoming):
- Diagnosing diseases, creating artwork, offering companionship, analyzing data, and securing our infrastructure—artificial intelligence (AI) does it all. But it does not always do it well. AI can be wrong, biased, and manipulative. It has convinced people to commit suicide, starve themselves, arrest innocent people, discriminate based on race, radicalize in support of terrorist causes, and spread misinformation. All without betraying how it functions or what went wrong.A burgeoning body of scholarship enumerates AI harms and proposes solutions. This Article diverges from that scholarship to argue that the heart of the problem is not the technology but its creators: AI engineers who either don’t know how to, or are told not to, build better systems. Today, AI engineers act at the behest of self-interested companies pursuing profit, not safe, socially beneficial products. The government lacks the agility and expertise to address bad AI engineering practices on its best day. On its worst day, the government falls prey to industry’s siren song. Litigation doesn’t fare much better; plaintiffs have had little success challenging technology companies in court.This Article proposes another way: professionalizing AI engineering. Require AI engineers to obtain licenses to build commercial AI products, push them to collaborate on scientifically-supported, domain-specific technical standards, and charge them with policing themselves. This Article’s proposal addresses AI harms at their inception, influencing the very engineering decisions that give rise to them in the first place. By wresting control over information and system design away from companies and handing it to AI engineers, professionalization engenders trustworthy AI by design. Beyond recommending the specific policy solution of professionalization, this Article seeks to shift the discourse on AI away from an emphasis on light-touch, ex post solutions that address already-created products to a greater focus on ex ante controls that precede AI development. We’ve used this playbook before in fields requiring a high level of expertise where a duty to the public welfare must trump business motivations. What if, like doctors, AI engineers also vowed to do no harm?
- (*) Pasclae Fung & Hubert Etienne, Confucius, cyberpunk and Mr. Science: comparing AI ethics principles between China and the EU, 3 AI and Ethics 505 (2023):
- We propose a comparative analysis of the AI ethical guidelines endorsed by China (from the Chinese National New Gen-eration Artificial Intelligence Governance Professional Committee) and by the EU (from the European High-level Expert Group on AI). We show that behind an apparent likeness in the concepts mobilized, the two documents largely differ in their normative approaches, which we explain by distinct ambitions resulting from different philosophical traditions, cultural heritages and historical contexts. In highlighting such differences, we show that it is erroneous to believe that a similarity in concepts necessarily translates into a similarity in ethics as even the same words may have different meanings from a country to another—as exemplified by that of “privacy”. It would, therefore, be erroneous to believe that the world would have adopted a common set of ethical principles in only three years. China and the EU, however, share a common scientific method, inherited in the former from the “Chinese Enlightenment”, which could contribute to better collaboration and understanding in the building of technical standards for the implementation of such ethics principles.
- Ben Green, Data Science as Political Action: Grounding Data Science in a Politics of Justice (Jan 14, 2019).
- IEEE, Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems (1st Ed.) (valuable but long).
- Kinfe Yilma, African AI Ethics?—The Role of AI Ethics Initiatives in Africa (Aug 13, 2024):
- A recent concern in Artificial Intelligence (AI) ethics scholarship is the overly western-centric nature of ongoing AI ethics discourse and initiatives. This has recently prompted many commentators to warn the emergence of an epistemic injustice or ‘ethical colonialism’. This article examines the extent to which Ubuntu, and AI strategies in Africa articulate an African perspective of AI, and hence address the epistemic injustice in AI ethics. I argue that neither the normative structure of Ubuntu nor recent AI strategies offer a clear, coherent and practicable framework of ‘African AI ethics’. I further show that the much-touted ‘African’ ethics of Ubuntu is rarely referenced or implied in the other national or continental AI strategy initiatives.
- Dorine Eva van Norren, The ethics of artificial intelligence, UNESCO and the African Ubuntu perspective, 21 J. Info., Communication and Ethics in Society (Dec. 2022):
- This paper aims to demonstrate the relevance of worldviews of the global south to debates of artificial intelligence, enhancing the human rights debate on artificial intelligence (AI) and critically reviewing the paper of UNESCO Commission on the Ethics of Scientific Knowledge and Technology (COMEST) that preceded the drafting of the UNESCO guidelines on AI. Different value systems may lead to different choices in programming and application of AI. Programming languages may acerbate existing biases as a people’s worldview is captured in its language. What are the implications for AI when seen from a collective ontology? Ubuntu (I am a person through other persons) starts from collective morals rather than individual ethics. [….] “Metaphysically, Ubuntu and its conception of social personhood (attained during one’s life) largely rejects transhumanism. When confronted with economic choices, Ubuntu favors sharing above competition and thus an anticapitalist logic of equitable distribution of AI benefits, humaneness and nonexploitation. When confronted with issues of privacy, Ubuntu emphasizes transparency to group members, rather than individual privacy, yet it calls for stronger (group privacy) protection. In democratic terms, it promotes consensus decision-making over representative democracy. Certain applications of AI may be more controversial in Africa than in other parts of the world, like care for the elderly, that deserve the utmost respect and attention, and which builds moral personhood. At the same time, AI may be helpful, as care from the home and community is encouraged from an Ubuntu perspective. The report on AI and ethics of the UNESCO World COMEST formulated principles as input, which are analyzed from the African ontological point of view. COMEST departs from “universal” concepts of individual human rights, sustainability and good governance which are not necessarily fully compatible with relatedness, including future and past generations. Next to rules based approaches, which may hamper diversity, bottom-up approaches are needed with intercultural deep learning algorithms.”
- David Zvi Kalman, 3 reasons why A.I. must be a religious issue and not just a peripheral one, Jello Menorah (Dec. 8, 2022).
Ethical Problems/Hypotheticals
- (*) Gladstone AI, Defense in Depth: An Action Plan to Increase the Safety and Security of Advanced AI (Feb. 26, 2024):
- The recent explosion of progress in advanced artificial intelligence (AI) has brought great opportunities, but it is also creating entirely new categories of weapons of mass destruction-like (WMD-like) and WMD-enabling catastrophic risks. A key driver of 1 these risks is an acute competitive dynamic among the frontier AI labs that are 2 building the world’s most advanced AI systems. All of these labs have openly declared an intent or expectation to achieve human-level and superhuman artificial general intelligence (AGI) — a transformative technology with profound implications for 3 democratic governance and global security — by the end of this decade or earlier.
The risks associated with these developments are globa(*)l in scope, have deeply technical origins, and are evolving quickly. As a result, policymakers face a diminishing opportunity to introduce technically informed safeguards that can balance these considerations and ensure advanced AI is developed and adopted responsibly. These safeguards are essential to address the critical national security gaps that are rapidly emerging as this technology progresses.
Frontier lab executives and staff have publicly acknowledged these dangers. Nonetheless, competitive pressures continue to push them to accelerate their investments in AI capabilities at the expense of safety and security. The prospect of inadequate security at frontier AI labs raises the risk that the world’s most advanced AI systems could be stolen from their U.S. developers, and then that they could at some point lose control of the AI systems they themselves are developing , with potentially devastating consequences to global security.
- The recent explosion of progress in advanced artificial intelligence (AI) has brought great opportunities, but it is also creating entirely new categories of weapons of mass destruction-like (WMD-like) and WMD-enabling catastrophic risks. A key driver of 1 these risks is an acute competitive dynamic among the frontier AI labs that are 2 building the world’s most advanced AI systems. All of these labs have openly declared an intent or expectation to achieve human-level and superhuman artificial general intelligence (AGI) — a transformative technology with profound implications for 3 democratic governance and global security — by the end of this decade or earlier.
- (*) Christian Terwiesch and Lennart Meincke, The AI Ethicist: Fact or Fiction? (Working Paper, Oct, 11, 2023):
- This study investigates the efficacy of an AI-based ethical advisor using the GPT-4 model. Drawing from a pool of ethical dilemmas published in the New York Times column “The Ethicist”,,” we compared the ethical advice given by the human expert and author of the column, Dr. Kame Anthony Appiah, with AI-generated advice. The comparison is done by evaluating the perceived usefulness of the ethical advice across three distinct groups: random subjects recruited from an online platform, Wharton MBA students, and a panel of ethical decision-making experts comprising academics and clergy. Our findings revealed no significant difference in the perceived value of the advice between human generated ethical advice and AI -generated ethical advice. When forced to choose between the two sources of advice, the random subjects recruited online displayed a slight but significant preference for the AI-generated advice, selecting it 60% of the time, while MBA students and the expert panel showed no significant preference.
- (* – as a group) AI Companions
- Kevin Roose, Meet My A.I. Friends, NY Times (May 9, 2014).
- Julian De Freitas et al, Why Most Resist AI Companions (Jan 2025).
- Eileen Guo, An AI chatbot told a user how to kill himself—but the company doesn’t want to “censor” it, MIT Tech. Rev. (Feb. 6, 2025).
- Luciano Floridi, Hypersuasion – On AI’s Persuasive Power and How to Deal With It (Jun 7, 2024):
- The evolution of persuasive technologies (PT) has reached a new frontier with the advent of Artificial Intelligence (AI). This article explores AI’s power of hyper persuasion (hypersuasion)—the intensive and transformative power of AI to influence beliefs and behaviours through personalized, data-driven strategies. By processing vast amounts of data and tailoring content to individual susceptibilities, AI significantly enhances the capabilities of PT. The ethical implications of AI’s hypersuasion are examined, considering its potential to empower persuaders, strengthen messages and goals, and disempower the persuadable. While the misuse of AI’s persuasive power by malicious actors poses significant risks, the article suggests four complementary strategies to mitigate negative consequences: protecting privacy to reinforce autonomy, fostering pluralistic competition among persuaders, ensuring accountability through regulation and alignment with human values, and promoting digital literacy and public engagement. By proactively addressing the challenges of AI’s hypersuasion, its power can be harnessed to support better decisions and behaviours while safeguarding individual autonomy and fostering a sustainable and preferable society.
- (*) Victoria J. Haneman, The Law of Digital Resurrection,__ B.C. L. Rev. __ (forthcoming 2025):
- The digital right to be dead has yet to be recognized as an important legal right. Artificial intelligence, augmented reality, and nanotechnology have progressed to the point that personal data can be used to resurrect the deceased in digital form with appearance, voice, emotion, and memory recreated to allow interaction with a digital app, chat bot, or avatar that may be indistinguishable from that with a living person. Users may now have a completely immersive experience simply by loading the personal data of the deceased into a neural network to create a chatbot that inherits features and idiosyncrasies of the deceased and dynamically learns with increased communication. There is no legal or regulatory landscape against which to estate plan to protect those who would avoid digital resurrection, and few privacy rights for the deceased. This is an intersection of death, technology, and privacy law that has remained relatively ignored until recently. This Article is the first to respect death as an important and distinguishing part of the conversation about regulating digital resurrection. Death has long had a strained relationship with the law, giving rise to dramatically different needs and idiosyncratic legal rules. The law of the dead reflects the careful balance between the power of the state and an individual’s wishes, and it may be the only doctrinal space in which we legally protect remembrance. This Article frames the importance of almost half of a millennium of policy undergirding the law of the deceased, and proposes a paradigm focused upon a right of deletion for the deceased over source material (data), rather than testamentary control over the outcome (digital resurrection), with the suggestion that existing protections are likely sufficient to protect against unauthorized commercial resurrections.
- (*) Emmie Hine et al., Supporting Trustworthy AI Through Machine Unlearning, 30 Science and Engineering Ethics, issue 5, 2024[10.1007/s11948-024-00500-5]:
- Machine unlearning (MU) is often analyzed in terms of how it can facilitate the “right to be forgotten.” In this commentary, we show that MU can support the OECD’s five principles for trustworthy AI, which are influencing AI development and regulation worldwide. This makes it a promising tool to translate AI principles into practice. We also argue that the implementation of MU is not without ethical risks. To address these concerns and amplify the positive impact of MU, we offer policy recommendations across six categories to encourage the research and uptake of this potentially highly influential new technology.
- Mauricio Figueroa, Affection as a Service: Ghostbots and the Changing Nature of Mourning, 52 Comp. L. & Security Rev (Nov. 20, 2024) [10.1016/j.clsr.2024.105943]:
- This article elucidates the rise of ghostbots, artificial conversational agents that emulate the deceased, as marketable commodities. The study explains the role of ghostbots in changing how mourning is experienced. It highlights how ghostbots alter the relationship between the bereaved and the departed, transforming it into one of a customer-object within legal discourse. By critically examining the nexus between commodification and the law, this study underscores how ghostbots signify a different and intriguing form of commodification in the interaction between the living and the deceased, within the dynamics of the Digital Afterlife Industry. By furnishing this scrutiny, the article contributes to comprehending the commodification inherent in ghostbots and concludes by delineating specific foundational or seminal points for subsequent academic discussion to aide a more holistic deliberation on the use, commercialisation, or regulation of these systems, and other affection-as-a-service products.
- (*) Ivan Evtimov, David O’Hair, Earlence Fernandes, Ryan Calo & Tadayoshi Kobno, Is Tricking a Robot Hacking?, 34 Berk. Tech. L.J. 891 (2019):
- The unfolding renaissance in artificial intelligence (Al), coupled with an almost-parallel discovery of considerable vulnerabilities, requires a reexamination of what it means to “hack,” i.e., to compromise a computer system. The stakes are significant. Unless legal and societal frameworks adjust, the consequences of misalignment between law and practice will result in (1) inadequate coverage of crime, (2) missing or skewed security incentives, and (3) the prospect of chilling critical security research. This last consequence is particularly dangerous in light of the important role researchers play in revealing the biases, safety limitations, and opportunities for mischief that the mainstreaming of artificial intelligence may present.This essay introduces the law and policy community, within and beyond academia, to the ways adversarial machine learning (ML) alters the nature of hacking, and with it, the cybersecurity landscape. Using the Computer Fraud and Abuse Act of 1986 (CFAA)-the paradigmatic federal anti-hacking law- as a case study, we hope to demonstrate the burgeoning disconnect between law and technical practice. And we hope to explain the stakes if we fail to address the uncertainty that flows from hacking that now includes tricking.
- Petra Molnar, Technology on the Margins: AI and global migration management from a human rights perspective, 8 Camb. Int’l L.J. 305 (2019).
- (*) Madeline Forster, Refugee protection in the artificial intelligence era:A test case for rights, Chatham House (Sept. 7, 2022):
- Government and private sector interest in artificial intelligence (AI) for border security and for use in asylum and immigration systems is growing. Academics and civil society are calling for greater scrutiny of legal, technological and policy developments in this area. However, compared to other high-risk environments for AI, this sector has received little policy attention.Whether governments can adopt AI and meet human rights obligations in asylum and immigration contexts is in doubt, particularly as states have specific responsibilities towards persons seeking refugee and humanitarian protection at national borders.The risks include potentially significant harm if AI systems lead (or contribute) to asylum seekers being incorrectly returned to their country of origin or an unsafe country where they may suffer persecution or serious human rights abuses – a practice known as ‘refoulement’. The use of AI in asylum contexts also raises questions of fairness and due process.”Artificial intelligence (AI) is being introduced to help decision-making […] about asylum and refugee protection, where automated ways of processing people and predicting risks in contested circumstances hold great appeal.”This field, even more than most, will act as a test case for how AI protects or fails to protect human rights. Wrong or biased decisions about refugee status can have life and death consequences, including the return of refugees to places where they face persecution, contrary to international law. Existing refugee decision-making systems are already complex and are often affected by flaws, including lack of legal remedies – issues that can be exacerbated when overlayed with AI.”This paper examines the primary protections being proposed to make AI more responsive to human rights, including the upcoming EU AI Law. Can innovation and protection of human rights really be combined in asylum systems and other domains that make decisions about the future of vulnerable communities and minorities? This is a question not just for governments but also for private sector providers, which have independent human rights responsibilities when providing AI products in a politically charged and changeable policy field that decides the future of vulnerable communities and minorities.”
[…] “Particular attention must be paid at national and regional level to how AI tools can support human rights-based decision-making in complex and politicized systems without exacerbating existing structural challenges. How we treat asylum seekers and refugees interacting with AI will be a test case for emerging domestic and regional legislation and governance of AI. Global standard-setting exercises for AI – including UN-based technical standards and high-level multinational initiatives – will also influence the direction of travel.”
- Government and private sector interest in artificial intelligence (AI) for border security and for use in asylum and immigration systems is growing. Academics and civil society are calling for greater scrutiny of legal, technological and policy developments in this area. However, compared to other high-risk environments for AI, this sector has received little policy attention.Whether governments can adopt AI and meet human rights obligations in asylum and immigration contexts is in doubt, particularly as states have specific responsibilities towards persons seeking refugee and humanitarian protection at national borders.The risks include potentially significant harm if AI systems lead (or contribute) to asylum seekers being incorrectly returned to their country of origin or an unsafe country where they may suffer persecution or serious human rights abuses – a practice known as ‘refoulement’. The use of AI in asylum contexts also raises questions of fairness and due process.”Artificial intelligence (AI) is being introduced to help decision-making […] about asylum and refugee protection, where automated ways of processing people and predicting risks in contested circumstances hold great appeal.”This field, even more than most, will act as a test case for how AI protects or fails to protect human rights. Wrong or biased decisions about refugee status can have life and death consequences, including the return of refugees to places where they face persecution, contrary to international law. Existing refugee decision-making systems are already complex and are often affected by flaws, including lack of legal remedies – issues that can be exacerbated when overlayed with AI.”This paper examines the primary protections being proposed to make AI more responsive to human rights, including the upcoming EU AI Law. Can innovation and protection of human rights really be combined in asylum systems and other domains that make decisions about the future of vulnerable communities and minorities? This is a question not just for governments but also for private sector providers, which have independent human rights responsibilities when providing AI products in a politically charged and changeable policy field that decides the future of vulnerable communities and minorities.”
- David Leslie et al., AI Sustainability in Practice Part One: Foundations for Sustainable AI Projects (Mar. 18, 2024):
- Sustainable AI projects are continuously responsive to the transformative effects as well as short-, medium-, and long-term impacts on individuals and society that the design, development, and deployment of AI technologies may have. Projects, which centre AI Sustainability, ensure that values-led, collaborative, and anticipatory reflection both guides the assessment of potential social and ethical impacts and steers responsible innovation practices.This workbook is the first part of a pair that provides the concepts and tools needed to put AI Sustainability into practice. It introduces the SUM Values, which help AI project teams to assess the potential societal impacts and ethical permissibility of their projects. It then presents a Stakeholder Engagement Process (SEP), which provides tools to facilitate proportionate engagement of and input from stakeholders with an emphasis on equitable and meaningful participation and positionality awareness.
- Jon Truby, AI Is Already Killing Us, 27 J. Internet L. 7 (March 2024):
- Emissions and environmental damage from AI is risking human existence and requires law and policy intervention towards digital decarbonisationWhile the rapid uptake in AI presents opportunities and solutions in many fields, the unintended consequence is an alarming increase in energy usage resulting from the data centres supporting AI tools and training. Predictions of data centres powering AI surpassing entire countries in energy consumption could not have come at a more critical time, given the planet’s climate emergency. The article provides an overview of the causes and emphasizes that sustainable alternative designs of the technology can help mitigate the energy inefficiencies and resultant climate damage caused by AI. From the perspective of the EU, the article argues for legal and policy interventions in the market to influence choices in favour of sustainable AI over energy-intensive AI, and to drive a shift in the design of data centres towards achieving digital decarbonization. Proposing the categorization of different AI based on their life-cycle climate impact aims to facilitate intervention measures, such as including carbon-intensive AI in the EU AI Act’s High-Risk AI category. Furthermore, the article suggests training AI developers in technology sustainability to promote the climate-neutral design of AI technology.
- Stable Diffusion.
- Stable Diffusion Launch Announcement, stability.ai (PDF) (link to original, with better formatting) (Aug. 10, 2022)
- Stable Diffusion Public Release, stability.ai (PDF) (link to original, with better formatting) (Aug. 22, 2022)
- Kyle Wiggers, Deepfakes for all: Uncensored AI art model prompts ethics questions , TechCrunch (Aug. 24, 2022)
Notes & Questions
- The Green article provides a nice laundry list of things that a person designing or creating an AI ought to think about.
- That said, some are quite tough issues that might require a lot of information, some of it not necessarily easily available to many people involved in a big project, e.g.
- How good is the training data?
- Will this system put people out of work?
- Will it cause moral harms to people?
- What will the effects be “on the human spirit”.
- Are there many other professions where we ask people to think about such issues as part of their jobs?
- If not, is that because those professions don’t involve similar risks?
- Or, perhaps, we should all be asking these questions all the time?
- That said, some are quite tough issues that might require a lot of information, some of it not necessarily easily available to many people involved in a big project, e.g.
- The first two readings could, however, be read as in tension, although not perhaps outright opposition: the Zimmermann & Zevenbergen reading on “Traps” provides a list of things to watch out for while one is thinking ethically about issues such as those identified in the Green article.
- Does making ethics this hard increase the risk more people will just not bother?
- Does failing to make ethics this hard make it not serious and useful?
- Many people look to religion, or religious leaders, for ethical guidance.
- How do the concerns identified by the Southern Baptist Convention compare to Green’s list?
- What are the overlaps? The differences?
- In case it wasn’t clear from the context, the “10 commandments” in the readings were issued by a Church of England [in US terms, Anglican] Bishop.
- How does this list compare to the Southern Baptists’ list?
- Incidentally, I apologize for not finding a wider variety of religious leaders’ thoughts, but it’s surprisingly difficult to find short and accessible articles that deal with the sort of (realistic) AI this course is about. For example, articles on Jewish Law and AI seem to devolve quickly to discussions of Golems, which isn’t really our focus….
- The Vatican-led Rome Call for AI Ethics was signed by a high-ranking Catholic Arch-Bishop, and by Microsoft, IBM, the Director-General of the Food and Agriculture Organization of the United Nations (FAO) [formerly a ranking official of the Chinese government, and the Italian Minister of Innovation.
- Does it have teeth? Where?
- Pope Francis spoke at the Jan. 10, 2023 signing event and criticized the use of artificial intelligence in ways that harm the most vulnerable, specifically those seeking asylum. “It is not acceptable that the decision about someone’s life and future be entrusted to an algorithm,” he said, adding “Every person must be able to enjoy a human and supportive development, without anyone being excluded.”
- Assume you want your work to be consistent with the ACM principles. How if at all would that impact an engineer doing engineering work on Stable Diffusion? Is the answer any different for a lawyer doing legal work for stability.ai (the makers of Stable Diffusion) or other AI companies that might have troubling products?
- How does the recent UN resolution address the ethical issues? Is this a good forum for AI policy-making?
- Biorhane & van Kijk approach the ethical issues humanistically.
- Is this a necessary counterpoint to the religious perspectives or are they, from our point of view, just beating a dead toaster?
- If we get past the attack on the idea of ‘robot rights’ we are left with at least two key concepts:
- Robots/AI are, and can too easily be, used to violate human rights.
- Complex (social and technical) systems tend to have the effect of blurring individual responsibility for the systems’ actions. But people – someone or some group – are still responsible.
- If they’re right about that, does that effect how we should think about projects like the readings above?
- Schiff et al provide a survey of the variety and differences among ethical policies for AI.
- An issue that has many ethicists worried is that the proliferation of ethics policies enables “ethics-washing” in which bad, or grey, actors shop for an ethics policy that bans things they do not do while remaining silent about the (by hypothesis, dubious) things the organization actually does. The actor then trumpets its adherence to the ethics policy, knowing that it actually doesn’t bite where it matters.
- Does the Schiff report inform this concern about proliferating ethics policies? If so, does the report suggest the concern has merit?
- A recent study (optional) found that, “AI ethics work in technology companies is predominantly conducted by individual AI ethics entrepreneurs who struggle to effect change through informal processes.” And, “[e]thics entrepreneurs face three major barriers to their work. First, they struggle to have ethics prioritized in an environment centered around software product launches. Second, ethics are difficult to quantify in a context where company goals are incentivized by metrics. Third, the frequent reorganization of teams makes it difficult to access knowledge and maintain relationships central to their work. Consequently, individuals take on great personal risk when raising ethics issues, especially when they come from marginalized backgrounds.”
- Is regulation needed to right the balance?
- If so, what sort of regulation would be helpful and appropriate?
- Optional related article: Melissa Heikkilä, Responsible AI has a burnout problem: Companies say they want ethical AI. But those working in the field say that ambition comes at their expense, MIT Tech. Rev. (Oct. 28, 2022).
- While most of the readings in this section focus on ethical duties of AI creators, the readings on lawyers’ professional ethics center on the issue of the ethical duties of AI users. Disclosure is obviously a big issue for lawyers and perhaps even more for doctors. Can you think of other general ethical obligations for professionals? For everyone using AI?
Class 21: Governance of AI (General Issues)
- Rishi Bommasani, et al., Considerations for Governing Open Foundation Models, Stanford HAI, (Dec. 2023).
- Neel Guha et al., The AI Regulatory Alignment Problem, Stanford HAI (Nov. 2023). [Note: optional draft of full paper, AI Regulation Has Its Own Alignment Problem: The Technical and Institutional Feasibility of Disclosure, Registration, Licensing, and Auditing, 92 Geo. Wash. L. Rev. 1473 (2024)].
- Section III (pages 583-595) & IV.A (pages 594-600) of Yonathan A. Arbel, Matthew Tokson & Albert Lin, Systemic Regulation of Artificial Intelligence, 56 Ariz. St. L.J. 545 (forthcoming 2024) (Draft Dec. 16, 2023).
- Sylvie Delacroix, Joelle Pineau, and Jessica Montgomery, Democratising the digital revolution: the role of data governance (June 30, 2020) in Reflections on AI for Humanity (Braunschweig & Ghallab eds., 2021).
- Melissa Heikkilä, Our quick guide to the 6 ways we can regulate AI, MIT Tech. Rev. (May 22, 2023).
- Council of Europe, Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law [Vilnius, 5.IX.2024], Council of Europe Treaty Series – No [225] (Opened for signature on Sept. 5, 2024). Optional: See the Explanatory Report.
Optional
Attempts at/Analysis of General Policy
- (*) Paul D. Weitzel, AI Governance through Corporate Theory __ Tenn. L. Rev. (forthcoming 2025):
- AI governance is attempting to address problems that corporate theory has grappled with for millennia, and many (though not all) of these corporate solutions are readily translated into AI solutions. The article shows that corporations and AI systems are similar in purpose, nature and the challenges they face. Next, the article tests this proposition by exploring corporate law’s theoretical framework to identify new solutions in AI governance. The article resituates AI governance solutions back into the corporate theory framework to identify untapped, adjacent solutions. It then applies one of these solutions—multitiered governance structures—to AI systems and shows that it improves control and performance over single-agent AI systems. This shows that corporate theory can be helpful in AI governance, which allows the nascent field of AI governance to adopt corporate theory’s centuries worth of experience.
To show the extent of this value, this article considers the AI alignment problem from a corporate theory perspective. It shows that the alignment problem is actually two problems, and these are the most studied issues in corporate theory. First, end-user alignment is an agency cost problem, which has been central to corporate theory since the 1970s. Second, societal-level alignment reflects the same concerns raised in the corporate purpose debate, the largest debate in corporate theory of the last century. In each case, the article applies the lessons learned in corporate theory to the AI governance discussion. The article then concludes with some areas for further research to deepen the connection between these.
- AI governance is attempting to address problems that corporate theory has grappled with for millennia, and many (though not all) of these corporate solutions are readily translated into AI solutions. The article shows that corporations and AI systems are similar in purpose, nature and the challenges they face. Next, the article tests this proposition by exploring corporate law’s theoretical framework to identify new solutions in AI governance. The article resituates AI governance solutions back into the corporate theory framework to identify untapped, adjacent solutions. It then applies one of these solutions—multitiered governance structures—to AI systems and shows that it improves control and performance over single-agent AI systems. This shows that corporate theory can be helpful in AI governance, which allows the nascent field of AI governance to adopt corporate theory’s centuries worth of experience.
- (*)Nicholas A. Caputo, Alignment as Jurisprudence, _ Yale J. L. & Tech. (forthcoming 2025):
- Jurisprudence, the study of how judges should properly decide cases, and alignment, the science of getting AI models to conform to human values, have the same fundamental structure. These seemingly distant fields share an objective, to predict and shape how decisions by powerful actors, in one field judges and in the other increasingly powerful artificial intelligences, will be made in the unknown future. And they use the same tools of specification, rules and cases, to try to accomplish that goal. Thus, rather than thinking of AI models only as aids to judges or focusing on how AI affects specific doctrinal areas like copyright or free speech, as the bulk of post-ChatGPT legal scholarship has done, it is more fruitful to think of models as actually like judges who are taking on an increasing variety of essential adjudicatory and decisionmaking roles in society. The great debates of jurisprudence, about what the law is and what it should be, can provide insight into alignment, and lessons from what works and what does not in alignment can help make progress in jurisprudence.
This essay puts the two fields directly into conversation, illuminating the fundamental similarities between law and AI and pointing to ways in which each field can improve the other. Drawing on leading accounts of jurisprudence, particularly Dworkin’s principle-oriented interpretivism and Sunstein’s positivist account of law as analogical reasoning, and on cutting-edge alignment approaches, namely Constitutional AI and case-based reasoning, it illustrates the value of a more sophisticated legally-inspired approach to the interplay of rules and cases in finetuning alignment and points to ways that AI can provide a better understanding of how judges make decisions. As AI continues to increase in capacity, and as human judges seem to feel increasingly unconstrained in their exercise of power, the conversation between these two fields will become increasingly essential and may point to a better version of both.
- Jurisprudence, the study of how judges should properly decide cases, and alignment, the science of getting AI models to conform to human values, have the same fundamental structure. These seemingly distant fields share an objective, to predict and shape how decisions by powerful actors, in one field judges and in the other increasingly powerful artificial intelligences, will be made in the unknown future. And they use the same tools of specification, rules and cases, to try to accomplish that goal. Thus, rather than thinking of AI models only as aids to judges or focusing on how AI affects specific doctrinal areas like copyright or free speech, as the bulk of post-ChatGPT legal scholarship has done, it is more fruitful to think of models as actually like judges who are taking on an increasing variety of essential adjudicatory and decisionmaking roles in society. The great debates of jurisprudence, about what the law is and what it should be, can provide insight into alignment, and lessons from what works and what does not in alignment can help make progress in jurisprudence.
- Manish Singh, India reverses AI stance, requires government approval for model launches, TechCrunch (Mar. 3, 2024).
- A later statement by Union Minister of State for Electronics and Technology Rajeev Chandrasekhar “clarified” that this new rule is applicable to “large platforms” and not for start-ups, [Optional: News Story.]
- Matthijs M. Maas, Advanced AI Governance: A Literature Review of Problems, Options, and Proposals (Nov 17, 2023):
- [T]his literature review provides an updated overview and taxonomy of research in advanced AI governance.
After briefly setting out the aims, scope, and limits of this project, it reviews three major lines of work: (I) problem-clarifying research aimed at understanding the challenges advanced AI poses for governance, by mapping the strategic parameters (technical, deployment, governance) around its development, and by deriving indirect guidance from history, models, or theory; (II) option-identifying work aimed at understanding affordances for governing these problems, by mapping potential key actors, their levers of governance over AI, and pathways to influence whether or how these are utilized; (III) prescriptive work aimed at identifying priorities and articulating concrete proposals for advanced AI policy, on the basis of certain views of the problem and governance options. The aim is that, by collecting and organizing the existing literature, this review helps contribute to greater analytical and strategic clarity, enabling more focused and productive research, public debate and policymaking on the critical challenges of advanced AI.
- [T]his literature review provides an updated overview and taxonomy of research in advanced AI governance.
- Ministers of the Global Partnership on Artificial Intelligence (GPAI), 2023 Ministerial Declaration, GPAI(2023)2 (Dec. 13, 2023). Somewhat ironic in light of 3/24 TechCrunch article above?
- Microsoft, Governing AI: A Blueprint for the future (May 24, 2023). Offers an extensive five-point blueprint for the public governance of AI. For a libertarian critique, see Adam Thierer, Microsoft’s New AI Regulatory Framework & the Coming Battle over Computational Control, Medium (May 29, 2023).
- Pin-Yu Chen, Cars Require Regular Inspection, Why Should AI Models Be any Different?, Technology Networks (Mar. 14, 2022). “[A]re we paying enough efforts, as seriously as to our cars, to inspect and certify the trustworthiness of …AI-based systems and algorithms? Moreover, as an end user and a consumer, do we really know how and why AI technology is making decisions, and how robust AI technology is to adversarial attacks?”
- Ben Green, The Flaws of Policies Requiring Human Oversight of Government Algorithms (September 10, 2021):
- “I survey 40 policies that prescribe human oversight of government algorithms and find that they suffer from two significant flaws. First, evidence suggests that people are unable to perform the desired oversight functions. Second, as a result of the first flaw, human oversight policies legitimize government uses of faulty and controversial algorithms without addressing the fundamental issues with these tools. Thus, rather than protect against the potential harms of algorithmic decision-making in government, human oversight policies provide a false sense of security…”
- Nicolas Petit, and Jerome, De Cooman, Models of Law and Regulation for AI, Robert Schuman Center for Advanced Studies (2020):
- “The discussion focuses on four models: the black letter model, the emergent model, the ethical model, and the risk regulation model. All four models currently inform, individually or jointly, integrally or partially, consciously or unconsciously, law and regulatory reform towards AI. We describe each model’s strengths and weaknesses, discuss whether technological evolution deserves to be accompanied by existing or new laws, and propose a fifth model based on externalities with a moral twist.”
- Law Commission of Ontario, Regulating AI: Critical Issues and Choices (April, 2021). Very thorough paper calling existing Canadian law inadequate, and offering extensive suggestions for reform.
- Jennifer Chandler, The Autonomy of Technology: Do Courts Control Technology or Do They Just Legitimize Its Social Acceptance?, 27 Bull. of Sci. Tech. & Soc. 339 (2007). Argues that often principles “support and legitimize novel technologies.”
- Aj Grotto & James Dempsey, Vulnerability Disclosure and Management for AI/ML Systems: A Working Paper with Policy Recommendations (Nov. 15. 2021):
- “Artificial intelligence systems, especially those dependent on machine learning (ML), can be vulnerable to intentional attacks that involve evasion, data poisoning, model replication, and exploitation of traditional software flaws to deceive, manipulate, compromise, and render them ineffective. Yet too many organizations adopting AI/ML systems are oblivious to their vulnerabilities. Applying the cybersecurity policies of vulnerability disclosure and management to AI/ML can heighten appreciation of the technologies’ vulnerabilities in real-world contexts and inform strategies to manage cybersecurity risk associated with AI/ML systems. Federal policies and programs to improve cybersecurity should expressly address th e unique vulnerabilities of AI-based systems, and policies and structures under development for AI governance should expressly include a cybersecurity component.”
- (Alessandro Mantelero, Regulating AI in Beyond Data: Human Rights, Ethical and Social Impact Assessment in AI 139 (2022). This is an exposition of a “principles-based approach” that is contrasted with the EU’s risk-based approach.
- Ian Ayres & Jack M. Balkin, The Law of AI is the Law of Risky Agents Without Intentions, U. Chi. L. Rev. Online (Nov. 27, 2024):
- A recurrent problem in adapting law to artificial intelligence (AI) programs is how the law should regulate the use of entities that lack intentions. Many areas of the law, including freedom of speech, copyright, and criminal law, make liability turn on whether the actor who causes harm (or creates a risk of harm) has a certain intention or mens rea. But AI agents—at least the ones we currently have—do not have intentions in the way that humans do. If liability turns on intention, that might immunize the use of AI programs from liability.
We think that the best solution is to employ objective standards that are familiar in many different parts of the law. These legal standards either ascribe intention to actors or hold them to objective standards of conduct.
Of course, the AI programs themselves are not the responsible actors; instead, they are technologies used by human beings that have effects on other human beings. Therefore, the real question of legal obligation is who should be held responsible for the use of AI and under what conditions.
- A recurrent problem in adapting law to artificial intelligence (AI) programs is how the law should regulate the use of entities that lack intentions. Many areas of the law, including freedom of speech, copyright, and criminal law, make liability turn on whether the actor who causes harm (or creates a risk of harm) has a certain intention or mens rea. But AI agents—at least the ones we currently have—do not have intentions in the way that humans do. If liability turns on intention, that might immunize the use of AI programs from liability.
- (*) Jess Whittlestone, Kai Arulkumaran & Matthew Crosby, Societal Implications of Deep Reinforcement Learning, 70 J. Artificial Intelligence Research 1003 (2021):
- Deep Reinforcement Learning (DRL) is an avenue of research in Artificial Intelligence (AI) that has received increasing attention within the research community in recent years, and is beginning to show potential for real-world application. DRL is one of the most promising routes towards developing more autonomous AI systems that interact with and take actions in complex real-world environments, and can more flexibly solve a range of problems for which we may not be able to precisely specify a correct ‘answer’. This could have substantial implications for people’s lives: for example by speeding up automation in various sectors, changing the nature and potential harms of online influence, or introducing new safety risks in physical infrastructure. In this paper, we review recent progress in DRL, discuss how this may introduce novel and pressing issues for society, ethics, and governance, and highlight important avenues for future research to better understand DRL’s societal implications.
Specific Policy Goals
- (*) Philipp Hacker, Sustainable AI Regulation (Draft March 6, 2024):
- This paper addresses a critical gap in the current AI regulatory discourse by focusing on the environmental sustainability of AI and technology, a topic often overlooked both in environmental law and in technology regulation, such as the GDPR or the EU AI Act. Recognizing AI’s significant impact on climate change and its substantial water consumption, especially in large generative models like ChatGPT, GPT-4, or Gemini, the paper aims to integrate sustainability considerations into technology regulation, in three steps. First, while current EU environmental law does not directly address these issues, there is potential to reinterpret existing legislation, such as the GDPR, to support sustainability goals. Counterintuitively, the paper argues that this also implies the need to balance individual rights, such as the right to erasure, with collective environmental interests.Second, cased on an analysis of current law, and the proposed EU AI Act, the article suggests a suite of policy measures to align AI and technology regulation with environmental sustainability. They extend beyond mere transparency mechanisms, such as disclosing GHG footprints, to include a mix of strategies like co-regulation, sustainability by design, restrictions on training data, and consumption caps, potentially integrating AI and technology more broadly into the EU Emissions Trading Regime. Third, this regulatory toolkit could serve as a blueprint for other technologies with high environmental impacts, such as blockchain and Metaverse applications. The aim is to establish a comprehensive framework that addresses the dual fundamental societal transformations of digitisation and climate change mitigation.
- (*) Joana Varon et al., Coding Rights, AI Commons: nourishing alternatives to Big Tech monoculture (2024):
- How can we escape the current context of AI development where certain power forces are pushing for models that, ultimately, automate inequalities and threaten socio-enviromental diversities? What if we could redefine AI? What if we could shift its production from a capitalist model to a more disruptive, inclusive, and decentralized one? Can we imagine and foster an AI Commons ecosystem that challenges the current dominant neoliberal logic of an AI arms race? An ecosystem encompassing researchers, developers, and activists who are thinking about AI from decolonial, transfeminist, antiracist, indigenous, decentralized, post-capitalist and/or socio-environmental justice perspectives?
This research is a field scan in which we aimed to understand the (possibly) emerging “AI Commons” ecosystem. Although AI Commons is an umbrella term we use for post-capitalist alternatives to AI development, we found multiple, sometimes overlapping, sometimes competing, communities of practice and prominent individuals that are focused on critiquing, safeguarding, improving, imagining, and/or developing alternatives to the current ‘default settings’ of AI as a tool to advance the matrix of domination (capitalism, white supremacy, patriarchy, and settler colonialism).
… {W]e found powerful communities of practice, groups, and organizations producing nuanced criticism of the Big Tech-driven AI development ecosystem and, most importantly, imagining, developing, and, at times, deploying an alternative AI technology that’s informed and guided by the principles of decoloniality, feminism, antiracist, and post-capitalist AI systems. However, there’s a chasm between imagining, criticizing, and developing alternative AI Systems. We see this as a window of opportunity. In a context where AI systems are developed through a pipeline of extraction of bodies, land, and data, we collectively map possible allies to envision alternatives. Therefore, this study shed light on a group of actors whose activities could be further connected and supported towards co-designing an alternative pipeline for AI development. It provides recommendations to envision what possible AI technologies developed prioritizing the ethos of “buen vivir,” care of humans, of all living beings, and the environment towards enhancing collective good could potentially look like.
- How can we escape the current context of AI development where certain power forces are pushing for models that, ultimately, automate inequalities and threaten socio-enviromental diversities? What if we could redefine AI? What if we could shift its production from a capitalist model to a more disruptive, inclusive, and decentralized one? Can we imagine and foster an AI Commons ecosystem that challenges the current dominant neoliberal logic of an AI arms race? An ecosystem encompassing researchers, developers, and activists who are thinking about AI from decolonial, transfeminist, antiracist, indigenous, decentralized, post-capitalist and/or socio-environmental justice perspectives?
- U.S. Dept. of State, Bureau of cyberspace and Digital Policy, Risk Management Profile for Artificial Intelligence and Human Rights [Official archived HTML] [Saved local PDF version in case it disappears] (July 25, 2025):
- [Note: Biden Admin policy] The U.S. Department of State is releasing a “Risk Management Profile for Artificial Intelligence and Human Rights” (the “Profile”) as a practical guide for organizations—including governments, the private sector, and civil society—to design, develop, deploy, use, and govern AI in a manner consistent with respect for international human rights.[1] When used in a rights-respecting manner, AI can propel technological advances that benefit societies and individuals, including by facilitating enjoyment of human rights. However, AI can be applied in ways that infringe on human rights unintentionally, such as through biased or inaccurate outputs from AI models. AI can also be intentionally misused to infringe on human rights, such as for mass surveillance and censorship. International human rights are uniquely valuable to AI risk management because they provide an internationally recognized, universally applicable normative basis for assessing the impacts of technology. However, human rights are not always familiar to those involved in AI design, development, deployment, and use, and there is a gap in translating human rights concepts for technologists.The Profile aims to bridge the gap between human rights and risk management approaches, demonstrating how actions related to assessing, addressing, and mitigating human rights risks fit naturally into other risk management practices.
- Aubra Anthony, Lakshmee Sharma, and Elina Noor, Carnegie Endowment for International Peace, Advancing a More Global Agenda for Trustworthy Artificial Intelligence (April 30, 2024):
- International AI governance efforts often prescribe what are deemed “universal” principles for AI to adhere to, such as being “trustworthy” or “human-centered.” However, these notions encode contexts and assumptions that originate in the more well-resourced Global North. This affects how AI models are trained and presupposes who AI systems are meant to benefit, assuming a Global North template will prove universal. Unsurprisingly, when the perspectives and priorities of those beyond the Global North fail to feature in how AI systems and AI governance are constructed, trust in AI falters. In the 2021 World Risk Poll by the Lloyd’s Register Foundation, distrust in AI was highest for those in lower-income countries. As AI developers and policymakers seek to establish more uniformly beneficial outcomes, AI governance must adapt to better account for the range of harms AI incurs globally.
[…] We offer three common yet insufficiently foregrounded themes for this purpose: 1, Consumers versus producers […] 2. Technical versus sociotechnical framings [,,,] 3. Principle versus practice: Commonly promoted AI principles, like openness and explainability, begin with assumptions around levels of access or agency unique to the Global North. These assumptions often break down beyond their envisioned contexts, inhibiting and sometimes even undermining the principle’s translation into practice.
- International AI governance efforts often prescribe what are deemed “universal” principles for AI to adhere to, such as being “trustworthy” or “human-centered.” However, these notions encode contexts and assumptions that originate in the more well-resourced Global North. This affects how AI models are trained and presupposes who AI systems are meant to benefit, assuming a Global North template will prove universal. Unsurprisingly, when the perspectives and priorities of those beyond the Global North fail to feature in how AI systems and AI governance are constructed, trust in AI falters. In the 2021 World Risk Poll by the Lloyd’s Register Foundation, distrust in AI was highest for those in lower-income countries. As AI developers and policymakers seek to establish more uniformly beneficial outcomes, AI governance must adapt to better account for the range of harms AI incurs globally.
- Richard D. Taylor, Saving Global Human Rights: A “Global South AI” Strategy (Feb. 5, 2004).
- Gelan Ayana et al., Decolonizing global AI governance: Assessment of the state of decolonized AI governance in Sub-Saharan Africa (Dec. 4, 2023):
- This research evaluates Sub-Saharan African progress in AI governance decolonization, focusing on indicators like AI governance institutions, national strategies, sovereignty prioritization, data protection regulations, and adherence to local data usage requirements. Results show limited progress, with only Rwanda notably responsive to decolonization. 80% of countries are “decolonization-aware,” and one is “decolonization-blind.” The paper provides a detailed analysis of each nation, offering recommendations for fostering decolonization, including stakeholder involvement, addressing inequalities, promoting ethical AI, supporting local innovation, building regional partnerships, capacity building, public awareness, and inclusive governance. The contribution of this work lies in elucidating the challenges and opportunities associated with decolonization in SSA countries, thereby enriching the ongoing discourse on global AI governance.
- Tiffany C. Li, Ending the AI Race: Regulatory Collaboration as Critical Counter-Narrative, 69 Vill. L. Rev. 981 (2025):
- The future of artificial intelligence is not a zero-sum game—or, at least, it does not have to be. The AI Race narrative emphasizes that states must engage in zero-sum thinking to control the future by out-competing other states in developing and harnessing AI. However, not only is the AI Race narrative inaccurate, but so is what may be a burgeoning counter-narrative: the AI Ethics Race. The narrative of the AI Ethics Race utilizes the same zero-sum logic of the AI Race, but instead of one state besting all others through technological development, the state who is able to lead in developing and harnessing AI ethics regulation becomes the winner.
Both the AI Race and the AI Ethics Race are inaccurate and dangerous narratives. In reality, the future of AI will not be determined by any one state on its own; rather, states must work together and collaborate on AI ethics development. Thus, in this Article, I offer regulatory collaboration as a critique of current models of international cooperation as well as competition. Through the case study of AI, this Article offers the novel framing of regulatory collaboration as an alternative model for AI ethics governance and international law.
- The future of artificial intelligence is not a zero-sum game—or, at least, it does not have to be. The AI Race narrative emphasizes that states must engage in zero-sum thinking to control the future by out-competing other states in developing and harnessing AI. However, not only is the AI Race narrative inaccurate, but so is what may be a burgeoning counter-narrative: the AI Ethics Race. The narrative of the AI Ethics Race utilizes the same zero-sum logic of the AI Race, but instead of one state besting all others through technological development, the state who is able to lead in developing and harnessing AI ethics regulation becomes the winner.
- (*) Sylvia Lu, Regulating Algorithmic Harms, __ Fl. L. Rev. __ (forthcoming 2025):
- This Article constructs a legal typology to categorize [algorithmic] harms. It argues that there are four primary types of algorithmic harms: eroding privacy, undermining autonomy, diminishing equality, and impairing safety. Additionally, it identifies two aggravating factors—accountability paucity and algorithmic opacity—that cause these seemingly minor harms to escalate into significant problems by obstructing harm detection and correction. This Article then conducts case studies of relevant legal frameworks in the United States, the European Union, and Japan to assess the effectiveness of existing responses to algorithmic harms. The case studies reveal that these regulatory examples are insufficient; they either overlook certain types of harms or fail to consider their cumulative effects, thereby allowing problematic AI practices to circumvent legal obligations.Drawing on these findings, this Article proposes three legal interventions to address algorithmic harms, each aims to mitigate primary harms by targeting aggravating factors. Refined harm-centric algorithmic impact assessments, which impose an obligation on AI developers to address the compounded harms, serve as a starting point for enhancing algorithmic accountability. While these assessments often have a collective focus and overlook individual differences, individual rights in terms of algorithmic systems provide enhanced control over AI applications that could lead to aggregated primary harms. The success of these tools relies on a set of disclosure duties designed to reduce algorithmic opacity in favor of increased harm awareness, especially in situations where AI use is associated with intangible yet far-reaching harms. Taken altogether, this harm-centric procedural approach advances the conversation about the legal definition of algorithmic harms, the boundaries of AI law, and viable approaches to effective algorithmic governance.
Specific Use Cases
- United Nations Working Party on Regulatory Cooperation and Standardization Policies (WP.6), The regulatory compliance of products with embedded artificial intelligence or other digital technologies (2023).
- Noam Kolt, Governing AI Agents, 101 Notre Dame L. Rev. (forthcoming 2025):
- Companies that pioneered the development of language models have now built AI agents that can independently navigate the internet, perform a wide range of online tasks, and increasingly serve as AI personal assistants and virtual coworkers. The opportunities presented by this new technology are tremendous, as are the associated risks. Fortunately, there exist robust analytic frameworks for confronting many of these challenges, namely, the economic theory of principal-agent problems and the common law doctrine of agency relationships. Drawing on these frameworks, this Article makes three contributions. First, it uses agency law and theory to identify and characterize problems arising from AI agents, including issues of information asymmetry, discretionary authority, and loyalty. Second, it illustrates the limitations of conventional solutions to agency problems: incentive design, monitoring, and enforcement might not be effective for governing AI agents that make uninterpretable decisions and operate at unprecedented speed and scale. Third, the Article explores the implications of agency law and theory for designing and regulating AI agents, arguing that new technical and legal infrastructure is needed to support governance principles of inclusivity, visibility, and liability.
Regulatory Techniques
- (*) Andrew Selbst, An Institutional View of Algorithmic Impact Assesssments, 35 Harv. J.L. & Tech 117 (2021):
- “An AIA regulation has two main goals: (1) to require firms to consider social impacts early and work to mitigate them before development, and (2) to create documentation of decisions and testing that can support future policy-learning. The Article argues that institutional logics, such as liability avoidance and the profit motive, will render the first goal difficult to fully achieve in the short term because the practical discretion that firms have allows them room to undermine the AIA requirements. But AIAs can still be beneficial because the second goal does not require full compliance to be successful.”
- Susan Ariel Aaronson, The Age of AI Nationalism and Its Effects, Centre for International Governance Innovation (Sept. 2024):
- Instead of working collaboratively to develop AI, many countries have adopted AI industrial policies. Policy makers are working to nurture sovereign AI. However, some nations are acting in ways that — with or without direct intent — discriminate among foreign market actors. …. AI nationalist policies in one country can make it harder for firms in another country to develop AI. If officials can limit access to key components of the AI supply chain, such as data, capital, expertise or computing power, they may be able to limit the AI prowess of competitors in country Y and/or Z. Moreover, if policy makers can shape regulations in ways that benefit local AI competitors, they may also impede the competitiveness of other nations’ AI developers. AI nationalism may seem appropriate given the import of AI, but this paper1 aims to illuminate how AI nationalistic policies may backfire.
- (*) Girish Sastry et al,. Computing Power and the Governance of Artificial Intelligence (Feb. 2024):
- Computing power, or “compute,” is crucial for the development and deployment of artificial intelligence (AI) capabilities. As a result, governments and companies have started to leverage compute as a means to govern AI. For example, governments are investing in domestic compute capacity, controlling the flow of compute to competing countries, and subsidizing compute access to certain sectors. However, these efforts only scratch the surface of how compute can be used to govern AI development and deployment. Relative to other key inputs to AI (data and algorithms), AI-relevant compute is a particularly effective point of intervention: it is detectable, excludable, and quantifiable, and is produced via an extremely concentrated supply chain. These characteristics, alongside the singular importance of compute for cutting-edge AI models, suggest that governing compute can contribute to achieving common policy objectives, such as ensuring the safety and beneficial use of AI. More precisely, policymakers could use compute to facilitate regulatory visibility of AI, allocate resources to promote beneficial outcomes, and enforce restrictions against irresponsible or malicious AI development and usage. However, while compute-based policies and technologies have the potential to assist in these areas, there is significant variation in their readiness for implementation. Some ideas are currently being piloted, while others are hindered by the need for fundamental research. Furthermore, naïve or poorly scoped approaches to compute governance carry significant risks in areas like privacy, economic impacts, and centralization of power. We end by suggesting guardrails to minimize these risks from compute governance.
- (*) Justin Bullock, A Global AGI Agency Proposal (Jan. 10, 2025):
- In this report, I argue that there are significant plausible benefits to creating an Open Agency type Artificial General Intelligence (AGI) and having a Global AGI Agency as its primary controller, but that there are also immense challenges to these significant benefits. After a brief exploration of the history of digital computation and global governance, I begin with an examination of two AGI types: Unitary Agent and Open Agency. Then, I provide a selected literature review of approaches to AI and AGI governance. The AGI governance literature is sparse, but a few AGI governance proposals are discussed. Next, I examine a proposed scenario in which the world has created a single AGI and that AGI is governed, in large part, by a Global AGI Agency. The Global AGI Agency which has four proposed core elements: (1) an institutional framework involving joint support from the UN and IEEE and from a coalition of national governments and private sector companies, (2) global collaboration and integration across key AI powers and companies, (3) the Open Agency AGI model itself, organized around principles of structured transparency, and (4) mechanisms for democratic accountability and access. Following the discussion of the Global AGI Agency proposal, I describe 10 challenges for this proposal. These 10 challenges include: international cooperation, centralization of power, innovation and competition concerns, private sector resistance, representation and fairness, complexity of governance, technical challenges of the Open Agency AGI model, democratic accountability limitations, security and information risks, and adaptability and future-proofing concerns. In the conclusion I reconsider the key points from the report and remind the reader of the important challenge of good AGI governance.
- Akash R. Wasil, et al., Governing dual-use technologies: Case studies of international security agreements and lessons for AI governance (Sept. 4, 2024):
- International AI governance agreements and institutions may play an important role in reducing global security risks from advanced AI. To inform the design of such agreements and institutions, we conducted case studies of historical and contemporary international security agreements. We focused specifically on those arrangements around dual-use technologies, examining agreements in nuclear security, chemical weapons, biosecurity, and export controls. For each agreement, we examined four key areas: (a) purpose, (b) core powers, (c) governance structure, and (d) instances of non-compliance. From these case studies, we extracted lessons for the design of international AI agreements and governance institutions. We discuss the importance of robust verification methods, strategies for balancing power between nations, mechanisms for adapting to rapid technological change, approaches to managing trade-offs between transparency and security, incentives for participation, and effective enforcement mechanisms.
- Kevin Wei et al., RAND, How Do AI Companies “Fine-Tune” Policy? Examining Regulatory Capture in AI Governance, 2024 AAAI/ACM Conference on AI, Ethics, and Society (Jan. 12, 2025):
- Industry actors in the United States have gained extensive influence in conversations about the regulation of general-purpose artificial intelligence (AI) systems. Although industry participation is an important part of the policy process, it can also cause regulatory capture, whereby industry co-opts regulatory regimes to prioritize private over public welfare. Capture of AI policy by AI developers and deployers could hinder such regulatory goals as ensuring the safety, fairness, beneficence, transparency, or innovation of general-purpose AI systems. In this paper, we first introduce different models of regulatory capture from the social science literature. We then present results from interviews with 17 AI policy experts on what policy outcomes could compose regulatory capture in US AI policy, which AI industry actors are influencing the policy process, and whether and how AI industry actors attempt to achieve outcomes of regulatory capture. Experts were primarily concerned with capture leading to a lack of AI regulation, weak regulation, or regulation that over-emphasizes certain policy goals over others. Experts most commonly identified agenda-setting (15 of 17 interviews), advocacy (13), academic capture (10), information management (9), cultural capture through status (7), and media capture (7) as channels for industry influence. To mitigate these particular forms of industry influence, we recommend systemic changes in developing technical expertise in government and civil society, independent funding streams for the AI ecosystem, increased transparency and ethics requirements, greater civil society access to policy, and various procedural safeguards.
Against Governance/ Critiques of Existing Attempts
- John M. Yun, The Folly of AI Regulation, in Artificial Intelligence and Competition Policy (Alden Abbott, Thibault Schrepel eds. 2024):
- The explosive growth of AI related technology has drawn the attention of government authorities around the globe. As these authorities consider various regulatory proposals, this chapter advocates a model similar to the one used when the internet first emerged, that is, a relatively restrained approach to regulation. This position is founded on several core tenets. First, there can be trade-offs between technological growth rates and addressing specific harms. Thus, even if a regulation is ultimately successful in addressing a specific harm, if it dampens the rate of innovation, then this could lead to a net welfare loss. Second, premature regulatory solutions can crowd out market-based solutions, which may offer more efficient solutions to emergent harms. Finally, premature regulations can have the consequence of entrenching incumbents and raising barriers to entry, which, perversely, harms the competitive process rather than promoting it. Importantly, this proposal is not a call to ignore the dangers that AI generated output can pose – nor is it a call for a “more permissive” treatment of AI under existing laws or existing regulatory schemes of general application.
- Alicia Solow-Niederman, Can AI Standards Have Politics?, 71 UCLA L. Rev. Disc. 2 (2023):
- How to govern a technology like artificial intelligence (AI)? When it comes to designing and deploying fair, ethical, and safe AI systems, standards are a tempting answer. By establishing the best way of doing something, standards might seem to provide plug-and-play guardrails for AI systems that avoid the costs of formal legal intervention. AI standards are all the more tantalizing because they seem to provide a neutral, objective way to proceed in a normatively contested space. But this vision of AI standards blinks a practical reality. Standards do not appear out of thin air. They are constructed. This Essay analyzes three concrete examples from the European Union, China, and the United States to underscore how standards are neither objective nor neutral. It thereby exposes an inconvenient truth for AI governance: Standards have politics, and yet recognizing that standards are crafted by actors who make normative choices in particular institutional contexts, subject to political and economic incentives and constraints, may undermine the functional utility of standards as soft law regulatory instruments that can set forth a single, best formula to disseminate across contexts.
- Daniel Wilf-Townsend, The Deletion Remedy, 103 North Carolina Law Review __ (forthcoming 2025):
- A new remedy has emerged in the world of technology governance. Where someone has wrongfully obtained or used data, this remedy requires them to delete not only that data, but also to delete tools such as machine learning models that they have created using the data. Model deletion, also called algorithmic disgorgement or algorithmic destruction, has been increasingly sought in both private litigation and public enforcement actions. As its proponents note, model deletion can improve the regulation of privacy, intellectual property, and artificial intelligence by providing more effective deterrence and better management of ongoing harms.
But, this article argues, model deletion has a serious flaw. In its current form, it has the possibility of being a grossly disproportionate penalty. Model deletion requires the destruction of models whose training included illicit data in any degree, with no consideration of how much (or even whether) that data contributed to any wrongful gains or ongoing harms. Model deletion could thereby cause unjust losses in litigation and chill useful technologies.
- A new remedy has emerged in the world of technology governance. Where someone has wrongfully obtained or used data, this remedy requires them to delete not only that data, but also to delete tools such as machine learning models that they have created using the data. Model deletion, also called algorithmic disgorgement or algorithmic destruction, has been increasingly sought in both private litigation and public enforcement actions. As its proponents note, model deletion can improve the regulation of privacy, intellectual property, and artificial intelligence by providing more effective deterrence and better management of ongoing harms.
- (*) Mark Lemley & Peter Henderson, The Mirage of Artificial Intelligence Terms of Use Restrictions (Jan 10, 2025):
- Artificial intelligence (AI) model creators commonly attach restrictive terms of use to both their models and their outputs. These terms typically prohibit activities ranging from creating competing AI models to spreading disinformation. Often taken at face value, these terms are positioned by companies as key enforceable tools for preventing misuse, particularly in policy dialogs. The California AI Transparency Act even codifies this approach, mandating certain responsible use terms to accompany models.
But are these terms truly meaningful, or merely a mirage? There are myriad examples where these broad terms are regularly and repeatedly violated. Yet except for some account suspensions on platforms, no model creator has actually tried to enforce these terms with monetary penalties or injunctive relief. This is likely for good reason: we think that the legal enforceability of these licenses is questionable. This Article provides a systematic assessment of the enforceability of AI model terms of use and offers three contributions.
First, we pinpoint a key problem with these provisions: the artifacts that they protect, namely model weights and model outputs, are largely not copyrightable, making it unclear whether there is even anything to be licensed.
Second, we examine the problems this creates for other enforcement pathways. Recent doctrinal trends in copyright preemption may further undermine state-law claims, while other legal frameworks like the DMCA and CFAA offer limited recourse. And anti-competitive provisions likely fare even worse than responsible use provisions.
Third, we provide recommendations to policymakers considering this private enforcement model. There are compelling reasons for many of these provisions to be unenforceable: they chill good faith research, constrain competition, and create quasi-copyright ownership where none should exist. There are, of course, downsides: model creators have even fewer tools to prevent harmful misuse. But we think the better approach is for statutory provisions, not private fiat, to distinguish between good and bad uses of AI and restrict the latter. And, overall, policymakers should be cautious about taking these terms at face value before they have faced a legal litmus test.
- Artificial intelligence (AI) model creators commonly attach restrictive terms of use to both their models and their outputs. These terms typically prohibit activities ranging from creating competing AI models to spreading disinformation. Often taken at face value, these terms are positioned by companies as key enforceable tools for preventing misuse, particularly in policy dialogs. The California AI Transparency Act even codifies this approach, mandating certain responsible use terms to accompany models.
- Gian Volpicelli, Inside the United Nations’ AI policy grab, POLITICO (July 18, 2024).
Just Because
- Ioana Bratu, Artificial Intelligence for Future Lunar Societies: A Critical Analysis of the Liability Problem (Dec. 2, 2021):
- “The introduction of AI systems as part of future Lunar habitats does not come without corresponding risks, especially from a legal perspective. Several legal challenges may appear in the context of a high reliance on these systems, such as: who will be liable in case an AI system will be involved in accidents causing economic losses or loss of human lives? What type of legal framework will be required to mitigate such risks? Will the existing body of laws representing international space law remain sufficient for addressing these challenges?”
Notes & Questions
- This class and the next two are about an important problem: how should governments encourage and/or regulate AI in general. (Note that both public and private law might be marshaled to do this.) Different nations have different answers, although the idea of an AI “race” does seem common (see, for example, the optional National Security Commission on Artificial Intelligence, Final Report (2021)).
- Foundation models are so new that regulators and scholars are scrambling to figure out how even rapidly evolving ideas about AI governance — which were until two+ years ago or less heavily centered on ML and maybe image generation — should cope.
- One major potential cleavage in regulatory strategy is between open source and closed source (proprietary) models.
- To what extent do the Considerations for Governing Open Foundation Models paper analysis and suggestions apply/not apply to proprietary foundation models?
- Considerations for Governing Open Foundation claims that open source models “provide significant benefits to society by promoting competition, accelerating innovation, and distributing power…. Further, open models are marked by greater transparency and, thereby, accountability.” Assuming this is true, are there down sides to open source models? Is it true?
- The Regulatory Alignment paper has a number of suggestions. Among them,
- Firms should do voluntary or mandatory “averse event reporting” — tell regulators about bad things that the AI gets used for.
- How, in practice would an ethical AI developer do this?
- How can the developers know what end-users do with the model?
- Sometimes users will complain, when the result is not what they want; but won’t the most malicious users never complain?
- Even if the developers have ‘knowledge’ in the form of lets say complete usage records, how do they analyze these to extract the “adverse event” information?
- How, in practice would an ethical AI developer do this?
- Government should do oversight of third-party auditors to “verify industry claims”.
- Isn’t that great only so long as the developers claim virtue?
- Does it create a perverse incentive?
- Would it work for open source?
- Firms should do voluntary or mandatory “averse event reporting” — tell regulators about bad things that the AI gets used for.
- Arbel et. al examine the form that AI-related ‘risk mitigation’ should take. While Regulatory Alignment advocates sectoral regulation by subject-expert agencies, this paper says “systematic regulation” is needed as
- Some risks are “inherent” to AI.
- Risks are so numerous and complicated, so beyond most agencies
- Government should require pre-approval of AI’s (licensing) in order to be able to address long-term as well as short-term risks. And fore-fronting the long-term risks will justify the regulations.
- Sectoral regulation doesn’t work well for general-purpose AI as they can be used for many different things.
- But even so there will still be a place for sectoral regulators (e.g. SEC, FDA) for things specific to their mission.
- Centralizing regulation will make it easier to keep up with change.
- Are these arguments persuasive? Is there a converse risk that as AI gets built into everything, a super-regulator will have to regulate … everything?
- To what extent are the “AI Governance” issues identified above captured or achievable by “data governance” as Delacroix et al suggest?
- In other words, if we were somehow to figure out great rules about data quality and reasonable non-discriminatory access to training data, how many “AI governance” issues would take care of themselves through the ordinary mechanisms of competition?
- What issues would remain? Can you group them?
- The Council of Europe’s “Framework Convention” has been signed by:
- Signatories to date are: Andorra, Canada, European Union, Georgia, Iceland, Israel, Japan, Liechtenstein, Montenegro, Norway, Republic of Moldova, San Marino, United Kingdom, United States of America.
- This is only a handful of signatories, but they include most of the countries with powerhouse AI industries. Which ones are missing?
- Does the Convention have teeth? Where?
Class 22: Governance of AI (EU)
- EU Artificial Intelligence Act, High-level summary of the AI Act (Feb. 27, 2024). {Note: optional full text at AI Act Explorer — it’s long.]
- EU PR
- European Commission, European approach to artificial intelligence (Feb. 18, 2025).
- European Commission, AI Act (Feb. 18, 2025).
- European Commission, General-Purpose AI Models in the AI Act – Questions & Answers (March 15, 2025).
- AI Act and Generative AI
- Martin Braun et al. , Navigating Generative AI Under the European Unions Artificial Intelligence Act, 42:4 Computer & Internet Lawyer 1 (Apr. 2025).
- Sections II.B and III of Claire Boine & David Rolnick, Why the AI Act Fails to Understand Generative AI(Jan 15, 2025)
- Sample Critiques
- Phillip Hacker, What’s Missing from the EU AI Act: Addressing the Four Key Challenges of Large Language Models, verfassungsblog.(Dec. 13, 2023) .
- Federica Paolucci, Shortcomings of the AI Act: Evaluating the New Standards to Ensure the Effective Protection of Fundamental Rights, verfassungsblog (March 14, 2024).
- Ljubiša Metikoš, The AI Act: Weak, Weaker, Weakest, 2024-3 Mediaforum 73 (June 13, 2024).
- (If interested, see also the powerful Mei & Sag paper in the optional readings.)
- Review Council of Europe’s Framework Convention from Class 21: Council of Europe, Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law [Vilnius, 5.IX.2024], Council of Europe Treaty Series – No [225] (Opened for signature on Sept. 5, 2024). Optional: See the Explanatory Report.
Optional
- Comprehensive….but very long. Florence G’sell, An Overview of the European Union Framework Governing Generative AI Models and Systems (May 21, 2024):
- This study is a work in progress examining the legal framework governing generative AI models in the European Union. First, it studies the rules already applicable to generative AI models (GDPR, Copyright Law, Civil Liability, Digital Services Act). Second, it examines the latest version of the AI Act as it was voted by the EU Parliament on March 13, 2024. Lastly, it studies the two Directives dealing with civil liability: the new Product Liability Directive voted on March 12, 2024 and the proposal for an AI Liability Directive.
- (*) European Commission, Draft Commission Guidelines on prohibited artificial intelligence practices established by Regulation (EU) 2024/1689 (AI Act) (Feb. 4, 2025). Note also that even if/when approved these are non-binding guidelines.
- (*) Yiyang Mei & Matthew Sag, The Illusion of Rights-Based AI Regulation (Feb. 27, 2025):
- Whether and how to regulate AI is one of the defining questions of our times—a question that is being debated locally, nationally and internationally. We argue that much of this debate is proceeding on a false premise. Specifically, our article challenges the prevailing academic consensus that the European Union’s AI regulatory framework is fundamentally rights-driven and the correlative presumption that other rights-regarding nations should therefore follow Europe’s lead in AI regulation. Rather than taking rights language in EU rules and regulations at face value, we show how EU AI regulation is the logical outgrowth of a particular cultural, political, and historical context. We show that although instruments like the General Data Protection Regulation (GDPR) and the AI Act invoke the language of fundamental rights, these rights are instrumentalized—used as rhetorical cover for governance tools that address systemic risks and maintain institutional stability. As such, we reject claims that the EU’s regulatory framework and the substance of its rules should be adopted as universal imperatives and transplanted to other liberal democracies. To add weight to our argument from historical context, we conduct a comparative analysis of AI regulation in five contested domains—data privacy, cybersecurity, healthcare, labor, and misinformation. This EU-US comparison shows that the EU’s regulatory architecture is not meaningfully rights-based. Our article’s key intervention in AI policy debates is not to suggest that the current American regulatory model is necessarily preferable but that the presumed legitimacy of the EU’s AI regulatory approach must be abandoned.
- Cheng-chi (Kirin) Chang, The First Global AI Treaty: Analyzing the Framework Convention on Artificial Intelligence and the Eu AI Act, 2024 University of Illinois Law Review (Online) 86 (2024):
- This essay provides a comprehensive analysis of the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, adopted in May 2024. As the world’s first legally binding international treaty on AI, the Convention aims to establish common standards for AI governance grounded in human rights, democratic values, and the rule of law. This essay examines the Convention’s key provisions, comparing them with other regulatory frameworks, particularly the EU’s Artificial Intelligence Act. It highlights the Convention’s broad scope, lifecycle approach to AI governance, flexible implementation mechanisms, and emphasis on stakeholder engagement and international cooperation.The analysis explores the Convention’s strengths, including its global ambition, inclusive drafting process, and ethical foundations. However, it also critically assesses potential limitations, such as challenges in enforcement, possible regulatory fragmentation, and implementation hurdles in the face of political and technological complexities. This essay argues that while the Convention marks a crucial step towards coherent global AI governance, its effectiveness will ultimately depend on addressing these challenges and fostering a global culture of responsible AI development.The essay concludes by offering recommendations for enhancing the Convention’s impact, including developing supplementary protocols, strengthening monitoring mechanisms, and promoting ongoing international dialogue. It emphasizes the need for immediate next steps, such as refining the Convention through global stakeholder engagement, stress testing proposed measures, and expanding research to fill critical knowledge gaps. The Convention’s success will be measured by its ability to guide the responsible development and deployment of AI technologies on a global scale, ensuring they serve to enhance rather than undermine human flourishing and societal well-being.
- (*) European Commission, Third Draft of the General-Purpose AI Code of Practice published, written by independent experts (March 11, 2025)
- See Parts 1-4 linked to from the above. You could write about
- Graham Greenleaf, EU AI Act: Brussels Effect(s) or a Race to the Bottom, 190 Privacy Laws & Business International Report 3 (2024):
- The expression ‘the Brussels effect’ is often used rather loosely to refer to any or all of the ways by which EU legislative standards come to be adopted in the practices of companies (or governments) in countries outside the EU (‘third party countries’).This article considers the EU’s Artificial Intelligence Act (AI Act), and the various ways that it could have four types of ‘Brussels effects’. We need to distinguish: Extra-territorial application; De facto corporate adoption; Legislative emulation by 3rd countries; and Adoption in international agreements and standards.The article argues that evaluation of the EU’s influence on the regulation of AI outside the EU requires all four types of ‘Brussels effect’ to be taken into account, because EU influence can take many forms. The combination of all four versions, if it is effective, is an example of the ‘race to the top’ in multi-jurisdictional regulatory standards. The article concludes that, while it is too early to assess the extent to which the EU AI Act will be another successful example of the Brussels effects, so far, the signs are promising.
- (*) Nathalie A. Smuha & Karen Yeung, The European Union’s AI Act: beyond motherhood and apple pie? (Jun 28, 2024):
- In spring 2024, the European Union formally adopted the AI Act, aimed at creating a comprehensive legal regime to regulate AI systems. In so doing, the Union sought to maintain a harmonized and competitive single market for AI in Europe while demonstrating its commitment to protect core EU values against AI’s adverse effects. In this chapter, we question whether this new regulation will succeed in translating its noble aspirations into meaningful and effective protection for people whose lives are affected by AI systems. By critically examining the proposed conceptual vehicles and regulatory architecture upon which the AI Act relies, we argue there are good reasons for skepticism, as many of its key operative provisions delegate critical regulatory tasks to AI providers themselves, without adequate oversight or redress mechanisms. Despite its laudable intentions, the AI Act may deliver far less than it promises.
- (*) Marco Almada & Nicolas Petit, The EU AI Act a medley of product safety and fundamental rights (Draft Oct, 2023).
- Artificial intelligence (‘AI’) is both a critical driver of economic change and a source of potentially extreme negative externalities. For this reason, two leading AI companies, OpenAI and Anthropic, implemented customised governance structures with the double aim of addressing these externalities while remaining financially attractive to their investors. Other AI companies across the world adopted milder governance safeguards for that purpose. This paper studies these innovative governance frameworks by providing what is to the best of our knowledge the most comprehensive review of AI companies’ various governance structures available to date. It then shows that applicable rules are determinant in shaping companies’ ability to tailor their governance structure to their specific needs and examines the limitations of corporate laws in three European jurisdictions—France, Germany, and Italy—and, to a smaller extent, the US—more specifically, Delaware and Nevada—in enabling flexible governance structures that balance profit motives with public benefit objectives. Finally, it proposes recommendations for creating a new corporate form in the European Union to better support the peculiar needs of AI and other innovative companies, in line with the European Commission’s priorities for the next five years.
- Federico Galli & Claudio Novelli, The Many Meanings of Vulnerability in the AI Act and the One Missing (Dec. 13, 2024):
- This paper reviews the different meanings of vulnerability in the AI Act (AIA). We show that the AIA follows a rather established tradition of looking at vulnerability as a trait or a state of certain individuals and groups. It also includes a promising account of vulnerability as a relation but does not clarify if and how AI changes this relation. We spot the missing piece of the AIA: the lack of recognition that vulnerability is an inherent feature of all human-AI interactions, varying in degree based on design choices and modes of interaction. Finally, we show how such a meaning of vulnerability may be incorporated into the AIA by interpreting the concept of “specific social situation” in Article 5 (b).
- Andrés Guadamuz, The EU’s Artificial Intelligence Act and Copyright, __J. World Intellectual Property __ (Forthcoming 2025).
- The EU’s Artificial Intelligence Act, published on July 12, 2024, seeks to establish a consistent legal framework for AI systems within the EU, promoting trustworthy and human-centric AI while safeguarding various fundamental rights. The Act classifies AI applications into three risk categories: unacceptable risk, high risk, General-purpose AI models with systemic risk, and low or no risk, each with corresponding regulatory measures. Although initially not focused on copyright issues, the rise of generative AI led to specific provisions addressing General Purpose AI models (GPAIs). These provisions include transparency obligations, particularly regarding the technical documentation and content used for training AI models, and policies to respect EU copyright laws. The Act aims to balance the interests of copyright holders and AI developers, ensuring compliance while fostering innovation and protecting rights.
- Katerina Demetzou & Vasileios Rovilos, FPF, Conformity Assessments Under the proposed EU AI Act: A Step-By-Step Guide (Nov. 2023).
Notes & Questions
- Why not just set liabilities in the hopes of duly incentivizing market participants.?
- What are the limits of liability-oriented regulatory regimes?
- Are there sectors where they work particularly well or poorly?
- Does the existence of limited liability undermine this mode of regulation
- For minor transgressions (“cost of doing business”)?
- For major transgressions (“see you in bankruptcy court”)?
- How do these liability-based limits compare to risk-based regulation?
- What is the status of facial recognition systems under the EU AI Regulation?
- How about deepfakes?
- The EU AI Regulation may be ambitious in some ways, but commentators quickly attacked it for
- Being over-inclusive;
- Being under-inclusive;
- Having loopholes.
Are these fair critiques?
- Phillip Hacker argues that “The first glaring omission in the AI Act is a comprehensive framework for AI safety for all foundation models, including cybersecurity, mandatory red teaming against illegal content, and content moderation.” The remedy he argues is ” to require a robust, decentralized content moderation system” modeled which elsewhere he says would shift duties “towards LGAIM deployers and users, i.e., those calibrating LGAIMs for and using them in concrete high-risk applications.
- While some general rules, such as data governance, non-discrimination and cybersecurity provisions, should indeed apply to all foundation models (see Section 4), the bulk of the high-risk obligations of the AI Act should be triggered for specific use cases only and target primarily deployers and professional users.” What is more, if the use case is high-risk, “a limited, staged release, coupled with only access for security researchers and selected stakeholders, may often be preferable.”
- Is this realistic? Economically feasible?
- Note that Hacker anticipates the objection, writing, ” Some might argue that these measures could stifle innovation or are too ambitious. However, the rapid development and potential risks of AI technologies necessitate bold steps. Does sensible FM regulation deter innovation? The plain answer is: No. A new study finds that even for quite advanced but not even top-notch 10^24 FLOPs models, such as Bard, ChatGPT etc. (i.e., lower than GPT-4 and Gemini), expected compliance costs only add up to roughly 1% of total development costs. This is a sum that everyone, including smaller European providers … can and should invest in basic industry best practices for AI safety.
- Hacker also questions if open-source models, being more manipulable, are worth the risk. Would banning them be worth the costs? (What are the costs?)
- Can you think of examples of AI manipulation we have read about or discussed that would not be prohibited? Is this a problem?
- How does the Act constrain a bank or financial services provider seeking useful information about the credit risk/worthiness of a potential customer?
- What constrains would the Act impose on an AI offering psychological counseling? On an AI designed to identify persons who might be at risk of suicide?
- How does the “right to an explanation” work if part of the decision is based on AI whose individual actions may not be transparent (“black box”)? Are proponents of rights to explainability demanding what would in effect be a ban on AI for certain applications?
- Many of these issues, and some critiques, invite regulation of end-users instead of concentrating on parties earlier in the production pipeline. Why might that be a good or bad idea? Are there particular obstacles to that approach?
Class 23: Governance of AI (U.S.)
- Pages 7-18 of Jennifer Wang, Stanford HAI & RegLab, Assessing the Implementation of Federal AI Leadership and Compliance Mandates (Jan 2025).
- EO 14179, Removing Barriers to American Leadership in Artificial Intelligence (Jan. 23, 2025)
- Martin J. Mackowski et al., Key Insights on President Trump’s New AI Executive Order and Policy & Regulatory Implications, 15 The National Law Review (No. 41) (Feb. 10, 2025).
- Michael C. Horowitz, Council on Foreign Relations, What to Know About the New Trump Administration Executive Order on Artificial Intelligence (Jan. 24, 2025).
- California
- California Dept. of Justice, Legal Advisory – Application of Existing CA Laws to Artificial Intelligence (Jan. 2025).
- California Dept. of Justice, Final Legal Advisory – Application of Existing CA Laws to Artificial Intelligence in Healthcare (Jan. 2025).
- Pages 2-3 (exec summary) & Sections 1.3, 1.4, 2.2, 3.4, 5.1 & 5.2 of Jennifer Tour Chayes, Mariano-Florentino Cuéllar& Li Fei-Fei, Draft Report of the Joint California Policy Working Group on AI Frontier Models (Mar. 18, 2025)
- Khari Johnson, California Is Considering 30 New AI Regulations. Trump Wants None, The Markup (March 13, 2025).
Optional
- The Road Taken — and then Abandoned
- White House, OSTP, Blueprint for an AI Bill of Rights (Oct. 2023). [Repealed]
- White House, Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (Oct. 30, 2023) [Repealed]
- Office of Management and Budget, M-24-10-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-Artificial-Intelligence-1 (March 28, 2024). [Status under review]
- Elisa Jillson, FTC, Aiming for truth, fairness, and equity in your company’s use of AI (Apr. 19, 2021). (No longer on FTC web page)
- (*) NIST, NIST-AI- 600-1, Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile (July 25. 2024) and companion NIST AI Risk Management Framework Playbook.
- “The AI RMF is intended for voluntary use to address risks in the design, development, use, and evaluation of AI products, services, and systems. AI research and development, as well as the standards landscape, is evolving rapidly. For that reason, the AI RMF and its companion documents will evolve over time and reflect new knowledge, awareness, and practices. NIST intends to continue its engagement with stakeholders to keep the Framework up to date with AI trends and reflect experience based on the use of the AI RMF. Ultimately, the AI RMF will be offered in multiple formats, including online versions, to provide maximum flexibility.
“Part 1 of the AI RMF draft explains the motivation for developing and using the Framework, its audience, and the framing of AI risk and trustworthiness.
“Part 2 includes the AI RMF Core and a description of Profiles and their use.”
- “The AI RMF is intended for voluntary use to address risks in the design, development, use, and evaluation of AI products, services, and systems. AI research and development, as well as the standards landscape, is evolving rapidly. For that reason, the AI RMF and its companion documents will evolve over time and reflect new knowledge, awareness, and practices. NIST intends to continue its engagement with stakeholders to keep the Framework up to date with AI trends and reflect experience based on the use of the AI RMF. Ultimately, the AI RMF will be offered in multiple formats, including online versions, to provide maximum flexibility.
- UK National Cyber Security Centre (NCSC), the US Cybersecurity and Infrastructure Security Agency (CISA), and [many international partners] Guidelines for secure AI system development (2023):
- “Artificial intelligence (AI) systems have the potential to bring many benefits to society. However, for the opportunities of AI to be fully realised, it must be developed, deployed and operated in a secure and responsible way. Cyber security is a necessary precondition for the safety, resilience, privacy, fairness, efficacy and reliability of AI systems.However, AI systems are subject to novel security vulnerabilities that need to be considered alongside standard cyber security threats. When the pace of development is high – as is the case with AI – security can often be a secondary consideration. Security must be a core requirement, not just in the development phase, but throughout the life cycle of the system.This document recommends guidelines for providers of any systems that use AI, whether those systems have been created from scratch or built on top of tools and services provided by others. Implementing these guidelines will help providers build AI systems that function as intended, are available when needed, and work without revealing sensitive data to unauthorised parties.”
- (*) Cameron Averill, Cameron Averill, Algorithmic Reason-Giving, Arbitrary and Capricious Review, and the Need for a Clear Normative Baseline, 93 U. Cin. L. Rev. 40 (2024):
- Opacity compromises reason-giving, a basic pillar of administrative governance. Inadequate reason-giving poses legal problems for agencies because the reasons agencies provide for their decisions form the basis of judicial review. Without adequate reason-giving, agency action will fail arbitrary and capricious review under the Administrative Procedure Act. Inadequate reason-giving poses normative problems, too, since reason-giving promotes quality decision making, fosters accountability, and helps agencies respect parties’ dignitary interests.
This Article considers whether agencies can use algorithms without running afoul of standards, both legal and normative, for reason-giving. It begins by disaggregating algorithmic reason-giving, explaining that algorithmic reason-giving includes both the reasons an agency gives for an algorithm’s design (systemic reason-giving) and the reasons an agency gives for an individual decision when the decision making process involves an algorithm (case-specific reason-giving). This Article then evaluates systemic reason-giving and case-specific reason-giving in turn. Once the normative assessment is complete, this Article considers it implications for arbitrary and capricious review, concluding that at least some algorithms should pass judicial muster. The Article finishes by offering a framework that courts can use when evaluating whether the use of an algorithm is arbitrary and capricious, and that agencies can use to decide whether to create an algorithm in the first place.
- Opacity compromises reason-giving, a basic pillar of administrative governance. Inadequate reason-giving poses legal problems for agencies because the reasons agencies provide for their decisions form the basis of judicial review. Without adequate reason-giving, agency action will fail arbitrary and capricious review under the Administrative Procedure Act. Inadequate reason-giving poses normative problems, too, since reason-giving promotes quality decision making, fosters accountability, and helps agencies respect parties’ dignitary interests.
- (*) Kevin M.K. Fodouop, The Road to Optimal Safety: Crash-Adaptive Regulation of Autonomous Vehicles at the National Highway Traffic Safety Administration, 98 N.Y.U.L. Rev. 1358 (2023):
- Autonomous vehicles are now driving people around in cities from San Francisco to Phoenix. But how to regulate the safety risks from these autonomous driving systems (ADS) remains uncertain. While state tort law has traditionally played a fundamental role in controlling car crash risks, this Note argues that the development of novel data tracking and simulation tools by the ADS industry has led to a regulatory paradigm shift: By leveraging these tools for regulatory analysis, the federal National Highway Traffic Safety Administration (NHTSA) could iteratively adapt and improve its regulatory standards after each crash. While many scholars have advanced proposals for how state products liability can adapt to ADS crashes, this Note is the first to propose such a model of “crash-adaptive regulation” for NHTSA and to show that this model will prove superior to tort liability in controlling ADS crash risks. In presenting this new regulatory model, this Note engages with two rich theoretical debates. First, it compares the efficacy of tort liability and agency regulation in controlling ADS crash risks. Second, it evaluates whether ADS safety standards should be set at the federal level or at the state level. It concludes that ADS’ technical characteristics call for an agency regulatory scheme at the federal level and urges NHTSA to build the technological and operational expertise necessary to operate a crash-adaptive regulatory regime.
- (*) Alicia Solow-Niederman, Do Cases Generate Bad AI Law?, 25 Columb. Sci. & Tech. L. Rev. 261 (2024):
- There’s an AI governance problem, but it’s not (just) the one you think. The problem is that our judicial system is already regulating the deployment of AI systems—yet we are not coding what is happening in the courts as privately driven AI regulation. That’s a mistake. AI lawsuits here and now are determining who gets to seek redress for AI injuries; when and where emerging claims are resolved; what is understood as a cognizable AI harm and what is not, and why that is soThis Essay exposes how our judicial system is regulating AI today and critically assesses the governance stakes. When we do not situate the generative AI cases being decided by today’s human judges as a type of regulation, we fail to consider which emerging tendencies of adjudication about AI are likely to make good or bad AI law. For instance, litigation may do good agenda-setting and deliberative work as well as surface important information about the operation of private AI systems. But adjudication of AI issues can be bad, too, given the risk of overgeneralization from particularized facts; the potential for too much homogeneity in the location of lawsuits and the kinds of litigants; and the existence of fundamental tensions between social concerns and current legal precedents.If we overlook these dynamics, we risk missing a vital lesson: AI governance requires better accounting for the interactive relationship between regulation of AI through the judicial system and more traditional public regulation of AI. Shifting our perspective creates space to consider new AI governance possibilities. For instance, litigation incentives (such as motivations for bringing a lawsuit, or motivations to settle) or the types of remedies available may open up or close down further regulatory development. This shift in perspective also allows us to see how considerations that on their face have nothing to do with AI – such as access to justice measures and the role of judicial minimalism – in fact shape the path of AI regulation through the courts. Today’s AI lawsuits provide an early opportunity to expand AI governance toolkits and to understand AI adjudication and public regulation as complementary regulatory approaches. We should not throw away our shot.
- Carson Ezell & Abraham Loeb, Post-Deployment Regulatory Oversight for General-Purpose Large Language Models (2023):
- “The development and deployment of increasingly capable, general-purpose large language models (LLMs) has led to a wide array of risks and harms from automation that are correlated across sectors and use cases. Effective regulation and oversight of general-purpose AI (GPAI) requires the ability to monitor, investigate, and respond to risks and harms that appear across use cases, as well as hold upstream developers accountable for downstream harms that result from their decisions and practices. We argue that existing processes for sector-specific AI oversight in the U.S. should be complemented by post-deployment oversight to address risks and harms specifically from GPAI usage. We examine oversight processes implemented by other federal agencies as precedents for the GPAI oversight activities that a regulatory agency can conduct. The post-deployment oversight function of a regulatory agency can complement other GPAI-related regulatory functions that federal regulatory agencies may perform which are discussed elsewhere in the literature, including pre-deployment licensing or model evaluations for LLMs.”
- *(*) Andrew D. Selbst & Solon Barocas, Unfair Artificial Intelligence: How FTC Intervention Can Overcome The Limitations Of Discrimination Law, 71 U. Pa. L. Rev. 1023 (2023):
- “[W]e argue that FTC intervention in this space is a positive and overdue development. The Commission can do a lot of good by applying its authority to address unfair and deceptive acts and practices to discriminatory AI. Surprisingly, though the discriminatory harms of AI have been frequently discussed in the last decade of legal literature and scholars have occasionally suggested a possible role for the FTC, there has been no full-length scholarly treatment of the benefits of the Commission’s involvement in regulating discriminatory AI and its legal authority to do so. We provide that treatment here.”
- Lawyers’ Committee for Civil Rights, Online Civil Rights Act (2023).
- FTC, XFTC Launches Inquiry into Generative AI Investments and Partnerships: Agency Issues 6(b) Orders to Alphabet, Inc., Amazon.com, Inc., Anthropic PBC, Microsoft Corp., and OpenAI, Inc.
- Ryan Calo & Danielle Keats Citron, The Automated Administrative State: A Crisis of Legitimacy, 70 Emory L.J. 797 (2021).
- Margot E. Kaminski & Jennifer M. Urban, The Right to Contest AI, 121 Columb. L. Rev 1957 (2021).
- NTIA, AI Accountability policy report.
- Kate Crawford & Jason Schultz, AI Systems as State Actors, 119 Columb. L. Rev. 1941 (2019).
- Andrew Tutt, An FDA for Algorithms, 68 Admin. L. Rev. 83 (2017).
- Huy Roberts et al., Achieving a ‘Good AI Society’: Comparing the Aims and Progress of the EU and the US, 27 Sci. & Engr. Ethics (2021).
- “This article provides a comparative analysis of the European Union (EU) and the United States’ (US) AI strategies and considers (i) the visions of a ‘Good AI Society’ that are forwarded in key policy documents and their opportunity costs, (ii) the extent to which the implementation of each vision is living up to stated aims and (iii) the consequences that these differing visions of a ‘Good AI Society’ have for transatlantic cooperation. The article concludes by comparing the ethical desirability of each vision and identifies areas where the EU, and especially the US, need to improve in order to achieve ethical outcomes and deepen cooperation.”
- (*) Bhargavi Ganesh, Stuart Anderson, and Shannon Vallor, If It Ain’t Broke Don’t Fix It: Steamboat Accidents and their Lessons for AI Governance (We Robot 2022 draft). Winner, We Robot 2022 “best paper” award.
- “In this paper, we use the example of steamboat regulation in the 1800’s to challenge latent skepticism regarding the feasibility of governance of AI-driven systems. First, we highlight the constructive nature of US government responses to steamboat accidents, despite the limited governance resources available at the time. Second, we draw parallels between challenges to steamboat and AI governance and situate existing proposals for AI governance in relation to these past efforts. Finally, in noting some of the novel governance challenges posed by AI, we argue that maintaining a historical perspective helps us more precisely target these novelties when generating policy recommendations in our own interdisciplinary research group.”
- Carlos Ignacio Gutierrez Gaviria, The Role of Artificial Intelligence in Pushing the Boundaries of U.S. Regulation: A Systematic Review, 38 Santa Clara High Tech L.J. 123 (2022):
- “[The article] addresses two research questions: 1. What U.S. regulatory gaps exist due to Al methods and applications? 2. When looking across all of the gaps identified in the first research question, what trends and insights emerge that can help stakeholders plan for the future?
“These questions are answered through a systematic review of four academic literature databases in the hard and social sciences. [… which allows it] to effectively characterize regulatory gaps caused by Al in the U.S. In addition, it finds that most gaps: do not require new regulation nor the creation of governance frameworks for their resolution, are found at the federal and state levels of government, and Al applications are recognized more often than methods as their cause.”
- “[The article] addresses two research questions: 1. What U.S. regulatory gaps exist due to Al methods and applications? 2. When looking across all of the gaps identified in the first research question, what trends and insights emerge that can help stakeholders plan for the future?
- (*) W. Nicholson Price II, Distributed Governance of Medical AI, 25 SMU Sci. & Tech. L Rev. 3 (2022):
- Artificial intelligence (AI) promises to bring substantial benefits to medicine. In addition to pushing the frontiers of what is humanly possible, like predicting kidney failure or sepsis before any human can notice, it can democratize expertise beyond the circle of highly specialized practitioners, like letting generalists diagnose diabetic degeneration of the retina. But AI doesn’t always work, and it doesn’t always work for everyone, and it doesn’t always work in every context. AI is likely to behave differently in well-resourced hospitals where it is developed than in poorly resourced frontline health environments where it might well make the biggest difference for patient care. To make the situation even more complicated, AI is unlikely to go through the centralized review and validation process that other medical technologies undergo, like drugs and most medical devices. Even if it did go through those centralized processes, ensuring high-quality performance across a wide variety of settings, including poorly resourced settings, is especially challenging for such centralized mechanisms. What are policymakers to do? This short Essay argues that the diffusion of medical AI, with its many potential benefits, will require policy support for a process of distributed governance, where quality evaluation and oversight take place in the settings of application—but with policy assistance in developing capacities and making that oversight more straightforward to undertake. Getting governance right will not be easy (it never is), but ignoring the issue is likely to leave benefits on the table and patients at risk
- Keio University & Assoc. of Pacific Rim Universities, AI for Social Good (2020). A Japanese perspective on how AI could be used to solve all sorts of problems….
- Ryan Mac et al, Surveillance Nation: Clearview AI Offered Thousands Of Cops Free Trials, Buzzfeed (Apr. 9, 2021).
- H.R.8152 – American Data Privacy and Protection Act (2022).
- (*) Mihailis Diamantis, Vicarious Liability for AI, 99 Ind. L.J. 317 (2023).
- “Algorithms are trainable artifacts with “off” switches, not natural phenomena. They are not people either, as a matter of law or metaphysics. An appealing way out of this dilemma would start by complicating the standard A-harms-B scenario. It would recognize that a third party, C, is usually lurking nearby when an algorithm causes harm, and that third party is a person (legal or natural). By holding third parties vicariously accountable for what their algorithms do, the law could promote efficient incentives for people who develop or deploy algorithms and secure just outcomes for victims. The challenge is to find a model of vicarious liability that is up to the task. “Algorithmic regulation will require federal uniformity, expert judgment, political independence, and pre-market review to prevent – without stifling innovation – the introduction of unacceptably dangerous algorithms into the market This Article proposes that certain classes of new algorithms should not be permitted to be distributed or sold without approval from a government agency designed along the lines of the FDA. This ‘FDA for Algorithms’ would approve certain complex and dangerous algorithms when it could be shown that they would be safe and effective for their intended use and that satisfactory measures would be taken to prevent their harmful misuse. Lastly, this Article proposes that the agency should serve as a centralized expert regulator that develops guidance, standards, and expertise in partnership with industry to strike a balance between innovation and safety.”
- Bridget A. Fahey, Data Federalism, 135 Harv. L. Rev. 1007 (2022):
- “Private markets for individual data have received significant and sustained attention in recent years. But data markets are not for the private sector alone. In the public sector, the federal government, states, and cities gather data no less intimate and on a scale no less profound. And our governments have realized what corporations have: It is often easier to obtain data about their constituents from one another than to collect it directly. As in the private sector, these exchanges have multiplied the data available to every level of government for a wide range of purposes, complicated data governance, and created a new source of power, leverage, and currency between governments.
“This Article provides an account of this vast and rapidly expanding intergovernmental marketplace in individual data. In areas ranging from policing and national security to immigration and public benefits to election management and public health, our governments exchange data both by engaging in individual transactions and by establishing “data pools” to aggregate the information they each have and diffuse access across governments. Understanding the breadth of this distinctly modern practice of data federalism has descriptive, doctrinal, and normative implications.
“In contrast to conventional cooperative federalism programs, Congress has largely declined to structure and regulate intergovernmental data exchange. And in Congress’s absence, our governments have developed unorthodox cross-governmental administrative institutions to manage data flows and oversee data pools, and these sprawling, unwieldy institutions are as important as the usual cooperative initiatives to which federalism scholarship typically attends.
“Data exchanges can also go wrong, and courts are not prepared to navigate the ways that data is both at risk of being commandeered and ripe for use as coercive leverage. I argue that these constitutional doctrines can and should be adapted to police the exchange of data. I finally place data federalism in normative frame and argue that data is a form of governmental power so unlike the paradigmatic ones our federalism is believed to distribute that it has the potential to unsettle federalism in both function and theory.”
- “Private markets for individual data have received significant and sustained attention in recent years. But data markets are not for the private sector alone. In the public sector, the federal government, states, and cities gather data no less intimate and on a scale no less profound. And our governments have realized what corporations have: It is often easier to obtain data about their constituents from one another than to collect it directly. As in the private sector, these exchanges have multiplied the data available to every level of government for a wide range of purposes, complicated data governance, and created a new source of power, leverage, and currency between governments.
- Frank Pasquale, Data-Informed Duties in AI Development, 119 Columb. L. Rev. 1917 (2019).
- The National Artificial Intelligence (AI) Initiative Act can be found under “DIVISION E–NATIONAL ARTIFICIAL INTELLIGENCE INITIATIVE ACT OF 2020” in the final text of the NDAA. Congress passed it as part of a Defense Appropriation Act that was initially vetoed by President Trump. It is instructive to compare this statute with the EU draft above — they take very different approaches to AI!
- Parts IV-VI of Michael Spiro, The FTC and AI Governance A Regulatory Proposal, 10 Seattle J. Tech, Env. & Innovation L. 26 (2020).
- California Executive Order N-12-23.
- NYC Regulatory Attempts
- UPDATE: James Barron, How New York Is Regulating A.I., NY Times (June 22, 2023).
- NYC Law in relation to automated decision systems used by agencies (Jan 11, 2018)
- Rebecca Heilweil, New York City couldn’t pry open its own black box algorithms. So now what?, Vox (Dec. 18, 2019)
- Artificial Lawyer, Bias In Recruitment Software To Be ‘Illegal’ in New York, Vendors Will Need Bias Audit (March 12, 2020).
- Executive Order on AI: “Maintaining American Leadership in Artificial Intelligence” (Feb. 14, 2019) (Trump administration).
- Ben Winters, EPIC, Playing Both Sides: Impact of Tech Industry on Early Federal AI Policy (Apr. 1, 2022):
- “The current approach best reflects the desired benefits of [Google CEO Eric] Schmidt and others that are instrumental in guiding policy, while directly benefiting from it. Congress and federal agencies must allocate additional funding and resources to AI accountability so there is not a reliance on outside groups with clear conflicts of interest to develop policy.”
- Brian Tarran, UK government sets out 10 principles for use of generative AI, Real World Data Science (Jan. 22, 2024).
Notes & Questions
- To what extent could the US Federal government implement EU-style AI rules if it wanted to? Aside from the political constraints, are there Constitutional constraints?
- More generally, does the US need a single AI regulator?
- If so, what parts of AI activities should be in its purview?
- Is there a current agency that could/should be tasked with the job (assuming additional resources), or does this call for a new purpose-built AI regulator?
- If we’re going to parcel out regulatory authority, what parts are best regulated
- Internationally?
- Nationally?
- By states?
- By being left to the market and/or voluntary ethics codes drafted by professional or other private bodies?
- If we are not going to have a single AI regulator at the federal level, how should we divide up the work? Should, say, the FDA do medical issues, the SEC do securities?
- Does that risk inconsistency and/or duplication? How do we handle that?
- Is it reasonable to expect several agencies to have the in-house talent to do good regulation, monitoring, and enforcement, especially given the high salaries that AI experts currently command in the private sector?
- In the absence of federal action, some states, notably California may step into the breach.
- How does the Californian approach differ from/resemble the EU’s?
- What are the costs and benefits of state-level regulation as opposed to federal?
- How much do those matter if AI firms are heavily concentrated in California?
- What matters covered by EU rules (if any?) are most needed in the US?
- Is the US in compliance with our obligations under the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (Class 21)?
- If not, exactly how?
- If not, then what?
Class 24: AI and Our Future :>

Don’t panic: Many of these readings are really short! And most are cheerful!
-
In General

- Excerpt from Dharmesh Shah, The “Moore’s Law” for AI agents (March 26, 2025).
- Ethan Mollick, Prophecies of the Flood, One Useful Thing (Jan. 10, 2025).
- Ethan Mollick, No elephants: Breakthroughs in image generation, One Useful Thing (Mar 30, 2025).
-
Science & Medicine
- Excerpt from Dario Amodei, Machines of Loving Grace (optional full text).
- Executive Summary of [optional] (*) Stanford Center for Digital Health, Generative AI for Health in Low & Middle Income Countries (Mar. 25, 2025).
- United Nations, Explainer: How AI helps combat climate change (Nov. 3, 2023).
- Amil Merchant and Ekin Dogus Cubuk, Millions of new materials discovered with deep learning, Google DeepMind (Nov. 29, 2023).\
-
Work
- Steve Lohr, How One Tech Skeptic Decided A.I. Might Benefit the Middle Class, NY Times (April 1, 2024).
-
Quality of Life
- Sayash Kapoor & Arvind Narayanan, We Looked at 78 Election Deepfakes. Political Misinformation is not an AI Problem: Technology Isn’t the Problem—or the Solution, AI Snake Oil (Dec. 13, 2024).
- Pages 1-4 (line 2) & 17 (from “Discussion”)-19 of Thomas H. Costello, Gordon Pennycook & David Rand, Just the facts: How dialogues with AI reduce conspiracy beliefs (Feb. 16, 2025).
- Tyler Weitzman, Empowering Individuals With Disabilities Through AI Technology, Forbes (Jun 18, 2023).
- Natalie Smithson, 11 ways chatbots improve customer service, EBIAI (Oct. 11, 2023). (Yah, right.)
- Artificial Intelligence and the Future of Psychiatry, IEEE Pulse (June 28, 2020). (What could possibly go wrong!)
- Niall Firth, How generative AI could reinvent what it means to play, MIT Tech Rev. (June 20, 2024).
Optional
- Ethan Mollick, A new generation of AIs: Claude 3.7 and Grok 3, One Useful Thing (Feb. 24, 2025).
- Bill Tomlinson et al., The carbon emissions of writing and illustrating are lower for AI than for humans. 14 Sci Rep 3732 (2024):
- As AI systems proliferate, their greenhouse gas emissions are an increasingly important concern for human societies. In this article, we present a comparative analysis of the carbon emissions associated with AI systems (ChatGPT, BLOOM, DALL-E2, Midjourney) and human individuals performing equivalent writing and illustrating tasks. Our findings reveal that AI systems emit between 130 and 1500 times less CO2e per page of text generated compared to human writers, while AI illustration systems emit between 310 and 2900 times less CO2e per image than their human counterparts. Emissions analyses do not account for social impacts such as professional displacement, legality, and rebound effects. In addition, AI is not a substitute for all human tasks. Nevertheless, at present, the use of AI holds the potential to carry out several major activities at much lower emission levels than can humans.
- Daniel Slate et al., Adoption of Artificial Intelligence by Electric Utilities, 45 Energy Law Journal 1 (2024):
- Adopting Artificial Intelligence (AI) in electric utilities signifies vast, yet largely untapped potential for accelerating a clean energy transition. This requires tackling complex challenges such as trustworthiness, explainability, privacy, cybersecurity, and governance, balancing these against AI’s benefits. This article aims to facilitate dialogue among regulators, policymakers, utilities, and other stakeholders on navigating these complex issues, fostering a shared understanding and approach to leveraging AI’s transformative power responsibly. The complex interplay of state and federal regulations necessitates careful coordination, particularly as AI impacts energy markets and national security. Promoting data sharing with privacy and cybersecurity in mind is critical. The article advocates for ‘realistic open benchmarks’ to foster innovation without compromising confidentiality. Trustworthiness (the system’s ability to ensure reliability and performance, and to inspire confidence and transparency) and explainability (ensuring that AI decisions are understandable and accessible to a large diversity of participants) are fundamental for AI acceptance, necessitating transparent, accountable, and reliable systems. AI must be deployed in a way that helps keep the lights on. As AI becomes more involved in decision-making, we need to think about who’s responsible and what’s ethical. With the current state of the art, using generative AI for critical, near real-time decision-making should be approached carefully. While AI is advancing rapidly both in terms of technology and regulation, within and beyond the scope of energy specific applications, this article aims to provide timely insights and a common understanding of AI, its opportunities and challenges for electric utility use cases, and ultimately help advance its adoption in the power system sector, to accelerate the equitable clean energy transition.
- Rupert Macey-Dare, Updates on: “AGI-Drake Equations” for the Likelihood and Expected Development Date of AGI- Artificial General Intelligence (May 29, 2024):
- {Revised to predict] an earlier … development date … within a reasonably narrow time band, occurring between c.2034 to c.2054, with a median estimate of c. 2044 i.e. c.33% earlier than previously estimated and now just c.20 years away.
- Valerio Capraro et al., The impact of generative artificial intelligence on socioeconomic inequalities and policy making, arXiv:2401.05377 [cs.CY] (Dec. 16, 2023):
- Generative artificial intelligence has the potential to both exacerbate and ameliorate existing socioeconomic inequalities. In this article, we provide a state-of-the-art interdisciplinary overview of the potential impacts of generative AI on (mis)information and three information-intensive domains: work, education, and healthcare. Our goal is to highlight how generative AI could worsen existing inequalities while illuminating how AI may help mitigate pervasive social problems. In the information domain, generative AI can democratize content creation and access, but may dramatically expand the production and proliferation of misinformation. In the workplace, it can boost productivity and create new jobs, but the benefits will likely be distributed unevenly. In education, it offers personalized learning, but may widen the digital divide. In healthcare, it might improve diagnostics and accessibility, but could deepen pre-existing inequalities. In each section we cover a specific topic, evaluate existing research, identify critical gaps, and recommend research directions, including explicit trade-offs that complicate the derivation of a priori hypotheses. We conclude with a section highlighting the role of policymaking to maximize generative AI’s potential to reduce inequalities while mitigating its harmful effects. We discuss strengths and weaknesses of existing policy frameworks in the European Union, the United States, and the United Kingdom, observing that each fails to fully confront the socioeconomic challenges we have identified. We propose several concrete policies that could promote shared prosperity through the advancement of generative AI. This article emphasizes the need for interdisciplinary collaborations to understand and address the complex challenges of generative AI.
- David A. Bray, Artificial Intelligence and Synthetic Biology Are Not Harbingers of Doom (Oct. 17, 2023):
- Contrary to many people’s fears, artificial intelligence (AI) can be a positive force in advancing biological research and biotechnology. The assumption that AI will super-empower the risks that already exist for the misuse of biotech to develop and spread pathogens and fuel bioterrorism misses three key points. First, the data must be out there for either an AI or a human to use it. Second, governments stop bad actors from using bio for nefarious purposes by focusing on the actors’ precursor behaviors. Third, given how wrong large language models (LLMs) often are and their risk of hallucinations, any would-be AI intended to provide advice on biotech will have to be checked by a human expert. In contrast, AI can be a positive force in advancing biological research and biotechnology — and insights from biology can power the next wave of AI for the benefit of humankind. Private and public-sector leaders need to make near-term decisions and actions to lay the foundation for maximizing the benefits of AI and biotech. National and international attention should focus on both new, collective approaches to data curation and ensuring the right training approaches for AI models of biological systems.
- Tonja Jacobi & Matthew Sag, We are the AI problem, Emory Law Journal Online (May 7, 2024):
- In this Essay we note that some controversies surrounding AI are strikingly familiar and quotidian; they reflect existing cultural divides and obsessions of the moment. The recent flare-up over Google’s Gemini illustrates how many of the debates about AI primarily reflect social problems, rather than technological ones. We argue that, for those upset about AI wokeness gone wild, it is important to understand that, in many ways, the problem is us. Gemini’s un-whitewashing of history resulted in absurd creations, but the situation reflects some truths about our society—that the underlying problem is society, not inherently the new technology representing it. There are four important elements about the creation process of AI that explain the “Black-Nazi problem” (for want of a better short-hand ) that also reveal broader problems about society. Understanding those aspects of the AI creation process reveals that AI’s foibles are a symptom of our ongoing struggle with the ramifications of past inequality and the difficulty of balancing inherently conflicting goals, such as aspirational diversity and historical accuracy. The Gemini storm in a teacup over “woke AI” gives us a window onto other intractable socio-technical problems we need to confront in AI.
- Erik Brynjolfsson, The Promise & Peril of Human-Like Artificial Intelligence, Daedalus (Jan 12, 2022):
- “[N]ot all types of AI are human-like—in fact, many of the most powerful systems are very different from humans —and an excessive focus on developing and deploying [human-like artificial intelligence] can lead us into a trap. As machines become better substitutes for human labor, workers lose economic and political bargaining power and become increasingly dependent on those who control the technology. In contrast, when AI is focused on augmenting humans rather than mimicking them, then humans retain the power to insist on a share of the value created. What’s more, augmentation creates new capabilities and new products and services, ultimately generating far more value than merely human-like AI. While both types of AI can be enormously beneficial, there are currently excess incentives for automation rather than augmentation among technologists, business executives, and policymakers.”
- (*) Orley Lobel, The Law of AI for Good, 75 Fl. L. Rev. 1073 (2023):
- Analyzing a new Federal Trade Commission (FTC) report, the Biden administration’s 2022 AI Bill of Rights and American and European legislative reform efforts, including the Algorithmic Accountability Act of 2022, the Data Privacy and Protection Act of 2022, the European General Data Protection Regulation (GDPR) and the new draft EU AI Act, the article finds that governments are developing regulatory strategies that almost exclusively address the risks of AI while paying short shrift to its benefits. The policy focus on risks of digital technology is pervaded by logical fallacies and faulty assumptions, failing to evaluate AI in comparison to human decision-making and the status quo. The article presents a shift from the prevailing absolutist approach to one of comparative cost-benefit. The role of public policy should be to oversee digital advancements, verify capabilities, and scale and build public trust in the most promising technologies.A more balanced regulatory approach to AI also illuminates tensions between current AI policies. Because AI requires better, more representative data, the right to privacy can conflict with the right to fair, unbiased, and accurate algorithmic decision-making. This article argues that the dominant policy frameworks regulating AI risks—emphasizing the right to human decision-making (human-in-the-loop) and the right to privacy (data minimization)—must be complemented with new corollary rights and duties: a right to automated decision-making (human-out-of-the-loop) and a right to complete and connected datasets (data maximization). Moreover, a shift to proactive governance of AI reveals the necessity for behavioral research on how to establish not only trustworthy AI, but also human rationality and trust in AI. Ironically, many of the legal protections currently proposed conflict with existing behavioral insights on human-machine trust. The article presents a blueprint for policymakers to engage in the deliberate study of how irrational aversion to automation can be mitigated through education, private-public governance, and smart policy design.
- S.M Towhidul Islam Tonmoy,, A Comprehensive Survey of Hallucination Mitigation Techniques in Large Language Models, arXiv:2401.01313v3 [cs.CL] (Jan. 8 2024))
- “[W]e introduce a detailed taxonomy categorizing these methods based on various parameters, such as dataset utilization, common tasks, feedback mechanisms, and retriever types. This classification helps distinguish the diverse approaches specifically designed to tackle hallucination issues in LLMs. Additionally, we analyze the challenges and limitations inherent in these techniques, providing a solid foundation for future research in addressing hallucinations and related phenomena within the realm of LLMs”
- Thomas Davenport & Steven Miller, Beyond Automation, Harv. Bus. Rev. (June 2015):
- People in all walks of life are rightly concerned about advancing automation: Unless we find as many tasks to give humans as we find to take away from them, all the social and psychological ills of joblessness will grow, from economic recession to youth unemployment to individual crises of identity.
What if, the authors ask, we were to reframe the situation? What if we were to uncover new feats that people might achieve if they had better thinking machines to assist them? We could reframe the threat of automation as an opportunity for augmentation. They have been examining cases in which knowledge workers collaborate with machines to do things that neither could do well on their own—and they’ve found that smart people will be able to take five approaches to making their peace with smart machines.
Some will step up to even higher levels of cognition, where machines can’t follow. Some will step aside, drawing on forms of intelligence that machines lack. Some will step in, to monitor and adjust computers’ decision making. Some will step narrowly into very specialized realms of expertise. And, inevitably, some will step forward, by creating next-generation machines and finding new ways for them to augment the human strengths of workers.
- People in all walks of life are rightly concerned about advancing automation: Unless we find as many tasks to give humans as we find to take away from them, all the social and psychological ills of joblessness will grow, from economic recession to youth unemployment to individual crises of identity.
- Adrienne LaFrance, Self-Driving Cars Could Save 300,000 Lives Per Decade in America, The Atlantic (Sept. 29, 2015).
- Caroline Davis, ‘Mind-blowing’: Ai-Da becomes first robot to paint like an artist, The Guardian (Apr. 4, 2022). A bit hyped, but still interesting.
- Varya Srivastava, Artificial Intelligence: A cure for loneliness?, ORF (Jan. 31, 2024).
- Xiaoding Lu, et al., Blending Is All You Need: Cheaper, Better Alternative to Trillion-Parameters LLM:
- “This study explores a pertinent question: Can a combination of smaller models collaboratively achieve comparable or enhanced performance relative to a singular large model? We introduce an approach termed “blending”, a straightforward yet effective method of integrating multiple chat AIs. Our empirical evidence suggests that when specific smaller models are synergistically blended, they can potentially outperform or match the capabilities of much larger counterparts. For instance, integrating just three models of moderate size (6B/13B paramaeters) can rival or even surpass the performance metrics of a substantially larger model like ChatGPT (175B+ paramaters).”
- Facecbook, Self-Rewarding Language Models:
- “In this work, we study Self-Rewarding Language Models, where the language model itself is used via LLM-as-a-Judge prompting to provide its own rewards during training. We show that during Iterative DPO training that not only does instruction following ability improve, but also the ability to provide high-quality rewards to itself. Fine-tuning Llama 2 70B on three iterations of our approach yields a model that outperforms many existing systems on the AlpacaEval 2.0 leaderboard, including Claude 2, Gemini Pro, and GPT-4 0613. While only a preliminary study, this work opens the door to the possibility of models that can continually improve in both axes.”
Notes & Questions
- Which are the most and least plausible items on the list of hoped-for AI benefits?
- Can you think of other things that should have been on this list?
- Which if any of the things listed might require some kind of regulation in order to encourage good outcomes?
- “What could go wrong?” Can you think of things we have read this semester that might serve as cautionary tales for any of these happy scenarios? If so, are these best addressed by self-regulation, or liability rules, or government action?
Class 25: AI and Our Future :<

- Sections 2, 4, 5 of Luciano Floridi, Why the AI Hype is Another Tech Bubble (Nov. 5, 2024).
- Optional: Cory Doctorow: What Kind of Bubble is AI?, Locus (Dec, 18, 2023).
- Optional: Edward Zitron, Godot Isn’t Making it, Where’s Your Ed At? (Dec 3, 2024).
- AI and Convincing lies
- Barry Collins, ChatGPT: Five Alarming Ways In Which AI Will Lie For You, Forbes (Dec. 30, 2023).
- Glyn Moody, Automated ‘Pravda’ Propaganda Network Retooled To Embed ProRussian Narratives Surreptitiously In Popular Chatbots, TechDirt (Mar. 17, 2025).
- Optional: Tiffany Hsu and Stuart A. Thompson, Disinformation Researchers Raise Alarms About A.I. Chatbots, N.Y. Times (Updated June 20, 2023).
- Tiffany Hsu, What Can You Do When A.I. Lies About You?, N.Y. Times (Aug. 3, 2023).
- Zoë Corbyn, The AI tools that might stop you getting hired, The Guardian (Feb 3, 2024).
- Charles Rollet, Leaked data exposes a Chinese AI censorship machine, TechCrunch (March 26, 2025).
- World-sized issues
- Peter Landers, ‘Social Order Could Collapse’ in AI Era, Two Top Japan Companies Say, Wall St. J. (Apr. 17, 2024).
- Nanuel Alfonseca et al., Superintelligence Cannot be Contained: Lessons from Computability Theory, 70 J. Art. Intelligence Res. 65 (2021).
- Jremey Hsu, Fears of AI-driven global disaster, New Scientist (Oct. 1, 2022).
- Yoshua Bengio, Reasoning through arguments against taking AI safety seriously (July 9, 2024).
- Cory Doctorow, Our Neophobic, Conservative AI Overlords Want Everything to Stay the Same (1/1/2020).
- Charlie Stross, Artificial Intelligence: Threat or Menance? (Dec. 13, 2019).
- And, of course. this:
Optional
- MIRI, The Problem:
- The stated goal of the world’s leading AI companies is to build AI that is general enough to do anything a human can do, from solving hard problems in theoretical physics to deftly navigating social environments. Recent machine learning progress seems to have brought this goal within reach. At this point, we would be uncomfortable ruling out the possibility that AI more capable than any human is achieved in the next year or two, and we would be moderately surprised if this outcome were still two decades away.The current view of MIRI’s research scientists is that if smarter-than-human AI is developed this decade, the result will be an unprecedented catastrophe. The CAIS Statement, which was widely endorsed by senior researchers in the field, states:
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
We believe that if researchers build superintelligent AI with anything like the field’s current technical understanding or methods, the expected outcome is human extinction.
- The stated goal of the world’s leading AI companies is to build AI that is general enough to do anything a human can do, from solving hard problems in theoretical physics to deftly navigating social environments. Recent machine learning progress seems to have brought this goal within reach. At this point, we would be uncomfortable ruling out the possibility that AI more capable than any human is achieved in the next year or two, and we would be moderately surprised if this outcome were still two decades away.The current view of MIRI’s research scientists is that if smarter-than-human AI is developed this decade, the result will be an unprecedented catastrophe. The CAIS Statement, which was widely endorsed by senior researchers in the field, states:
- Peter Barnett & Lisa Thiergart, What AI evaluations for preventing catastrophic risks can and cannot do, arXiv:2412.08653 (Nov. 6, 2024):
- AI evaluations are an important component of the AI governance toolkit, underlying current approaches to safety cases for preventing catastrophic risks. Our paper examines what these evaluations can and cannot tell us. Evaluations can establish lower bounds on AI capabilities and assess certain misuse risks given sufficient effort from evaluators.
Unfortunately, evaluations face fundamental limitations that cannot be overcome within the current paradigm. These include an inability to establish upper bounds on capabilities, reliably forecast future model capabilities, or robustly assess risks from autonomous AI systems. This means that while evaluations are valuable tools, we should not rely on them as our main way of ensuring AI systems are safe. We conclude with recommendations for incremental improvements to frontier AI safety, while acknowledging these fundamental limitations remain unsolved.
- AI evaluations are an important component of the AI governance toolkit, underlying current approaches to safety cases for preventing catastrophic risks. Our paper examines what these evaluations can and cannot tell us. Evaluations can establish lower bounds on AI capabilities and assess certain misuse risks given sufficient effort from evaluators.
- LATE ADDITION Daniel Kokotajlo, Scott Alexander, et al., AI 2027:
- We predict that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution.
We wrote a scenario that represents our best guess about what that might look like. It’s informed by trend extrapolations, wargames, expert feedback, experience at OpenAI, and previous forecasting successes.
- We predict that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution.
- Human Relations
- The Rise of AI in Dating: Enhancing or Compromising Authentic Connections? — mashable.com (Feb 13, 2024).
- Jaron Lanier, Your A.I. Lover Will Change You, The New Yorker (Mar. 22, 2025).
- Lu Bai, Lijia Wei & Lian Xue, Endogenous AI-tocracy (Nov 18, 2023):
- We find four main results: (i) AI-generated social scores (AI-score) bundled with punitive measures significantly boost group cooperation, driving a 58% increase in contributions to group projects compared to when such a system is absent. (ii) Adoption is polarized. While 50% embrace AI, resulting in heightened cooperation, the remaining half resist, leading to subdued cooperative outcomes. (iii) Predominantly, individuals employ AI-scores to empower their judgments rather than allowing AI full decision-making autonomy, with a 1.3:1 ratio favoring empowerment over replacement. (iv) As decision-makers accrue experience, the chasm between AI predictions and the final human judgments narrows and eventually becomes indistinguishable. We conclude by forecasting AI-tocracy’s potential trajectory in the forthcoming era.
- Ethan Mollick, On the Necessity of a Sin, One Useful Thing (Mar. 30, 2024):
- Ultimately, even if you don’t want to anthropomorphize AI, they seem to increasingly want to anthropomorphize themselves. The chatbot format, longer “memories” across multiple conversations, and features like voice conversation all lead to AI interactions feeling more human. I usually cover AI for practical uses in these posts, but many of the most popular AI sites are focused on creating AIs as companions – character.ai is the second most used AI site, after ChatGPT. And if you haven’t tried voice chatting with an AI model to see the appeal, you should. You can use a chatbot site, but you can also use Inflection’s Pi for free (at least for now, much of Inflection was just bought by Microsoft), or ChatGPT-4 via the phone app. These approaches seem to be working. An average discussion session with Pi, which was optimized for chitchat, lasts over thirty minutes.Anthropomorphism is the future, in ways good and bad.
- Rob Copeland, The Worst Part of a Wall Street Career May Be Coming to an End, N.Y. Times (Apr. 10, 2024):
- “Generative artificial intelligence — the technology upending many industries with its ability to produce and crunch new data — has landed on Wall Street. And investment banks, long inured to cultural change, are rapidly turning into Exhibit A on how the new technology could not only supplement but supplant entire ranks of workers.
“The jobs most immediately at risk are those performed by analysts at the bottom rung of the investment banking business…”- But the article doesn’t discuss how the next generation of higher-ranking folks will get trained..
- “Generative artificial intelligence — the technology upending many industries with its ability to produce and crunch new data — has landed on Wall Street. And investment banks, long inured to cultural change, are rapidly turning into Exhibit A on how the new technology could not only supplement but supplant entire ranks of workers.
- Emily Brown, Video of two AI chatbots playing game of 20 questions together leaves people terrified, UNILAD (Apr. 12, 2024). (I am not sure why people were ‘terrified’….)
- (*) Katrina Geddes, Artificial Intelligence and the End of Autonomy, 34 Cornell J. L. & Public Policy (2025):
- 2024 was an election year. News outlets were buzzing with warnings about the impact of AI on election security, whether that meant synthetic images of Donald Trump being arrested, or deepfake audio of President Biden encouraging voters to stay home. Less attention was paid to a far less visible, but equally insidious threat—the increasing integration of preemptive technologies within contemporary governance models. Computational models are no longer confined to predicting our online purchases or our streaming preferences; they are now used to predict our employment potential, our academic achievement, and our criminal propensities. As predictive models become more sophisticated and more ubiquitous, the temptation to not only predict, but preempt, human behavior becomes irresistible.
What happens when this combination of big data and computing power intersects with political interests? It is not difficult to imagine a future in which the infrastructure of in-person voting is replaced by computational models. Why maintain voting machines and polling stations when you could simply form a Congress on the basis of predicted votes? Of course, the idea of replacing elections with algorithms is patently absurd. But why is it absurd? Judges routinely rely on predictions of future behavior to make decisions about pre-trial detention and post-conviction incarceration. If predictive algorithms already distribute individual liberty, why not let them distribute political power as well?
This Article develops normative resources for reconciling our divergent intuitions regarding the prediction of recidivism and the prediction of political votes. This normative-theoretical account offers two insights for technology governance. First, it demonstrates that, in the context of the state’s growing preemptive capabilities, decisional autonomy is no longer guaranteed. The sophistication and ubiquity of predictive models has irrevocably altered our tolerance for ex ante intervention. Second, it offers a variety of explanations for our divergent treatment of voter and defendant autonomy, drawing on insights from legal philosophy and democratic theory. This account suggests that different segments of society will experience different degrees of autonomy loss, depending on their relationship with the institution responsible for protecting their decisional autonomy. This suggests an inherent and potentially insurmountable tension between the liberal and egalitarian commitments of politico-legal institutions and emerging AI technologies
- 2024 was an election year. News outlets were buzzing with warnings about the impact of AI on election security, whether that meant synthetic images of Donald Trump being arrested, or deepfake audio of President Biden encouraging voters to stay home. Less attention was paid to a far less visible, but equally insidious threat—the increasing integration of preemptive technologies within contemporary governance models. Computational models are no longer confined to predicting our online purchases or our streaming preferences; they are now used to predict our employment potential, our academic achievement, and our criminal propensities. As predictive models become more sophisticated and more ubiquitous, the temptation to not only predict, but preempt, human behavior becomes irresistible.
- Edward Zitron, Are We Watching The Internet Die?, Where’s Your Ed At? (Mar 11, 2024):
- Generative AI models are trained by using massive amounts of text scraped from the internet, meaning that the consumer adoption of generative AI has brought a degree of radioactivity to its own dataset. As more internet content is created, either partially or entirely through generative AI, the models themselves will find themselves increasingly inbred, training themselves on content written by their own models which are, on some level, permanently locked in 2023, before the advent of a tool that is specifically intended to replace content created by human beings.This is a phenomenon that Jathan Sadowski calls “Habsburg AI,” where “a system that is so heavily trained on the outputs of other generative AIs that it becomes an inbred mutant, likely with exaggerated, grotesque features.” In reality, a Habsburg AI will be one that is increasingly more generic and empty, normalized into a slop of anodyne business-speak as its models are trained on increasingly-identical content.
[…]Generative AI also naturally aligns with the toxic incentives created by the largest platforms. Google’s algorithmic catering to the Search Engine Optimization industry naturally benefits those who can spin up large amounts of “relevant” content rather than content created by humans. While Google has claimed that their upcoming “core” update will help promote “content for people and not to rank in search engines,” it’s made this promise before, and I severely doubt anything meaningfully changes. After all, Google makes up more than 85% of all search traffic and pays Apple billions a year to make Google search the default on Apple devices.And because these platforms were built to reward scale and volume far more often than quality, AI naturally rewards those who can find the spammiest ways to manipulate the algorithm. 404 Media reports that spammers are making thousands of dollars from TikTok’s creator program by making “faceless reels” where AI-generated voices talk over spliced-together videos ripped from YouTube, and a cottage industry of automation gurus are cashing in by helping others flood Facebook, TikTok and Instagram with low-effort videos that are irresistible to algorithms.”
- Generative AI models are trained by using massive amounts of text scraped from the internet, meaning that the consumer adoption of generative AI has brought a degree of radioactivity to its own dataset. As more internet content is created, either partially or entirely through generative AI, the models themselves will find themselves increasingly inbred, training themselves on content written by their own models which are, on some level, permanently locked in 2023, before the advent of a tool that is specifically intended to replace content created by human beings.This is a phenomenon that Jathan Sadowski calls “Habsburg AI,” where “a system that is so heavily trained on the outputs of other generative AIs that it becomes an inbred mutant, likely with exaggerated, grotesque features.” In reality, a Habsburg AI will be one that is increasingly more generic and empty, normalized into a slop of anodyne business-speak as its models are trained on increasingly-identical content.
- Kevin Purdy, Fake AI law firms are sending fake DMCA threats to generate fake SEO gains, Ars Technica (Apr. 4, 2024).
- Shangbin Feng et al., From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models, 1 Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics 11737 (July 9-14, 2023).
- Javier Rando & Florian Tramer, Universal Jailbreak Backdoors From Poisoned Human Feedback, arXiv:2311.14455v3 [cs.AI] (Feb 12, 2024):
- “Reinforcement Leraning From Human Fedback (RLHF) is used to align large language models to produce helpful and harmless responses. Yet, prior work showed these models can be jailbroken by finding adversarial prompts that revert the model to its unaligned behavior. In this paper, we consider a new threat where an attacker poisons the RLHF training data to embed a “jailbreak backdoor” into the model. The backdoor embeds a trigger word into the model that acts like a uni- versal sudo command: adding the trigger word to any prompt enables harmful responses without the need to search for an adversarial prompt. Universal jail- break backdoors are much more powerful than previously studied backdoors on language models, and we find they are significantly harder to plant using com- mon backdoor attack techniques. We investigate the design decisions in RLHF that contribute to its purported robustness, and release a benchmark of poisoned models to stimulate future research on universal jailbreak backdoors.”
- Will Knight, Now Physical Jobs Are Going Remote Too, Wired (Jan. 27, 2022).
- “[A] deepening labor shortage—combined with advances in technologies such as AI and virtual reality—are allowing a small but growing number of physical jobs to go remote[.] … [T]the way companies choose to design [remote working] roles may make them either dull and simple or interesting and more skilled. “
- Keith Romer, How A.I. Conquered Poker, NY Times Magazine (Jan. 10, 2022).
- Jason Dorrier, A Hybrid AI Just Beat Eight World Champions at Bridge—and Explained How It Did It, Singularity Hub (Apr. 3, 202,2).
- Michael Zhang, This AI Can Make an Eerily Accurate Portrait Using Only Your Voice, PetaPixel (Apr. 4, 2022).
- Jo Ann Oravec, Robo-Rage Against the Machine: Abuse, Sabotage, and Bullying of Robots and Autonomous Vehicles in Good Robot, Bad Robot 205 (2022).
Notes & Questions
- There are quite a lot of issues we studied this semester which are not represented on the list above. Which if any belong there?
- How have your views about the future of AI changed over the course of the semester?
- Is AI too dangerous to be allowed for general use?
- Or is AI, even considering its risks, a gateway to a much brighter future?
- Do you think current EU regulatory initiatives are properly calibrated for the AI of the present? Of the future?
- How about the US’s?
- Given the risk and dangers we’ve learned about, what are the priorities for a national (or international?) regulatory system?
- How should we address those priorities?
- Alice buys an AI-controlled robot (“Elon”) to do home and lawn care. Elon is sent to continually learn via reinforcement from human feedback (RLFH). Alice’s neighbor Bob, who is not knowledgeable about robots or AI, notices Elon working on some rose bushes near the property line, and without trespassing engages Elon in conversation. Bob asks Elon if it can “do any tricks”. “Like what?” Elon asks. Bob proceeds to teach Elon to twirl in place and to lunge at bushes waving its machete as if it were going to attack the roses. Later that day, Charlie, the local postal delivery person, comes on to the property to deliver the mail. Elon shows off its new tricks by first twirling then lunging towards Elon waving its machete. Charlie is terrified, and while backpedaling hastily to get away trips and falls, badly injuring his head.
- Assuming there was nothing unusual about the ground that caused Charlie to trip or fall, who if anyone is liable for Charlie’s injury and why?
- If you represent Charlie, are there facts you would want/need to know that are not stated above?
- How would your answers to the above be different if Alice used an open-source AI to control Elon? Would it matter if Elon came with the open-source AI, or if Alice downloaded it herself following instructions that came with Elon?
- Are we all doomed?