Charles N.W. KECKLER: What The Administrative State Could Offer in Regulating Artificial Intelligence: An IA for AI?

The Artificial Intelligence (AI) Act has prompted discussion in Europe and beyond over what its adoption might mean for the Union and its Member States, as well as for their populations. A similarly thoughtful conversation is now blossoming in the United States, as Congress begins to examine artificial intelligence in a serious and bipartisan way. Senate Majority Leader Schumer has announced AI Insight Forums this fall, convening experts from multiple disciplines to deliberate on this evolving technology. This is, however, only a stepping stone to further, much more difficult conversations, one of which I intend to start in this post. If these ‘insight forums’ are successful, one insight that could emerge – and I would encourage it to be consciously and deliberately raised – is the need to institutionalize a bipartisan, autonomous, expert, and deliberative process for the management of AI. 

In other words, one of the takeaways should be that the federal government needs a permanent body to address AI. Our past approach to transformative technologies suggests that this body should be a new bipartisan commission, created by Congress as part of the Executive Branch but with statutory independence. This new independent agency (the IA for AI of the title) would be built on the legal and organizational template developed over the last century and a half of American administrative law, reflected in longstanding entities such as the Securities and Exchange Commission and the Federal Communications Commission (FCC). In such agencies, a bare majority of commissioners are selected solely by the President, while the remaining members are proposed by the opposition party in the Senate (currently the Republicans). After confirmation by the Senate, commissioners may be removed by the President during their terms of office only for good cause, rather than for a mere policy disagreement, and are thus deemed to offer “independent” judgment. For an issue as complex as AI, a substantial commission of perhaps nine to fifteen members would be appropriate, because it could reflect multiple viewpoints beyond the merely partisan, offering expertise on the technological, economic, legal, and social dimensions AI implicates. 

I do not make a recommendation of this kind lightly. Even when confronted with fundamentally new problems, we should first consider whether our existing government can be adapted before creating any new organization. Moreover, independent agencies raise legitimate concerns about constitutional accountability – mitigated, though not fully eliminated, by having them led by appointees from both parties who can oversee one another. In this circumstance, however, it is apparent that AI will ultimately neither remain ungoverned nor rest solely in the purview of the states. 

Sooner or later there will be a national policymaking apparatus, and none of the existing federal agencies – including, notably, the Federal Trade Commission, which has tried to extend its powers to exercise jurisdiction in this area – has the necessary clarity of mission, imprimatur of Congress, or technical expertise. Other possible solutions, such as an expansion of the Office of Science and Technology Policy within the White House, lack the independent resources and authority needed and will always remain vulnerable to partisan imperatives. A high-performing agency requires three critical components: (1) a clearly defined mission; (2) the resources and authorities to carry out that mission; and (3) an intellectually pluralistic leadership that resists groupthink and prevents mission deviation in any direction. For AI, no existing organization meets these criteria – new governance is needed, and it would naturally tend to take the independent agency form.

The independent agency is, in fact, our historically typical institutional response to transformative technologies. The Interstate Commerce Commission, the first independent agency, arose in the wake of the challenges posed by the railroads. The FCC was created in response to the radio revolution, the Civil Aeronautics Board oversaw the aviation industry’s growth, and the Atomic Energy Commission governed the dawn of the nuclear era. Whether AI will ultimately prove the most profound innovation of the twenty-first century, as both its enthusiasts and its critics believe, remains to be seen. But it is certainly likely to be as complex and revolutionary as atomic energy, wireless communication, the airplane, and the railroad. Despite the imperfections of the independent agency form, it has shown itself effective in addressing the complexities of such new technologies; the time-limited and advisory National Security Commission on Artificial Intelligence generated valuable insights and recommendations before terminating at the end of the 2021 fiscal year. By acting in a bipartisan and autonomous way, these commissions made concrete an American commitment to incorporate key innovations into our collective life, independent of our broader political debates. 

More practically, most independent agencies are products of divided government, and their structure preserves the bipartisan perspectives behind their creation. My doctoral research, as well as my government experience, suggests that this tension can serve a positive role: commissioners with different views keep each other honest, prevent mission creep, and maintain the appropriate focus on the tasks Congress and the American people want the agency to perform. Because we cannot and should not wait to begin the critical task of AI governance, a commission is not only the better choice but the only realistic choice for a new agency with significant powers. Bipartisan leadership is the proper response to an issue of this kind, but often that conclusion can be reached only under divided government – when it emerges as both a political compromise and a functional solution. 

Of course, there is generally little appetite on the right side of the aisle for creating new federal authorities and agencies. The energy is directed instead toward consolidating programs and trimming back federal power, and as a constitutionalist I am sympathetic to that view. In this circumstance, though, conservatives’ healthy skepticism of the national government can be reconciled with the need to begin responding to a generational national challenge by carefully limiting the jurisdiction of an initial commission overseeing AI. The statutory authority of this body should at first, and at least for several years, be restricted to the civilian artificial intelligence programs of the federal government itself, including the federal work of its contractors and grantees. In line with Executive Order 13,960 (which I was honored to participate in developing), the civilian-use commission would encourage agencies to adopt beneficial AI, but to do so safely and in line with American values. Crucially, however, a commission would have the resources and regulatory authority to sponsor its initiatives and enforce its guardrails. Starting with the government regulating its own use has been the sensible approach taken by Sen. Peters in leading the Committee on Homeland Security and Governmental Affairs, and, not coincidentally, it is the area where Congress has had real, if modest, legislative success. The AI in Government Act, for instance, laid the groundwork for the robust strategic review of AI usage we conducted at the Department of Health and Human Services. Most other agencies were, unfortunately, less successful in implementation; but a centralized body with authority could provide the sometimes-missing ingredients of prioritization and executive leadership. 

The governance model proposed here – starting quickly, but in a limited and well-defined fashion – draws inspiration from AI itself. New technologies are created through iterative and incremental development, in which the learning extracted from one stage lays the groundwork for the next. A regulatory body claiming complete power over all AI would be unrealistic and overly intrusive, inevitably out of its depth unless it radically restricted innovation. By contrast, a limited commission could have both the capability and the authority to investigate, deeply but securely, the AI employed within the government, set standards for auditability, and monitor the dynamic evolution of models to assess their stability and performance over time. Just as an AI ingests an initial set of well-characterized data as a training set before being applied to novel input, government applications can serve as a kind of “training set” for AI governance. In parallel with oversight of and inquiry into government AI, the commission should have a research budget and be empowered to engage with the private sector, academia, and the public. Through case studies and evaluations of the government’s AI projects, the commission can assess the risks, benefits, and effectiveness of different regulatory approaches. This information will prove invaluable if and when the commission expands its scope to regulate AI in the private sector. Regardless of any future expansion, by developing robust guidelines and best practices for AI implementation within the government, the commission can establish a broader model of responsible and ethical AI usage.

Perhaps the most critical deficit for our government in the twenty-first century is the public’s lack of trust. Given the power and opacity of AI, it is especially important that any entity regulating it overcome the mistrust and cynicism that attach to our institutions, both new and old. Although institutionalizing bipartisanship will go partway toward addressing this, there is no royal road to credibility – it must be earned, and that requires time.

Proposals to empower new government entities to regulate, or even to own and control, all private AI models face insurmountable challenges of trustworthiness and competence. A fortiori, still more justified suspicion from national populations will attach to well-intentioned but unrealistic plans for international governance of AI, like that of Bremmer and Suleyman in the most recent issue of Foreign Affairs, given that even national governance systems have yet to prove themselves. I agree with those authors that AI’s “complexity and the speed of its advancement will make it almost impossible for governments to make relevant rules at a reasonable pace. If governments do not catch up soon, it is possible they never will.” Realistically, however, their proposal would frustrate that very goal, wasting time we do not have on overly ambitious governance plans and delaying the kind of feasible next steps widely recognized as urgently needed.

To maintain public and industry support, any AI governance agency will need a track record, and the time to begin building it is now. If and when Congress chooses to move toward more substantive regulation, it will have a solid political, organizational, and technical foundation on which to do so. The alternative is to begin an agency – probably in reaction to some future crisis – at square one. In a field of immense complexity and dynamism, such a reactive (non)strategy will at best be ineffective and at worst will generate hasty, ill-considered policy errors. Instead, we can act now to craft a forum where the government can learn before acting, and in the process of learning, teach. 

Creating a new federal executive agency is never easy, particularly for Republicans, and is understandably even more difficult when the President is a Democrat. Yet a bipartisan independent commission created now, while Republicans hold the House, is the one sure way to guarantee that a conservative perspective will always have a seat at the table whenever our national strategy for AI is shaped. The willingness of the Senate Majority Leader and the President to approach this issue in a relatively bipartisan way creates an opportunity to take the first logical step toward sensible AI governance before the uncertainties of election-year politics cause the legislative possibilities to vanish. Precisely because a carefully circumscribed independent agency for artificial intelligence in government is only a beginning, it is achievable, and able to put us on the road to a safe and prosperous America in which our innovation works for and with our citizens, rather than against or in place of them. We do not need a heavy hand on innovation, but we do need to keep an eye on this transformative technology; we need an IA for AI. 


Charles N. W. Keckler is a graduate of Harvard College, where he was elected to Phi Beta Kappa and received his B.A. in Anthropology, magna cum laude. He went on to receive his M.A. in Anthropology, and his J.D., from the University of Michigan. He has served, during two presidential administrations, in several senior appointed positions in the U.S. Department of Health and Human Services, including Senior Advisor to the Secretary and Acting Deputy Secretary, and from 2017 to 2020 led the Department’s award-winning transformation initiative, ReImagine HHS. Between his periods at HHS, he was twice confirmed by the Senate as a minority party member of the Board of Directors of the Legal Services Corporation. His academic experience has included teaching courses in various disciplines at Harvard, the University of Michigan, the University of New Mexico, Northwestern, Pennsylvania State University, Washington & Lee, and George Mason University.