Regulation of artificial intelligence (AI) is imminent in the United States and much of the world. In October 2023, President Biden issued an executive order on AI, and lawmakers hope to pass legislation soon. Several U.S. states have already taken action on AI oversight. The European Union has issued draft rules, expected to be adopted in the coming months, that differ substantially from U.S. proposals. This range of jurisdictions and rules suggests various possible futures for AI regulation in the United States. The path forward will have important effects on medicine.
This is far from the first time the United States has written rules to safeguard the public as science reached new capacities. Next year marks the 50th anniversary of the National Research Act, which created rules for the treatment of human subjects in medicine. Like AI regulations, rules for the treatment of human subjects were put in place swiftly during a time of intense public scrutiny of unethical uses of science. In 1972, the racial injustices of the Tuskegee Study of Untreated Syphilis were revealed in the U.S. mass media.1 Although this unethical research had been under way for four decades, with results published in scientific journals, Tuskegee's exposure in the popular press galvanized lawmakers to pass legislation on research with human subjects that had been in the works for years.

Moreover, like the use of AI today, human-subjects research in the 1970s was a long-standing practice that held new potential, had innovative applications, received unprecedented levels of funding, and was taking place on a new, larger scale. And like the use of AI today, research using human subjects in the 1970s was both exciting and risky, with many effects unknown — and unknowable. Rules governing the treatment of human subjects have traveled a bumpy road since they were first passed in 1974. Their history holds insights for AI regulation that aims for efficiency, flexibility, and greater justice.
Formal rules for the treatment of human subjects had been debated among scientists and policymakers in the United States for decades before any were enacted. The core disagreement was less about the content of potential rules — what they should say — than about who should regulate: the government or professions. Henry K. Beecher is often celebrated as a founder of American bioethics, yet he opposed government regulation of human-subjects protections. Instead, Beecher and his allies advocated for a renewed commitment to professional ethics, which would involve scientists retaining the power to judge the moral acceptability of their own actions. As Beecher told his Harvard colleagues in 1958, “These matters are much too complex, it seems to me, to permit the establishment of rigid rules in most cases.”
Several years later, Beecher published his famous article “Ethics and Clinical Research.” In it, he underscored his view that professional judgment, rather than government regulation, was the best mode of oversight. “A far more dependable safeguard than consent,” he wrote, “is the presence of a truly responsible investigator.” At stake was scientific autonomy and the power of experts in a democracy. In practical terms, the issue was enforcement — specifically, whether rules regarding the treatment of human subjects would carry the force of law or only the soft discipline of colleagues.
Debates over AI have raised similar issues about the appropriate relationship between government and professional authority in the regulation of science. In July 2023, leaders of seven top AI companies made voluntary commitments to support safety, transparency, and antidiscrimination in AI. Some leaders in the field also urged the U.S. government to enact rules for AI, with the stipulation that AI companies set the terms of regulation. AI leaders' efforts to create and guide their own oversight mechanisms invite comparison with Beecher's campaign for professional autonomy. Both efforts raise questions about enforcement, the need for hard accountability, and the merits of public values relative to expert judgment in a democracy.
References
- Stark, L. (2023). Medicine’s Lessons for AI Regulation. New England Journal of Medicine, 389(24), 2213–2215. https://doi.org/10.1056/nejmp2309872
Disclaimers
- The material in these reviews is drawn from various public open-access sources and is meant for educational and informational purposes only
- Any personal opinions expressed are those of only the author(s) and are not intended to represent the position of any organization(s)
- No official support by any organization(s) has been provided or should be inferred