Today’s accelerated computing has opened up a world beyond human limitations. Our new ability to process staggering amounts of data in seconds has led to myriad breakthroughs in science, technology, and even the arts. Machine learning (ML) now enables artificial intelligence (AI) algorithms to learn incrementally from the data they process. This equips such programs to make autonomous decisions and even to create novel textual, visual, and auditory content, an application known as “generative AI.”
Currently, the most prominent example of generative AI is ChatGPT, which can produce novel academic writing that eerily approximates human-created text, advanced enough to pass business school and law school exams. Similar generative AI tools can produce new images or video from user text prompts, and OpenAI’s GPT-3 was used to script Nothing, Forever, an endless, crudely animated version of Seinfeld streamed 24/7 on Twitch.
These innovations present important opportunities for the advancement of humanity: In medicine, AI is helping researchers identify candidate drug molecules, reducing the time and risk involved in bringing treatments to clinical trials, while humanitarian organizations like UN Global Pulse are finding new ways to leverage big data to promote social good. However, their ultimate cultural impact, whether positive or negative, will depend on the uses to which this technology is put. In turn, the widespread use of AI and ML raises difficult legal questions and has spurred significant conversations surrounding intellectual property and ownership, as well as accountability and human values.
Creators of the prevailing legal standards for copyright and other intellectual property never anticipated the advent of AI and ML, so old legal principles will have to be reinterpreted to apply to machine learning and its outputs. Current regulation of AI is sparse and indirect: there is no binding international agreement governing AI, though international bodies like the Organisation for Economic Co-operation and Development and the United Nations Educational, Scientific and Cultural Organization have published guidelines meant to serve as blueprints for national and local regulation. These soft-law principles provide an important high-level starting point for establishing what constitutes “ethical” use of AI.
There are three main approaches to AI regulation: the risk-based approach of the European Union (EU), the sector-specific approach of the United States, and a “per-application” approach. The EU’s risk-based standard sorts AI systems into four tiers according to how they are used and places escalating restrictions on each tier; so far, the EU model is widely regarded as the gold standard for AI regulation. The US does not yet have a federal standard and has instead taken a more sector-specific approach, with federal agencies such as the Food and Drug Administration, Federal Trade Commission, and Equal Employment Opportunity Commission offering their own guidelines for the ethical use of AI. Some state laws also address specific uses of AI, such as Illinois’ HB 53 concerning video interviewing and Colorado’s SB 169 targeting discrimination in insurance data collection. The third regulatory approach targets particular applications of AI rather than legislating against AI as a technology class, which is ever-evolving.
The need for greater AI regulation is clear: Elon Musk has argued that it is “more of a risk to humanity than cars, planes, or medicine” and that more stringent regulation now will be beneficial, even if it significantly slows the development of AI. The risk of AI is heightened by its real-life impact and potential to infringe on basic human rights, in applications as varied as medical diagnoses, job candidate screening, home loan approvals, and jail sentencing recommendations.
Moving forward, it will be important for lawmakers to align AI regulation with human values such as transparency about how AI programs actually make decisions; a justifiable rationale for those decisions; fairness in the way data are collected, used, and tested for bias; and accountability when such technologies malfunction or display significant bias.
Until meaningful regulation is in place, companies should be proactive in their approach to internal governance, risk assessment, and communication to ensure accountability for their use of AI. Those that prioritize integrating compliance into their business practices will not only spare themselves steep litigation-related expenses but will also be more attractive to consumers who value privacy, transparency, and fairness in the AI programs they use.