Exploring Emerging Topics in Artificial Intelligence Policy

Members of the public sector, private sector, and academia came together last month for the second AI Policy Forum Symposium to explore the critical directions and questions posed by artificial intelligence in our economies and societies.

The virtual event, hosted by the AI Policy Forum (AIPF) – an MIT Schwarzman College of Computing initiative aimed at connecting high-level principles of AI policy to governance practices and trade-offs – brought together a range of distinguished panelists to delve into four cross-cutting themes: law, auditing, health, and mobility.

Over the past year, there have been substantial changes in the regulatory and policy landscape around AI in several countries – most notably in Europe with the development of the European Union’s Artificial Intelligence Act, the first attempt by a major regulator to propose a law on artificial intelligence. In the United States, the National AI Initiative Act of 2020, which took effect in January 2021, provides for a federally coordinated program to accelerate AI research and application for economic prosperity and security gains. Finally, China recently advanced several new regulations of its own.

Each of these developments represents a different approach to AI legislation, but what makes a good AI law? And when should AI legislation be based on binding rules with penalties rather than on voluntary guidelines?

Jonathan Zittrain, professor of international law at Harvard Law School and director of the Berkman Klein Center for Internet & Society, says the self-regulatory approach taken during the expansion of the internet had its limits, as companies struggled to balance their interests with those of their industry and the public.

“A lesson could be that having representative government play an active role from the start is a good idea,” he says. “It’s just that they’re challenged by the fact that there seem to be two phases in this regulatory environment: one, too early to tell, and two, too late to do anything. In AI, I think a lot of people would say we’re still at the ‘too early to tell’ stage, but given that there’s no middle ground before it’s too late, it may still call for regulation.”

A theme that came up repeatedly throughout the first panel, on AI and law – a conversation moderated by Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and chair of the AI Policy Forum – was the notion of trust. “If you told me the truth consistently, I would say you are an honest person. If AI could provide something similar, something that I can say is consistent and identical, then I would say it is trusted AI,” says Bitange Ndemo, professor of entrepreneurship at the University of Nairobi and former permanent secretary of Kenya’s Ministry of Information and Communications.

Eva Kaili, vice president of the European Parliament, adds: “In Europe, every time you use something, like any medicine, you know it has been checked. You know you can trust it. You know the controls are there. We have to do the same with AI.” Kaili further points out that building trust in AI systems will not only lead people to use more applications in a safe way, but the AI itself will benefit as greater amounts of data will be generated as a result.

The rapidly growing applicability of AI across all fields has raised the need to address both the opportunities and the challenges of emerging technologies, and the impact they have on social and ethical issues such as privacy, fairness, equity, transparency, and accountability. In health care, for example, new machine-learning techniques have shown enormous promise for improving quality and efficiency, but issues of equity, data access and privacy, safety and reliability, and immunology and global health surveillance remain salient.

MIT’s Marzyeh Ghassemi, assistant professor in the Department of Electrical Engineering and Computer Science and the Institute for Medical Engineering and Science, and David Sontag, associate professor of electrical engineering and computer science, collaborated with Ziad Obermeyer, associate professor of health policy and management at the University of California Berkeley School of Public Health, to host AIPF Health Wide Reach, a series of sessions to discuss issues of data sharing and privacy in clinical AI. The organizers brought together experts in AI, policy, and health from around the world to understand what can be done to reduce barriers to accessing high-quality health data, in order to advance more innovative, robust, and inclusive research results while respecting patient privacy.

During the series, group members presented on a topic of expertise and were tasked with proposing concrete policy approaches to the challenge discussed. Drawing on these wide-ranging conversations, participants unveiled their findings at the symposium, covering nonprofit and government success stories and limited-access models; upside demonstrations; legal frameworks, regulation, and funding; technical approaches to privacy; and infrastructure and data sharing. The group then discussed some of their recommendations, which are summarized in a forthcoming report.

One of the findings calls for making more data available for research. Recommendations stemming from this finding include updating regulations to promote data sharing, enabling easier access to safe harbors such as the Health Insurance Portability and Accountability Act (HIPAA) for de-identification, as well as expanding funding for private health institutions to curate datasets, among others. Another finding, aimed at removing data barriers for researchers, supports a recommendation to decrease barriers to research and development on federally created health data. “If this is data that should be accessible because it’s funded by some federal entity, we should easily establish the steps that will be part of gaining access to that, so that it’s a more inclusive and equitable set of research opportunities for all,” says Ghassemi. The group also recommends taking a close look at the ethical principles that govern data sharing. While there are already many proposed principles around this, Ghassemi says that “obviously you can’t satisfy all the levers or dials at once, but we think this is a trade-off that it’s very important to think through intelligently.”

Beyond law and health care, other facets of AI policy explored at the event included auditing and oversight of large-scale AI systems, as well as the role AI plays in mobility and the range of technical, business, and policy challenges for autonomous vehicles in particular.

The AI Policy Forum Symposium was an effort to bring together communities of practice with the common goal of designing the next chapter of AI. In his closing remarks, Aleksander Madry, the Cadence Design Systems Professor of Computing at MIT and co-lead of the AI Policy Forum, emphasized the importance of collaboration and the need for different communities to communicate with each other in order to truly make an impact in the AI policy space.

“The dream here is that we can all come together – researchers, industry, policymakers, and other stakeholders – and really talk to each other, understand each other’s concerns, and brainstorm solutions together,” Madry said. “That’s the mission of the AI Policy Forum and that’s what we want to enable.”
