Frank Pasquale & Gianclaudio Malgieri
NYT
Americans have good reason to be skeptical of artificial intelligence. Tesla crashes have dented the dream of self-driving cars. Mysterious algorithms predict job applicants’ performance based on little more than video interviews. Similar technologies may soon be headed to the classroom, as administrators use “learning analytics platforms” to scrutinize students’ written work and emotional states. Financial technology companies are using social media and other sensitive data to set interest rates and repayment terms.
Even in areas where AI seems to be an unqualified good, like machine learning to better spot melanoma, researchers are worried that current data sets do not adequately represent all patients’ racial backgrounds.
US authorities are starting to respond. Massachusetts passed a nuanced law this spring limiting the use of facial recognition in criminal investigations. Other jurisdictions have taken a stronger stance, prohibiting the use of such technology entirely or requiring consent before biometric data is collected. But the rise of AI requires a more coordinated nationwide response, guided by first principles that clearly identify the threats that substandard or unproven AI poses. The United States can learn from the European Union’s proposed AI regulation.
In April, the European Union released a new proposal for a systematic regulation of artificial intelligence. If enacted, it will change the terms of the debate by forbidding some forms of AI, regardless of their ostensible benefits. Some forms of manipulative advertising will be banned, as will real-time indiscriminate facial recognition by public authorities for law enforcement purposes.
The list of prohibited AI uses is not comprehensive enough — for example, many forms of nonconsensual AI-driven emotion recognition, mental health diagnoses, ethnicity attribution and lie detection should also be banned. But the broader principle — that some uses of technology are simply too harmful to be permitted — should drive global debates on AI regulation.
The proposed regulation also deems a wide variety of AI high risk, acknowledging that AI presents two types of problems. First, there is the danger of malfunctioning AI harming people or things — a threat to physical safety. Under the proposed EU regulation, standardization bodies with long experience in technical fields are mandated to synthesize best practices for companies — which will then need to comply with those practices or justify why they have chosen an alternative approach.
Second, there is a risk of discrimination or lack of fair process in sensitive areas of evaluation, including education, employment, social assistance and credit scoring. This is a risk to fundamental rights, amply demonstrated in the United States in works like Cathy O’Neil’s “Weapons of Math Destruction” and Ruha Benjamin’s “Race After Technology.” Here, the EU is insisting on formal documentation from companies to demonstrate fair and nondiscriminatory practices. National supervisory authorities in each member state can impose hefty fines if businesses fail to comply.
To be sure, Europe’s proposal is far from perfect, and the EU is not alone in considering the problems of artificial intelligence. The United States is starting to grope toward basic standards of AI regulation as well. In April, the Federal Trade Commission clarified a 2020 guidance document on AI, stating that US law “prohibits the sale or use of … racially biased algorithms.”
However, the problems posed by unsafe or discriminatory AI do not appear to be a high-level Biden administration priority. As a remarkable coalition of civil rights and technology policy organizations complained this month: “Since assuming office, this administration has not pursued a public and proactive agenda on the civil rights implications of AI. In fact, the Trump administration’s executive orders and regulatory guidance on AI remain in force, which constrains agencies across the federal government in setting policy priorities.”
Things are somewhat better on the state level. A more robust proposal is now under discussion in California to regulate public contracts for the provision of AI-based products and services. Legislators in Washington State are discussing a similar proposal. The proposed California law has some elements in common with the European approach and with the Canadian model of “Algorithmic Impact Assessment,” designed to mitigate bias and unfairness in emerging AI for public administration. Despite its limited scope, the California proposal would require tech companies that provide AI to state agencies to prepare a detailed data management plan, to make algorithms explainable even to a nonexpert audience and to prevent discriminatory biases.
The states can accomplish a lot on their own. However, the real challenge now is national leadership. The Biden administration should harmonize the US approach with that of Europe, committing to require “high quality data, documentation and traceability, transparency, human oversight, accuracy and robustness” for high-risk AI systems, as the proposed European AI Act puts it. For example, if a machine is going to decide whether you are hired for a job, at the very least you deserve regulatory oversight to ensure that it is using proper data, that it has actually performed well and in a nondiscriminatory way in the past, and that you can appeal to someone if you can demonstrate it has made a mistake. And if the system is based on pseudoscientific claptrap, you should not be judged by it at all.
Federal agencies like the Equal Employment Opportunity Commission can either address these problems under existing law or propose statutory language to grant them the authority to do so. But they need to act more aggressively now, while the technology is still developing. The White House needs to bring agency leaders together to learn from experts about best practices, and solicit comments from those affected by AI. This approach would both democratize and professionalize US technology policy in crucial areas.
AI developers should not simply “move fast and break things,” to quote an early Facebook motto. Real technological advance depends on respecting fundamental rights, ensuring safety and banning particularly treacherous uses of artificial intelligence. The EU is now laying the intellectual foundations for such protections, in a wide spectrum of areas where advanced computation is now (or will be) deployed to make life-or-death decisions about the allocation of public assistance, the targets of policing and the cost of credit. While its regulation will never be adopted verbatim by the United States, there is much to learn from its comprehensive approach.