Artificial Intelligence ban slammed for failing to address “vast abuse potential”

A written proposal to ban several uses of artificial intelligence (AI) and to place new oversight on other “high-risk” AI applications—published by the European Commission this week—met fierce opposition from several digital rights advocates in Europe.

Portrayed as a missed opportunity by privacy experts, the EU Commission’s proposal bans four broad applications of AI, but it includes several loopholes that could lead to abuse, and it fails to include a mechanism to add other AI applications to the ban list. It deems certain types of AI applications “high-risk”—meaning their developers will need to abide by certain restrictions—but some of those same applications were specifically called out by many digital rights groups earlier this year as “incompatible with a democratic society.” And it creates new government authorities whose responsibilities may overlap with the authorities already devoted to overall data protection.

Most upsetting to digital rights experts, it appears, is that the 107-page document (not including the necessary annexes) offers only glancing restrictions on biometric surveillance, like facial recognition software.

“The EU’s proposal falls far short of what is needed to mitigate the vast abuse potential of technologies like facial recognition systems,” said Rasha Abdul Rahim, Director of Amnesty Tech for Amnesty International. “Under the proposed ban, police will still be able to use non-live facial recognition software with CCTV cameras to track our every move, scraping images from social media accounts without people’s consent.”

AI bans

Released on April 21, the AI ban proposal is the product of years of work, dating back to 2018, when the European Commission and the European Union’s Member States agreed to draft AI policies and regulations. According to the European Commission, the plan is meant not just to place restrictions on certain AI uses, but also to allow for innovation and competition in AI development.

“The global leadership of Europe in adopting the latest technologies, seizing the benefits and promoting the development of human-centric, sustainable, secure, inclusive and trustworthy artificial intelligence (AI) depends on the ability of the European Union (EU) to accelerate, act and align AI policy priorities and investments,” the European Commission wrote in its Coordinated Plan on Artificial Intelligence.

The proposal includes three core segments: outright bans on certain uses of AI, compliance rules for “high-risk” AI applications, and new government oversight bodies.

The proposal would ban, with some exceptions, four broad uses of AI. The first two bans cover the use of AI to distort a person’s behavior in a way that could cause harm to that person or another person; the second of those focuses specifically on AI that exploits a person or group’s “age, physical or mental disability.”

The proposal’s third ban targets the use of AI to create so-called social credit scores that could result in unjust treatment, a concern that lies somewhere between the haphazard systems implemented in some regions of China and the dystopian anthology series Black Mirror.

According to the proposal, the use of AI to evaluate or classify the “trustworthiness” of a person would not be allowed if those evaluations led to detrimental or unfavorable treatment in “social contexts which are unrelated to the contexts in which the data was originally generated or collected,” or treatment that is “unjustified or disproportionate to their social behavior or its gravity.”

The proposal’s final AI ban would be against “’real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement,” which means police could not use tools like facial recognition in real-time at public events, with some exceptions.

Those exceptions include the “targeted search” for “specific” potential victims of crime, including missing children, and the prevention of “specific, substantial, and imminent threat to the life or physical safety of natural persons, or of a terrorist attack.” Law enforcement could also use real-time facial recognition tools to detect, locate, identify, or prosecute a “perpetrator or suspect” of a crime of a certain severity.

According to Matthew Mahmoudi, a researcher and adviser for Amnesty Tech, these exceptions are too broad, as they could still allow for many abuses against certain communities. For instance, the exception that would allow for real-time facial recognition to be used “on people suspected of illegally entering or living in a EU member state… will undoubtedly be weaponised against migrants and refugees,” Mahmoudi said.

Even setting aside the proposal’s exceptions, the bans themselves appear quite limited when compared to what is happening in the real world today.

As an example, the proposal does not ban after-the-fact facial recognition by law enforcement, in which officers could collect video imagery after a public event and run facial recognition software on that footage from the comfort of their stations. Though the EU Commission’s proposal of course applies only to Europe, this type of practice is already rampant in the United States, where police departments have lapped up the offerings of Clearview AI, the facial recognition company with an origin story that includes coordination with far-right extremists.

The problem is severe. As uncovered in a BuzzFeed News investigation this year:

“According to reporting and data reviewed by BuzzFeed News, more than 7,000 individuals from nearly 2,000 public agencies nationwide have used Clearview AI to search through millions of Americans’ faces, looking for people, including Black Lives Matter protesters, Capitol insurrectionists, petty criminals, and their own friends and family members.”

BuzzFeed found similar police activity in Australia last year, and on the very same day that the EU Commission released its proposal, Malwarebytes Labs covered a story about the FBI using facial recognition to identify a rioter at the US Capitol on January 6.

This type of activity is thriving across the world. Digital rights experts believe now is the best chance the world has to stamp it out.

But what isn’t banned by the proposal isn’t necessarily unrestricted. Instead, the proposal creates a second tier of restrictions for other types of activities it deems “high-risk.”

High-risk AI and oversight

The next segment of the proposal places restrictions on “high-risk” AI applications. These uses of AI would not be banned outright but would instead be subject to certain oversight and compliance, much of which would be performed by the AI’s developers.

According to the proposal, “high-risk” AI would fall into the following eight broad categories:

  • Biometric identification and categorization of natural persons
  • Management and operation of critical infrastructure
  • Education and vocational training
  • Employment, workers management, and access to self-employment
  • Access to and enjoyment of essential private services and public services and benefits
  • Law enforcement
  • Migration, asylum, and border control management
  • Administration of justice and democratic processes

The proposal clarifies which types of AI applications would be considered high-risk in each of the given categories. For instance, not every single type of AI used in education and vocational training would be considered high-risk, but those that do qualify would be systems “intended to be used for the purpose of determining access or assigning natural persons to educational and vocational training institutions.” Similarly, AI systems used for employment recruiting—particularly those used to advertise open positions, screen applications, and evaluate candidates—would be classified as high-risk under the broader category of AI for employment, workers management, and access to self-employment.

Here, again, the proposal angered privacy experts.

In January of this year, 61 civil rights groups sent an open letter to the European Commission, asking that certain applications of AI be considered “red lines” that should not be crossed. The groups, which included Access Now, Electronic Privacy Information Center, and Privacy International, wrote to “call attention to specific (but non-exhaustive) examples of uses that are incompatible with a democratic society and must be prohibited or legally restricted in the AI legislation.”

Of the five areas called out as too dangerous to permit, at least three are considered merely “high-risk” by the European Commission’s proposal, including the use of AI for migration management, for criminal justice, and for predictive policing.

The problem, according to the group Access Now, is that the proposal’s current restrictions for high-risk AI would do little to actually protect people who are subject to those high-risk systems.

Per the proposal, developers of these high-risk AI systems would need to comply with several rules, compliance with which would be largely self-assessed. They would need to establish and implement a “risk management system” that identifies foreseeable risks. They would need to draft, and keep up to date, their “technical documentation.” They would need to design their systems to keep records automatically, ensure transparency, and allow for human oversight.
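The proposal does not prescribe what that automatic record-keeping should look like in practice. As a purely hypothetical sketch—the function name, log format, and fields below are invented for illustration and are not drawn from the proposal—a high-risk system might append a timestamped audit record for every automated decision it makes:

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical illustration only: the proposal requires automatic
# record-keeping for high-risk AI systems but does not mandate a format.
# This sketch writes each prediction as a timestamped JSON line so that
# decisions can be audited after the fact.
logging.basicConfig(filename="ai_audit.log", level=logging.INFO,
                    format="%(message)s")

def log_prediction(system_id: str, model_version: str,
                   input_summary: dict, output: dict) -> None:
    """Append one audit record per automated decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,          # which high-risk system ran
        "model_version": model_version,  # exact version, for traceability
        "input_summary": input_summary,  # enough context to reconstruct the case
        "output": output,                # the decision or score produced
    }
    logging.info(json.dumps(record))

# Example: a fictional recruitment-screening system records its decision.
log_prediction(
    system_id="cv-screening-demo",
    model_version="1.4.2",
    input_summary={"applicant_id": "anon-123", "role": "analyst"},
    output={"shortlisted": False, "score": 0.41},
)
```

Whatever the format, the point of such records is traceability: a regulator, or a person affected by an automated decision, could later reconstruct which system, and which version of it, produced a given outcome.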

According to the European Digital Rights (EDRi) association, these rules put too much burden on the developers of the tools themselves.

“The majority of requirements in the proposal naively rely on AI developers to implement technical solutions to complex social issues, which are likely self-assessed by the companies themselves,” the group wrote. “In this way, the proposal enables a profitable market of unjust AI to be used for surveillance and discrimination, and pins the blame on the technology developers, instead of the institutions or companies putting the systems to use.”

Finally, the proposal would place some oversight and regulation duties into the hands of the government, including the creation of an “EU database” that contains information about high-risk AI systems, the creation of a European Artificial Intelligence Board, and the designation of a “national supervisory authority” for each EU Member State.

This, too, has drawn pushback, as the new regulatory bodies could overlap in responsibility with the European Data Protection Board and the Data Protection Authorities already designated by each EU Member State under the General Data Protection Regulation.

What next?

Though AI technology races ahead, the EU Commission’s proposal will likely take years to implement, as it still needs to be approved by the Council of the European Union and the European Parliament to become law.

Throughout that process, there are sure to be many changes, updates, and refinements. Hopefully, they’re for the better.

ABOUT THE AUTHOR

David Ruiz

Pro-privacy, pro-security writer. Former journalist turned advocate turned cybersecurity defender. Still a little bit of each. Failing book club member.