Clearview AI shows government needs to be agile in making tech regulation: experts

Shruti Shekar
Telecom & Tech Reporter

Privacy and tech experts say that governments must be agile in creating laws to protect their citizens from ethically dubious applications of artificial intelligence (AI). 

On January 18, The New York Times reported that a company called Clearview AI is working with hundreds of law enforcement agencies in the U.S., including the F.B.I. 

The company allows users to upload a picture of a person; if the photo matches a face in its database of three billion images, the software can potentially return details such as the person’s name and address. 

The three billion photos are harvested from Facebook, Venmo, YouTube, and other sites. 

Ann Cavoukian, former information and privacy commissioner of Ontario, said in an interview that none of the images that were harvested were obtained with the consent of users and “law enforcement should know better.”

She said that Canada needs “massive major laws to prevent” a similar situation occurring.

“We should have started years ago, we haven’t; so fine, let’s start now,” she said, adding that some U.S. jurisdictions, such as San Francisco and Oakland, Calif., have banned the use of facial recognition by law enforcement. 

Cavoukian’s biggest concern is whether the Royal Canadian Mounted Police (RCMP) is using the software. 

In an email, a spokesman for the RCMP would neither confirm nor deny that it is using Clearview AI as an investigative tool. 

“Generally, the RCMP does not comment on specific investigative tools or techniques. However, we continue to monitor new and evolving technology.”

Yahoo Finance Canada asked Clearview AI if it works with the RCMP or if it plans to partner with any Canadian authorities. The company did not respond by press time. 

Stephanie Carvin, a security expert and assistant professor at Carleton University, said in an interview that the technology space evolves quickly and the government lacks agility in creating policy to regulate it.

“Impulses are always in reactive mode,” she said.

Carvin said stories like the Times report on Clearview AI undermine trust in AI and its applications, even though the technology could potentially be used for good, such as finding a perpetrator.

The federal government’s House of Commons Standing Committee on Access to Information, Privacy, and Ethics was studying the ethics of artificial intelligence before the fall federal election. 

Nathaniel Erskine-Smith, a Liberal MP who used to sit on the committee, said in an interview that the government has an algorithmic impact assessment framework established, but it has not been implemented yet. Until it is, he said “[police] agencies shouldn’t be employing AI where there are obvious [negative] effects on civil liberties.”

Erskine-Smith said that the Digital Charter, which was introduced by Innovation, Science, and Industry Minister Navdeep Bains last year, will help lawmakers establish rules to regulate technologies like AI.

The Personal Information Protection and Electronic Documents Act requires organizations to obtain an individual’s consent before collecting or using that person’s personal information. The act does not, however, set out rules for the use of AI or the collection of data for facial recognition. 

Ramona Pringle, an associate professor at Ryerson University, said because the government hasn’t been able to keep up with the changes and advancements in technology, companies are “essentially running rogue.”

She said it was important for regulations to be put in place quickly, because without solid rules individuals are vulnerable to nefarious uses of AI.

“I don’t know what individuals are supposed to do once the technology is around. We can’t put the genie back in the bottle. We can’t turn back time,” she said. “Our responsibility is regulation.”

According to the Times, Clearview AI has not made its software available for public use. 

According to a BuzzFeed report, Clearview AI claimed that its technology helped the New York Police Department capture an alleged terrorist, but the department denied those claims. 

Pringle emphasized that while AI has great use-case scenarios, like that of trying to catch perpetrators of crime, the technology is not as advanced as it is sometimes presented. 

“AI right now is just snake oil, and everyone is jumping on the bandwagon,” she said. “There’s been billions of dollars poured into AI, so many startups that say ‘this is powered by AI’ but we haven’t necessarily seen the great… breakthrough. The power isn’t necessarily artificial intelligence, but that it is still built upon the infrastructure of a decade or more of data collection by other companies.”