“When it comes to issues such as safeguards for facial recognition, we have no national law at all,” Microsoft president Brad Smith wrote. “We need new laws fit for the future.” IBM CEO Arvind Krishna told Biden his company was “ready to work with you” on prohibiting use of the technology for “mass surveillance, racial profiling, or violations of basic human rights and freedoms.”
The suggestions follow a recent surge in corporate interest in face recognition legislation. A WIRED review of congressional lobbying filings shows that mentions of the technology jumped more than four-fold from 2018 to 2019, and are on track to reach new heights in 2020.
It’s not just tech firms such as Amazon, Microsoft, and IBM lobbying on the issue. Companies and trade groups representing retailers, chip makers, cruise ships, wireless providers, and airlines have all sought lawmakers’ ears to talk about face algorithms.
The lobbying surge coincides with the spread of local and state bans and restrictions on face recognition across the US, from Portland, Oregon, to Portland, Maine. Despite the sharp divisions and low productivity of Congress during the past four years, there’s bipartisan interest in restricting the technology in some way.
After a series of House Oversight Committee hearings that highlighted concerns about police surveillance and evidence that face recognition makes more errors on Black faces, conservative representative Jim Jordan (R-Ohio) joined Democrats in saying it was time to regulate law enforcement use of face algorithms. Several bills were introduced in both the Senate and House by lawmakers from both sides of the aisle in the past two years, including a recent Democratic proposal to halt federal use of the technology. Lobbying filings don’t reveal companies’ specific policy desires, but Amazon, Microsoft, and IBM have spoken in favor of restricting rather than banning the technology.
The Trump administration didn’t show much interest in face algorithms, but the Biden administration might create a more supportive environment for regulating the technology, even if Democrats don’t win control of the Senate in January.
Jake Parker, senior director of government relations at the Security Industry Association, some of whose members sell face recognition, hopes the Biden administration will see rules for the technology as supporting the US artificial intelligence industry. “There’s a US leadership role in making sure that AI is used in beneficial ways,” Parker says. The SIA wants law enforcement to keep using the technology, but with greater transparency.
Daniel Castro, vice president at the Information Technology and Innovation Foundation, a Washington think tank that receives funding from the tech industry, says face recognition rules would fit with the Biden campaign’s interest in privacy and racial justice. “There was already an opening for potential action, and it could go further with the Biden administration,” he says. “This might be one of the items that they put on the agenda.”
Biden’s interest in racial justice is driven in part by the outpouring of protest in response to the killing of George Floyd by Minneapolis police this year. The incident prompted IBM to announce in June that it would no longer offer face recognition. Soon after, Microsoft said it would pause sales to law enforcement until federal legislation was in place. Amazon also halted law enforcement sales, but only for a year.
Chris Padilla, IBM’s vice president of government and regulatory affairs, says the company was already concerned about inaccuracies in face recognition but that the Floyd protests made the company more aware of the potential consequences of any errors. “We’re not confident these things can be made to work in a law enforcement context accurately enough,” he says.
IBM hasn’t taken a position on a federal ban of law enforcement use, but urges a national discussion of options for regulation. Padilla also says the government should ban export of the technology to authoritarian countries such as China.
Amazon directed WIRED to a 2019 blog post that suggested safeguards on law enforcement use of face recognition but also stated that “new technology should not be banned or condemned because of its potential misuse.” A Microsoft spokesperson said government rules for whether and how face recognition is used should be “grounded in human rights protections like privacy, freedom of expression, and freedom of association.”
Tech companies and some lawmakers broadly agree that face recognition rules are needed, but there won’t be easy agreement on what exactly the limits should be. A Washington state law passed in March, supported by Microsoft and introduced by a state senator who works for the company, illustrates some of the divisions.
Washington’s law requires government agencies to disclose information about their use of face-recognition technology and that technology’s accuracy on different demographics. It also requires “meaningful human review” when the technology is used for major decisions, and it prohibits law enforcement from using face algorithms on live video feeds except in emergencies.
Microsoft called that law an “important model,” but it is more permissive than the outright bans on government use of face recognition passed in more than a dozen cities, including Boston and San Francisco. Portland passed a law that also bars use by private companies, over opposition from Amazon.
Jennifer Lee, technology and liberty project manager at ACLU of Washington, hopes the state’s law does not become a national model. “We need a strong state surveillance ban and to ensure agencies and companies can’t use facial recognition to profile people,” she says. “The bill that passed doesn’t include those measures.” The ACLU is working on a new Washington privacy bill and hopes to include a requirement that companies ask consumers to opt into face recognition before using the technology.
Some proposals floated in Congress have been more strict than Washington’s law. In 2019, senators Roy Blunt (R-Missouri) and Brian Schatz (D-Hawaii) introduced a bill that would require companies to obtain consent before collecting face-recognition data. Filings indicate that Amazon, IBM, and Microsoft all lobbied lawmakers on the bill. In June, a group of Democratic senators and representatives introduced a bill that would place a moratorium on the use of biometric technology, including face recognition, by federal agencies.
In the absence of federal regulation, face recognition has become easier to access and more widely used. That, and the range of industries lobbying in DC on the issue, suggests that hammering out ground rules will be complicated. “Facial recognition is increasingly the most effective, seamless way to identify people in many different kinds of applications,” says Castro, of ITIF.
Some invoke the pandemic as reason not to restrict it too tightly. Filings indicate that Airlines for America began lobbying on face recognition in 2018, but became more active in 2019. A spokesperson for the group says it wants to see a balance between protecting privacy and allowing technology that could make travel smoother, especially at a time when airports are more stressful than usual. “Airlines are committed to strong privacy principles as it relates to biometrics,” she says. “We also believe affording passengers the ability to experience a touchless process throughout their air travel journey will help to build confidence during the ongoing Covid-19 crisis.”
Source: wired.com, by Tom Simonite