Welcome back to Tech Chronicle. If you recognize this as a fine newsletter to subscribe to, I’ll face up to it.
You’ve got to recognize
IBM faced up to it.
The tech company is no longer developing facial-recognition software, CEO Arvind Krishna told U.S. lawmakers in a letter Monday. Krishna called for a “national dialogue” on how — or even whether — police should be allowed to use face-scanning technologies.
IBM and other tech companies had been criticized for racial bias in their facial-recognition software, something they pledged to fix. But IBM is taking an intriguing stance: that the social ills of highly effective facial-recognition software, deployed at massive scale and fed by mass surveillance or crowdsourced imagery, are simply too great. Such technology, Krishna’s move suggests, is inherently biased against the disadvantaged. Instead of being too big to fail, it’s too bad to fix.
There are always those who are willing to test the boundaries of what technology can do and what society will allow, like Clearview AI, the facial-recognition startup whose transgressive use of images posted on social media and other websites has already drawn privacy lawsuits.
It’s become clear that machine learning, applied to the massive corpus of images ubiquitous smartphone and CCTV cameras have captured, will get better and better over time, possibly at an exponential rate. People thought masks worn to fend off the coronavirus might also defeat facial recognition software; instead, fed on a diet of real and simulated images of masked faces, the algorithms rapidly adapted.
We have to think beyond just faces. As Andreas Weigend, author of “Data for the People,” has pointed out, AI-equipped security cameras can identify someone by their gait, and a smartphone can capture fingerprints from almost 10 feet away.
What we need here are things we are often short on: restraint and common sense. There’s certainly a role for legislation and regulation. But don’t count out the power of private action.
What if the most promising AI engineers took a vow — something like the Hippocratic Oath — not to work on projects that could harm the vulnerable and strengthen the already powerful? What if employers competed to hire those engineers based on the strength of their ethical guidelines, rather than the growth in their government contracts? What if cities courted both companies and workers based on their policies to restrict unethical use of technology? (San Francisco and Oakland were the first and third U.S. cities to ban use of facial recognition by local agencies.)
What we don’t want to do is let potentially dangerous technologies race ahead of our ability to grasp them, let alone regulate them.
— Owen Thomas, email@example.com
Quote of the week
“Americans don’t want to be at the mercy of speech police, online or anywhere else.” — Sen. Ron Wyden, D-Ore., writing in CNN Business on proposals to weaken Section 230 of the Communications Decency Act
Coming up
Adobe reports earnings Thursday. Since the San Jose software company has increasingly shifted its business toward digital marketing, its quarterly results might offer a glimpse of how much e-commerce has grown during shelter-in-place.
What I’m reading
Tom Krazit profiles Microsoft security chief Bret Arsenault, skier, race-car driver and password killer. (Protocol)
Chase DiFeliciantonio on a securities fraud case that could be the first involving the coronavirus. (San Francisco Chronicle)
Sarah Emerson on the new controversies around Menlo Park’s Facebook-funded police unit. (OneZero)