“Smart” traffic cameras that use artificial intelligence to try to spot people using cell phones while driving are being rolled out in Australia. The devices take a high-resolution photograph through the front windshield of each passing vehicle and capture its license plate. Each photograph is then analyzed by an AI algorithm. If the algorithm decides that the driver is touching a mobile phone, tablet, or other device, it forwards the photograph to a human reviewer, who confirms the violation and issues a citation, along with a hefty fine, to the car’s registered owner.
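To make that workflow concrete, here is a minimal, hypothetical sketch of such a detect-then-review pipeline in Python. The threshold, names, and review queue are all assumptions made for illustration; nothing here is Acusensus’s actual code or API.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Capture:
        photo_id: str   # stand-in for the high-resolution windshield photo
        plate: str      # license plate read on the same pass
        score: float    # model's estimated probability the driver is on a device

    review_queue: List[Capture] = []  # flagged photos awaiting human confirmation

    def triage(capture: Capture, threshold: float = 0.8) -> None:
        """Route a capture: likely violations go to a human reviewer, who must
        confirm the violation before a citation is mailed to the registered
        owner; everything else is discarded."""
        if capture.score >= threshold:
            review_queue.append(capture)
        # else: the photo would be deleted here rather than retained

    triage(Capture("img-001", "ABC123", score=0.93))
    triage(Capture("img-002", "XYZ789", score=0.12))
    print(len(review_queue))  # -> 1: only the first capture reaches a human

Note that in a design like this, a single number (the threshold) controls how many drivers’ photos human officials end up inspecting.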
This technology represents one of the first significant examples of something that we have warned may become common: the use of smart surveillance cameras to take the place of human police officers in visually enforcing rules and regulations of all kinds. Except these devices won’t just take the place of human officers; they’ll make it possible to greatly increase the scale and pervasiveness of enforcement. No jurisdiction is going to station three human police officers on every highway mile and city block to do nothing but look for and issue citations to distracted drivers — but with AI cameras, the equivalent could easily be done.
The age of robot surveillance is around the corner and the watchers will soon far outnumber the watched. The “mobile phone detection cameras” being deployed in Australia are made by a company called Acusensus, which says that its system can detect texting drivers at night, in all weather conditions, through sun glare, and at high speeds. According to the company, the “system hardware is compact and unobtrusive” — meaning easy to hide — and “detection can be performed in real-time to assist police operations.”
The company is currently pitching its product in the United States and Canada, though I have not heard of a deployment in the United States so far (and the company’s web site does not boast about such a deployment, as we would expect it to if one existed). I am not sure how many other companies sell competing products, though I would expect that any company with expertise in computer vision could develop one relatively easily.
Certainly, the use of mobile phones by drivers is a very serious problem. As I’ve long pointed out, driving cannot be seen as a purely individualistic activity. What we do with and in our cars affects not just our safety but the safety of other people — and the amount of carnage on our roadways each year is devastating. As a result, driving is already a highly regulated activity. There is also substantial evidence that smartphone use while driving contributes significantly to that human toll.
But the arrival of this kind of AI monitoring technology presents us with larger decisions that we’re going to have to make as a society. Currently, cars are often considered quasi-private spaces, where people do all kinds of things, from eating to applying makeup to changing their clothes to — yes — looking at their cellphones.
We could decide as a society that the dangers of distracted driving are so high that we don’t want the interiors of our cars to be at all private, and declare them fair game for high-resolution photography that can be scrutinized by government officials. We have no independent information about how accurate the Australian system is, or how accurate others like it will be, though some false positives are inevitable. That means that every driver will be subject to having their photograph randomly scrutinized by the authorities.
We should expect that these devices will be able to pick up other things besides texting. Already the Australian vendor boasts that the system can be set to flag behaviors including “eating, drinking and smoking, adjusting vehicle settings (radio, etc.), and use of mobile and navigation devices in a holder.” Whether the AI can discriminate between a driver drinking a beer and one drinking a root beer is unclear, which means that a swig of any beverage behind the wheel could get a photo of you scrutinized by the authorities.
Photographs may expose other things as well, from the presence of guns or drugs to on-the-road sexual activities, as well as private things like reading material, intimate personal effects, and passengers and drivers adjusting their clothes in ways that reveal their bodies at times they reasonably believe they can’t be seen by others. In the absence of tight controls over the handling of photographs, some revealing photographs will inevitably be saved and shared for voyeuristic purposes by those whose job it is to review them.
In Australia, the vendor says that its system shows only images of the drivers, not passengers, to the human reviewers, though we don’t know how reliable the automated redaction of photos is, or whether other vendors would also follow this practice. In media reports (though not on its web site), the company also says it quickly deletes photographs in which the AI finds no sign of a violation. But in New South Wales, 8.5 million photographs were taken in just a six-month period; that kind of photographic database might prove valuable in all kinds of ways that a for-profit company would want to exploit. A system with the power of this one should never be deployed with privacy protections that depend on the promises and voluntary practices of a company; it should be subject to statutory protections.
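To underline that point: as described, both safeguards are just ordinary code paths that the vendor could change at any time. The sketch below is purely illustrative; the crop columns, the one-hour grace period, and the function names are assumptions of mine, not anything Acusensus has published.

    from datetime import datetime, timedelta, timezone

    GRACE = timedelta(hours=1)  # assumed stand-in for "quick" deletion

    def should_delete(taken_at: datetime, flagged_by_ai: bool) -> bool:
        """A photo the AI cleared is purged once the grace period elapses;
        flagged photos are kept for human review."""
        age = datetime.now(timezone.utc) - taken_at
        return (not flagged_by_ai) and age > GRACE

    def reviewer_crop(frame: list, driver_cols: slice = slice(400, 800)) -> list:
        """Show reviewers only the driver-side columns of the frame, so that
        passengers are (in principle) withheld. The column range is an
        arbitrary stand-in for a real driver-localization step."""
        return [row[driver_cols] for row in frame]

    # Example: a two-hour-old photo the AI cleared is due for deletion.
    print(should_delete(datetime.now(timezone.utc) - timedelta(hours=2), False))  # True

The point of the sketch is that each protection amounts to one conditional and one default value; a software update could quietly remove either. Only statutory requirements make such protections durable.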
If we decide as a society to allow these devices to be deployed, we might require that drivers be given notice of their locations so that they can adjust their behavior. Or, we might allow them to be deployed without public notice to better deter dangerous behavior. That would create a “panopticon effect” in which everybody must act as if they are being scrutinized by the authorities at every moment, since they never know at what moments they actually will be, creating in drivers what Michel Foucault called “a state of conscious and permanent visibility.”
That would represent a fairly significant change in what it is like to drive in America. If we make a decision as a society to routinely extend the eye of the state into the interior of our vehicles in this way, that is a decision that should a) be known to all, and b) be made through transparent democratic processes. The decision should not be made by police departments unilaterally throwing the technology into our public spaces without asking or even telling the communities they serve. That is something we’ve seen happen with too many other technologies, including license plate scanners, aerial surveillance, and face recognition. In cities where our recommended “Community Control Over Police Surveillance” legislation has been enacted, democratic review will be required, but police departments in every city and state should leave this decision to the communities they serve.
The other thing we must consider if we decide to permit this technology to be used is where things will go from there. Already a number of companies are selling in-vehicle “fleet cameras” designed to monitor employees who drive for a living, subjecting those workers to constant robot surveillance and judgment. Personal vehicles, too, are beginning to feature cameras that monitor drivers for distraction or drowsiness.
And AI smart cameras may well end up covering much more mundane behaviors. We could find ourselves fined for such offenses as cutting the edge of a crosswalk or putting materials in the wrong recycling bin. (That latter scenario is not such a stretch; some municipal governments in the United States have already equipped garbage trucks with video cameras that monitor the bins being emptied at each residence to determine if the right materials are coming out of each container, facilitating fines for noncomplying residents.)
Aside from privacy issues, these cameras would also raise other questions:
• Would there be racial bias in their deployment patterns or in the adjudications that human reviewers make of ambiguous photos?
• Would decisions to charge based on photos be made by sworn police officers only? With red-light cameras, we saw deployments that gave vendors a role in deciding guilt and innocence — and running the program in ways that created financial incentives to increase tickets.
• Would the cameras be fair? Unlike a citation issued by a live officer, automated accusations arrive by mail (if they arrive at all) long after the alleged violation. That makes it harder for people to recall the details of the alleged violation in order to dispute a charge based on errors or extenuating circumstances.
• As with red-light cameras, there are also fairness questions around the fact that a car’s owner is the one cited when someone else could have been driving it.
Stopping texting drivers to lower traffic deaths is the kind of sympathetic goal that new surveillance technologies are always first deployed to address. But, as we consider going down that road, we need to figure out where we will draw the line against automated surveillance, lest we end up being monitored by armies of digital sticklers scolding, flagging, and fining us at every turn.
[Figure: Maps displaying mounted surveillance cameras on interstate highways and streets around Atlanta, Ga., in zoomed-out and zoomed-in views.]