The attack on dozens of smart lampposts by chainsaw-wielding protestors in Kwun Tong last year killed off many of Hong Kong’s “smart city” ambitions, but a parallel police project investigating the use of Artificial Intelligence (AI) in traffic safety continued undeterred. Seeing the way the wind was blowing, the HK$2.2-million publicly funded project simply packed up its three-camera system in protest-hit Central, moved to a new, quieter location in Sha Tin and began recalibrating its machine learning algorithms for the new environment.
There’s nothing intrinsically sinister about the system, which uses three basic cameras linked to a machine learning service. Developed with police funding at Cyberport’s Logistics and Supply Chain Multitech R&D Centre (LSCM), the AI backbone is designed, in the first phase, to learn to identify different vehicle types and to spot traffic issues such as the blocking of yellow box junctions.
Processing is done in situ, according to project leader Dr Dorbin Ng, with no raw camera data uploaded to police. What police will eventually get, says Ng, is a stripped-down view of the street. “We can say, ‘there were two taxis here, there was a bus there.’ There won’t be any identifying data involved,” says Ng. Nor, at this stage, is any law enforcement action linked to the system, he says.
But that’s only the first stage: as the system develops, machine learning will be used to identify behaviours associated with traffic offences. Ultimately it will allow automatic tickets to be issued for some types of illegal parking and for offences such as blocking yellow box junctions.
Once fully up and running, police will have control of the system and the data it provides, according to a police source who requested anonymity.
“Benefits are that it offers a 24/7 enforcement presence at identified blackspots (for whatever type of offence),” they say, “where it’s impossible to station police officers or traffic wardens around the clock.”
The system won’t replace police officers but will augment the work of traffic police. “It won’t result in any reduction of human resources as there is plenty of work for them to do in areas that automated systems cannot handle, for example, less than clear-cut offences, where cameras cannot get a good view.”
The system is smart – even elegant, in a technical AI sense – and there will be significant road safety benefits. There were over 20,000 deaths or injuries on Hong Kong’s roads in 2019, and according to the World Health Organisation, road traffic deaths are now globally the leading cause of death for the 5-29 age group. And while illegal parking may seem mundane and harmless, such behaviour contributes insidiously to road carnage: illegally parked cars often force pedestrians into the road and block driver and pedestrian sight lines.
But despite the obvious advantages of an automated system for fighting the parking menace, such a system will also no doubt raise privacy concerns in a city already tense from the government’s smart lamppost surveillance backlash. And while the police project is not related to smart lampposts, it is still, at heart, a camera on a post, something Hong Kong’s privacy tsars have been cracking down on since July last year, even before protestors took hammers, chainsaws and angle-grinders to Kwun Tong’s smart lampposts.
Smart lamppost woes
It was political party Demosistō that stoked the rumour that Hong Kong’s pilot smart lampposts hid technology to track people’s movements through systems linked to Smart ID cards. The fake news spread like wildfire in Hong Kong’s tinderbox chatrooms and forums, and pilot lampposts were slain faster than Frankenstein’s monster. On 24 August 2019, 20 lampposts along Sheung Yuet Road were damaged (one fully felled); a few weeks later, a further 10 were damaged near Kwun Tong town centre. Bluetooth supplier TickTack pulled out of the project, citing threats to the personal safety of family members of directors, something Hong Kong’s Innovation and Technology Bureau said was “a serious blow to the hard work of the local innovation and technology industry”.
But smart lampposts had actually been under fire well before the first physical attack: on 16 July 2019, responding to fast-spreading online rumours, the Office of the Government Chief Information Officer (OGCIO) had demanded that any camera functions be switched off. None of the 200 pilot lampposts would use cameras to track illegal waste dumping, traffic flows or traffic speed, with smart lampposts throttled back to simple weather and environmental monitoring, said OGCIO, responding to public concerns.
OGCIO also pledged a new committee to investigate the governance and operation of the lampposts, and it held its first meeting on 12 August 2019, just 12 days before the Demosistō rumour sparked the anti-lamppost action. The committee met a total of seven times, delivering its final report in early March and recommending that only the fuzziest of visual technologies be used to keep watch over the streets. Cameras were out; low-resolution radar and thermal imaging – the same technology used to check office workers for coronavirus fevers – were in.
Some local activists expressed regret at the episode, claiming the loss of traffic monitoring was an environmental setback and a failure for road safety. But nobody, not even the OGCIO, it seems, knew the police had been running its own AI version of the smart lamppost system, in parallel, for at least a year.
OGCIO told Transit Jam it had no knowledge of or involvement with the police LSCM project, and that it was not related to smart lampposts.
Steven Feldstein, author of The Global Expansion of AI Surveillance, has warned of the potential dangers in governments’ adoption of these new technologies: although they are not nefarious in themselves, they open up many potential harms. “AI surveillance allows regimes to automate many tracking and monitoring functions formerly delegated to human operators,” he writes. “AI technology can cast a much wider surveillance net than traditional methods. Unlike human operatives, with limited reserves of time and attention, AI systems never tire or fatigue. As a result, this creates a substantial ‘chilling effect’.”
But a police source says there are privacy benefits from the police approach. “An automated system, such as this, is actually better for personal data privacy, as it only gives/saves contravention data. Anything unused is automatically overwritten,” they say. So police, for example, wouldn’t have access to the camera images to hunt down or identify suspects in the area following a crime… although the system could give a ticket to an illegally parked getaway car, creating an official police record of that vehicle’s location at a specific date and time, which could be useful to prosecutors.