Artificial intelligence (AI) has been part of our world for decades, with its roots tracing back to the early 1950s, when the first computers capable of storing information and executing stored programs emerged. Despite this long history, AI's transformative impact on various fields still raises concerns and uncertainties among many. Notably, experts assert that the AI revolution in surveillance is already underway.
Businesses are increasingly leveraging AI capabilities to enhance analytic processing, while city officials deploy AI to monitor and manage diverse aspects such as traffic congestion and smart energy metering. However, a growing trend sees numerous states employing advanced AI surveillance tools to monitor, track, and surveil citizens for a range of policy objectives. These objectives can range from lawful pursuits to potential violations of human rights, creating a complex and ethically ambiguous landscape.
Privacy emerges as a fundamental concern within US society, intensified by the omnipresence of cameras in various forms, from stores and satellites to smartphones. The deployment of AI for security purposes has witnessed a significant uptick in recent years. Companies are actively incorporating AI into policing methods, deploying solutions like biometrics, facial recognition, smart cameras, and video surveillance systems. Some studies suggest that crime rates may fall by up to 30 percent where these systems are used.
The universal presence of cameras and the adoption of AI for security raise critical questions about the balance between public safety and individual privacy. As these technologies become increasingly integrated into our daily lives, the need for robust ethical frameworks and legal safeguards becomes imperative. Striking a delicate balance between harnessing the benefits of AI for security and safeguarding individual rights is a challenge that requires careful consideration and proactive measures. The intersection of AI and surveillance demands a thoughtful and comprehensive approach to ensure that technological advancements align with societal values and ethical principles.
Beyond policing methods, the widespread use of AI carries significant geopolitical implications, with China indisputably among the leaders in developing these technologies for both domestic and international use. However, other noteworthy players include the United States, Israel, Russia, multiple European countries, Japan, and South Korea. The impact of these technologies extends far beyond borders, manifesting in diverse applications.
While some instances involve the surveillance of political dissidents and the suppression of Uyghur and Turkic Muslim populations in China, more commonplace uses, like one-to-one verification at banks and gyms, also raise valid concerns. The collection of high-quality data in these seemingly routine applications contributes to the enhancement of facial recognition technology, with potential authoritarian uses over time.
Efforts by the United States and partner democracies, such as sanctions, export controls, and investment bans, aim to curb the unbridled spread of surveillance technology. However, the opaque nature of supply chains complicates the assessment of the effectiveness of these measures. Notably, a significant gap exists at the international standards level, where Chinese companies dominate proposals for facial recognition standards at institutions like the United Nations' International Telecommunication Union (ITU).
In the ever-evolving landscape of AI and surveillance, international collaboration and the establishment of clear standards become paramount. As the world grapples with the ethical dimensions of these technologies, fostering transparency and ethical guidelines at the global level is crucial to ensuring a future where innovation aligns with democratic values and respects individual rights.
The year is 2027: China has continually perfected its full-fledged nationwide surveillance architecture in the form of smart and secure cities as well as the social credit system. The results cannot be denied: thanks to artificial intelligence (AI), surveillance systems throughout the streets plaster the faces of jaywalkers on billboards, and drivers of speeding cars are immediately informed that they have been fined, leading to a new record low in traffic accidents.
At the same time, however, the government has employed AI surveillance systems as big-brother-style instruments of repression. For instance, AI tools have been honed to the degree that they can automatically grade comments critical of the government—whether made online or offline—and discipline citizens according to their statements. Punishments range from the reduction of social benefits to forced labor in detention camps. For non-nationals, entry bans have already been applied preemptively. Civil society groups observe that these AI surveillance applications consolidate the robustness of authoritarian regimes, lead to anticipatory changes in people’s behavior in favor of the government’s positions, and heavily compromise the human dignity of citizens.
Western governments are in a tricky situation: the effectiveness and sophistication of these systems are convincing. On the downside, authoritarian states use AI surveillance to track and control the movements of their citizens and non-nationals, collect data about their faces and gaits, and reuse the information for repressive purposes. Meanwhile, Chinese companies, which are at the forefront in developing and employing these systems, have already been busy striking deals with several countries to export and install their smart city packages. Due to the lack of internal consolidation within and cooperation among Western states, as well as the absence of a dedicated approach towards AI surveillance, containing the spread of these systems and their destructive side effects has not been successful.
Given the current state of AI surveillance as well as the speed of development, the above scenario is not an unrealistic Orwellian dystopia, but rather a potential continuation of current international trends. AI surveillance tools in various forms are spreading globally, from facial recognition and early outbreak detection to predictive policing and gait recognition. Despite different legal restrictions, authoritarian and democratic states alike are increasingly employing these instruments to track, surveil, anticipate, and even grade the behavior of their own citizens.
The application of these AI surveillance tools is a cornerstone of an emerging trend towards digital authoritarianism: the collection and application of information by states using digital tools to achieve repressive levels of societal control. These tools serve as exponential accelerants of preexisting surveillance practices, and state regimes might achieve unprecedentedly effective authoritarian rule. They could also strengthen the attractiveness of digitally driven, authoritarian practices for fragile democracies.
Because of its high technological ambitions and authoritarian outlook, China is at the leading edge of these trends and is confronted with the allegation of exporting ‘authoritarian tech’ to other states in order to expand its political and economic influence and advertise a governance model opposed to democratic notions. Against this backdrop, Western actors like the United States, which possesses the innovation edge in most technologies, as well as the European Union (EU) and its member states, face a difficult challenge: balancing the development, use, and export of AI surveillance systems while not abandoning democratic norms like their authoritarian counterparts.
The difficulty of this task, namely the effective use of technology on the one hand and preserving the privacy, human rights, and dignity of individuals on the other, was particularly apparent during the COVID-19 pandemic. Whereas some authoritarian states are using AI- and data-driven tracking systems to mitigate the crisis in an unrestricted fashion, a debate is slowly intensifying in the West about whether state authorities are using the crisis to inch towards a surveillance state.
Without a doubt, the pandemic revealed the risks of AI surveillance tools and has the potential to further accelerate the use of technologies for social control—especially in light of an ever-more data-intensive economy and society. Thus, the crisis presents an opportunity to kick off an international debate on how to set boundaries and use technology benevolently.
Western governments must find a way to address this growing trend of AI surveillance, especially since it will be difficult to persuade authoritarian regimes such as China to refrain from using it—the presumed advantages are all too tempting. Here, the United States, the EU, and other like-minded states should seize the opportunity presented by the rise of AI surveillance in the midst of the pandemic and adopt a multi-layered approach: Western states have to first figure out for themselves the right balance between the effective use of AI surveillance systems and preserving the privacy, human rights, and dignity of their own citizens. Building on that, the West should present an alternative model to digital authoritarianism that comprises the use of AI surveillance tools for democratic ends. And last, a nuanced approach towards China and other authoritarian states employing these systems has to be developed, one that encompasses the will to cooperate where possible and collectively sanction if necessary.
In order to shine light on the international trends in AI surveillance, this article first describes the associated developments, with a particular focus on China, the United States, and the EU, and then presents recommendations for enhancing an international approach to AI surveillance.
Defining AI surveillance
The AI Global Surveillance (AIGS) Index outlines three pivotal AI surveillance tools: smart/safe city platforms, facial recognition systems, and smart policing. These tools appear in different forms and are technically sophisticated and continuously evolving. Facial recognition, for instance, is increasingly accompanied by speech and gait recognition. Irrespective of their fields of use, the advantages of these systems in the eyes of state authorities are manifold: cost efficiency, reduced reliance on human workers, precise data analyses, and, more broadly, unprecedented possibilities for societal control.
Three aspects concerning the characteristics of AI surveillance are noteworthy in that context: first, these surveillance tools per se are not unlawful, and their deployment always depends on their specific application as well as their societal context. For instance, AI surveillance tools can be used both on the battlefield and for wildlife preservation.
Second, while AI surveillance is one of the key elements of a growing digital authoritarianism trend, other digital instruments also fuel this globally spreading development, including Internet censorship and firewalls, government-made spyware, state-supported disinformation campaigns, and other forms of surveillance via drones or GPS tracking.
And third, there are diverging views concerning the ways in which and to what extent these tools should be used and deployed, illustrated in their development and application in the Chinese, US, and European contexts.
China
There are several reasons that China is globally at the forefront of the development, use, and export of these AI surveillance systems. First, in light of Beijing’s “A Next Generation Artificial Intelligence Development Plan” and its general push for technological supremacy, the country is home to several cutting-edge AI surveillance companies and so-called unicorns (start-ups with a current valuation of $1 billion USD or more). Large companies such as Huawei, Hikvision, Dahua, and ZTE are developing these technologies in various forms, from AI-based video surveillance to full-fledged smart city packages. AI startups such as SenseTime (valued at $7.5 billion), Megvii ($4 billion), CloudWalk, and Yitu (both $2 billion) are the leading global players in facial recognition technology. In general, Chinese surveillance companies are in a dominant position, and according to estimates, they “will have 44.59 % of the global facial recognition market share by 2023.”
Second, Chinese state authorities are striving to establish a social credit system—“a big-data-fueled mechanism, to become a powerful tool for enforcement of laws, regulations or other party-state targets […] The idea is to centralize data on natural persons and legal entities under a single identity (the Unified Social Credit Number), then rate them on the basis of that data, and treat them differently according to their behavior.” The system is neither completed nor nationwide yet, but it is due to expand with the adoption of AI surveillance tools. The concept of collecting information about citizens in a centralized way is actually not that new, even among Western societies: the United States has criminal records and credit scores, and EU member states have healthcare histories. What is new is that China is tracking and using types of data that most Western countries would refrain from collecting. AI helps automate and scale surveillance, potentially enough to realize both its best and worst tendencies. China’s strong AI-related industrial and technological sectors, authoritarian tendencies, government involvement in production and research, lax data privacy laws, enormous population, and even a certain degree of societal acceptance of state practices all create the perfect environment for AI surveillance development and deployment.
In that context, the current focal point of the international criticism of China’s AI surveillance usage is the Xinjiang region, which has become “an unfettered surveillance laboratory for surveillance giants to hone their skills and their technology platforms without the usual constraints.” The combination of the suppression of Uighur and other minorities on the one hand and the testing and deploying of cutting-edge technology on the other is one of the most striking examples of digital authoritarianism. Recent revelations in this context showed that Huawei has allegedly developed and tested a so-called “Uighur alarm”, an AI-based face-scanning camera system that can detect persons of the Muslim minority group and alert Chinese authorities in Xinjiang. According to the reports, Huawei has developed these AI surveillance tools in cooperation with several domestic security firms.
Third, Chinese companies lead the way in exporting AI surveillance technologies internationally to sixty-three recipient countries, with Huawei at the forefront, supplying at least fifty. Uganda, for example, acquired a nationwide system of surveillance cameras with facial recognition capabilities from Huawei in August 2019, and from 2018 onwards, state authorities in Zimbabwe have acquired facial recognition technologies from Hikvision and CloudWalk for border security and mass surveillance. The gathered data will also be sent back to the Chinese companies’ headquarters, “allowing the company to fine-tune its software’s ability to recognize dark-skinned faces, which have previously proved tricky for its algorithms.” Other countries that have received technologies from Chinese companies include Eritrea, Kenya, Serbia, Sri Lanka, the Philippines, Uzbekistan, and Venezuela. Even though China leads in the global export of these technologies, opinions vary on whether Beijing has an intentional strategy for spreading digital authoritarianism as a new ideological blueprint. Regardless, experts fear that China will provide these technologies to other countries in the context of its Belt and Road Initiative (BRI) in order to conduct state espionage.
Chinese technology companies such as ZTE, Dahua, and China Telecom are eager to sway international standards bodies such as the International Telecommunication Union (ITU) for several AI surveillance forms, including facial recognition, video monitoring, and city and vehicle surveillance. The standards proposed by Chinese companies include broad application possibilities and rights for state authorities, like vast storage requirements for personal information and proposed application fields like cognition technology, from “the examination of people in public spaces by the police [to the] confirmation of employee attendance at work.”
Irrespective of whether or not China is intentionally promoting digital authoritarianism via its export of AI surveillance tools, it is providing mechanisms for unprecedented societal control all over the world. Moreover, its domestic deployment of these tools and the notions it has presented to international standard bodies differ from the practices and ideals of liberal democracies.
The United States
The blatant use and export of AI surveillance systems by Beijing has become an issue in the US-Chinese tech confrontation. In January 2020, then US Defense Secretary Mark Esper said that China is becoming “a 21st century surveillance state with unprecedented abilities to censor speech and infringe upon basic human rights. Now, it is exporting its facial recognition software and systems abroad.” This opinion was echoed by members of Congress from both parties, including House Intelligence Committee Chairman Representative Adam Schiff and Senator Marco Rubio. Democratic Senator Brian Schatz even proposed the “End Support of Digital Authoritarianism Act” in the summer of 2019. It would have barred companies from countries with a bad human rights record from participating in the Face Recognition Vendor Test (FRVT) held by the National Institute of Standards and Technology (NIST), known as the gold standard for measuring the consistency of facial recognition software.
The most salient reaction by US authorities occurred in July 2019, when the Commerce Department put eight Chinese companies and twenty Chinese government agencies on the entity list. Those companies and agencies are accused of “human rights violations and abuses in the implementation of China’s campaign of repression, mass arbitrary detention, and high-technology surveillance against Uighurs, Kazakhs, and other members of Muslim minority groups in Xinjiang.” It is now prohibited for US companies to export high-tech equipment to the listed Chinese agencies and firms, among them the three facial recognition unicorns SenseTime, Megvii, and Yitu, as well as the world-class surveillance camera manufacturers Dahua and Hikvision. However, some of these blacklisted companies have managed to circumvent the sanctions and still export their products to Western countries.
However, the United States itself is confronted with accusations of hypocrisy: in the aftermath of the September 11 terror attacks, US intelligence services massively expanded their surveillance practices, which became apparent with the Snowden revelations in 2013. Major US-based tech companies have also exported AI surveillance technologies all over the world. Surveillance systems have also been used by federal or state authorities beyond intelligence gathering—for instance, on the US-Mexico border, where “an array of high-tech companies purvey advanced surveillance equipment.” However, it would be misleading to draw parallels between the Chinese and US approaches: there are no signs that US authorities will deploy a similarly all-encompassing system to publicly surveil or even grade their citizens. Furthermore, companies headquartered in the United States are characterized by a largely transparent corporate structure within a rule-of-law framework, as opposed to the Chinese model.
In general, federal and state authorities in the United States are at the very beginning of considering the regulation of AI surveillance deployments, especially facial recognition. When it comes to existing stipulations in those areas, there are varying degrees of restrictions and controls in cities and states in the United States, leading to a patchwork of regulation across the country. In San Francisco, San Diego, and Oakland, city agencies are banned from using facial recognition technologies, while other cities such as Detroit allow a restrained use of facial recognition by their police departments. Recently, the Portland City Council prohibited the public and private use of facial recognition technology.
In light of the absence of nation-wide regulations, federal lawmakers have embarked on passing national legislation on facial recognition technology. For instance, the “Ethical Use of Facial Recognition Act” was proposed by Senator Jeff Merkley, which would “forbid the use of the tech by federal law enforcement without a court-issued warrant until Congress comes up with better regulation.” It would also establish a commission to further assess facial recognition technology and propose guidelines. Discussion of regulating facial recognition technology has gained traction in Congressional hearings over the last year.
IBM was the first company to announce, in a letter addressed to lawmakers, that it will cease to sell, develop, or research general-purpose facial recognition technology. Microsoft and Amazon followed to some extent, announcing they would pause the sale of such technologies to police forces. Both tech giants have publicly stated that they will not offer facial recognition technology to state and local police departments until proper national laws respecting human rights are enacted, and both companies have already called on lawmakers for such regulation.
In sum, the United States has recently increased its attention to and activity around these issues at home and abroad. However, a value-based approach addressing the trend of digital authoritarianism has played a comparatively minor role in the context of the current technological rivalry with China. Further, the regulatory landscape of AI surveillance, in particular facial recognition technology, is still a patchwork across states and cities, and comprehensive national legislation is absent as of now.
The European Union
In terms of artificial intelligence and the digital realm in general, the European Union has been following a ‘human-centered approach,’ which it is eager to promote as a global and unique selling point. In the previous European Commission under President Jean-Claude Juncker (2014-2019), the High-Level Expert Group on Artificial Intelligence issued the Ethics Guidelines for Trustworthy Artificial Intelligence, promoting the idea that so-called trustworthy AI should be lawful, ethical, and robust. These guidelines already point to the necessity of “differentiating between the identification of an individual vs the tracing and tracking of an individual, and between targeted surveillance and mass surveillance”.
The new Commission under Ursula von der Leyen (2019-2024) has signaled that this human-centered approach will be further developed. In terms of AI surveillance, the leak of the first draft of the EC’s white paper on AI in January 2020 made headlines for the document’s envisioned five-year ban on the use of facial recognition systems in public areas. However, the official version issued one month later had watered-down language with no mention of a potential ban. The document instead adopts a risk-based and sector-specific approach to set boundaries for AI systems, including facial recognition software. It says that the “gathering and use of biometric data for remote identification purposes, for instance through deployment of facial recognition in public areas, carries specific risks for fundamental rights.” In order to identify these risks and potential areas for regulation concerning remote biometric identification, “the Commission will launch a broad European debate on the specific circumstances, if any, which might justify such use, and common safeguards.” In that context, the white paper also puts particular emphasis on existing data protection rules, notably the EU’s General Data Protection Regulation (GDPR), since they only allow the processing of biometric data in very specific cases. Concrete EU legislative proposals on how to regulate AI applications are planned for the first half of 2021.
The white paper on AI, and the EU in general, falls short of addressing the global trend of digital authoritarianism. The EU follows an inward-looking approach and has still not displayed any grand aspirations to directly tackle the global element of the AI surveillance challenge. Against this backdrop, EU officials and high-ranking politicians from member states have been rather hesitant to criticize China’s social credit system or the repressive use of AI surveillance in Xinjiang.
The balancing act between an outright ban and the restrained use of these technologies is seen in their implementation. A German case illustrates the difficulty of the trade-off between effective crime control and data privacy concerns. After the Federal Ministry of the Interior tested facial recognition cameras at the Berlin-Südkreuz train station, it wanted to deploy these systems to one hundred and thirty-four railway stations and fourteen airports all over Germany. However, after lawmakers from the opposition and even coalition partners and civil society protested these plans, they were put on hold.
The approach of French state authorities is less restrained. In October 2019, the Ministry of Interior announced its plans to use facial recognition technology in the framework of its national digital identification program, named Alicem. This would make France the first EU Member State to use the technologies for digital identity. However, the plans have provoked criticism from civil society, and questions have been raised about whether the deployment is in conformity with the GDPR. In another case, French regulators ruled against the use of facial recognition technology in high schools. In order to provide overall legal clarification, the French government announced in January 2020 that a legal framework for developing facial recognition for security and surveillance would be established soon. Furthermore, in the aftermath of the latest terror attacks in Nice in October 2020, several high-ranking French politicians have called for the installation of AI surveillance tools in public spaces to tackle terrorism.
As with the situation in the United States, Europe is at the beginning of regulation and finding balance. However, with the AI white paper’s risk-based and sector-specific approach and GDPR, the EU has predetermined a potential framing for promoting its notions in the international debate.
Recommendations for starting the international debate
The COVID-19 pandemic was an important moment in the global use and potential containment of AI surveillance. As put by Nicolas Wright, a recognized tech-surveillance expert at the University College London (UCL): “Just as the September 11 attacks ushered in new surveillance practices in the United States, the coronavirus pandemic would do the same for many nations around the world. […] But neither the United States nor European countries have used the widespread and intrusive surveillance methods applied in East Asia.”
Therefore, the COVID-19 crisis and the recent global awareness of these issues should be taken as reason for the West to find a consolidated approach towards AI surveillance. The fact that globally operating US tech giants have also stopped or restricted their involvement in facial recognition technology might add momentum to the discussion.
Therefore, Western countries have to adopt a threefold approach: first, as seen with the domestic situation in the United States and the EU, Western governments and institutions must find the right approach to regulating the application of AI surveillance systems domestically before they can engage successfully on the international level. The AI legislative proposals expected from the European Commission will be important for setting clear positions against the undemocratic use of AI surveillance tools. In the best case, these proposals will have imitation effects for other like-minded states. The US Congress should answer the calls of major tech companies and civil society and push for federal legislation to overcome the current domestic patchwork of AI rules. However, this will not be a one-time exercise for EU and US authorities, since the fast-paced development of AI applications will require constant and quick adaptations.
Second, Western states need to possess and present an alternative, human- and democracy-friendly model of AI surveillance to counter the trend of digital authoritarianism, so that other governments have a genuine choice. Here, potential disagreements between the EU and the United States concerning AI have to be resolved. It is well known that both actors have different notions about the appropriate degree of AI regulation, with the latter preferring a more ‘laissez-faire’ approach. However, there are indications that, especially for AI surveillance, the disagreements are not too great. For instance, the recently adopted Statement on Artificial Intelligence and Human Rights from the Freedom Online Coalition (FOC), of which the United States and several EU states are members, notes the importance of the preservation of human rights in light of AI developments.
In general, international collaboration on restricting the areas for AI surveillance is therefore critical, and the increase of AI surveillance tools in the midst of the pandemic is a unique opportunity to further drive the conversation in international fora. Public leaders might begin this discussion by building on the already existing work of other organizations and countries—for example the “Recommendation of the Council on Artificial Intelligence” by the Organization for Economic Co-operation and Development (OECD). Other blueprints or references are the EU’s white paper on AI and the UN Roadmap for Digital Cooperation which foresees the “multi-stakeholder efforts on global AI cooperation […] and use of AI in a manner that is trustworthy, human rights-based, safe and sustainable, and promotes peace.”
According to a study by the German think tank Stiftung Neue Verantwortung, however, the current situation is rather characterized by a “complex web of stakeholders that shape the international debate on AI ethics [and by] outputs that are often limited to non-binding, soft AI principles.” Besides expert groups on the EU level, there is, for example, the Ad Hoc Committee on Artificial Intelligence (CAHAI) of the Council of Europe in the wider European context, the two expert groups at the OECD level (comprising European states and the United States), and the Ad Hoc Expert Group for the Recommendation on the Ethics of AI at UNESCO. In none of these fora are all three actors—the EU, the United States, and China—members together. Therefore, finding the right way for a dialogue and developing consensus on these issues will be an enormous challenge due to the different approaches among the countries and the emerging triad of digital autocrats, fragile democracies or ‘digital deciders’, and liberal democrats.
However, with signals from the US administration pointing to more engagement in multilateralism, the EU and other like-minded states should at least among themselves agree on the basic principles surrounding AI surveillance in order to convince others to adhere to their notions. Among the mentioned international fora, the OECD seems best suited due to its current work on its AI principles and observatory as well as its membership. Furthermore, similar to cyber consultations in which states exchange views on opportunities and threats in cyberspace, ‘strategic AI consultations’ between foreign ministries of like-minded states can help to better grapple with the challenge of AI surveillance.
Even though the international debate on AI surveillance is dominated by the dangers it poses to human rights, there are positive examples worth mentioning. AI surveillance tools are used for taming wildfires, and AI recognition tools developed by Microsoft have been repeatedly used to detect and protect endangered species. Another field is the use of AI for medical applications. However, these positive examples have to be expanded, since the scale of their impact is limited compared to digital authoritarian implementations, including the envisaged public mass surveillance of 1.5 billion people by the Chinese government. In a common effort, several ‘tech for good’ areas can be jointly developed, which would also help mature and ‘enshrine’ an alternative to ‘authoritarian tech.’
Concerning AI and data privacy, state governments, private companies, and NGOs should further develop collaboration, standards, and international awareness for privacy-enhancing technologies (PETs). According to the European Union Agency for Network and Information Security (ENISA), PETs refer to “software and hardware solutions, i.e. systems encompassing technical processes, methods or knowledge to achieve specific privacy or data protection functionality or to protect against risks of privacy of an individual or a group of natural persons.” The deployment of PETs, if well-conceived and properly executed, could strike a balance between using AI surveillance for effective crisis management on the one hand and protecting privacy on the other. This privacy-by-design approach has been repeatedly called for by Margrethe Vestager, the Executive Vice-President of the European Commission for a Europe fit for the Digital Age (Competition), and is one of the suggested principles outlined in the EU GDPR. In the United States, Senator Kirsten Gillibrand’s recently envisaged Data Protection Act contains the same themes. Similar to PETs, standards for these purposes could ease many data protection concerns (even though this will have limited effect for facial recognition).
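To make the PET concept more tangible, here is a minimal sketch, assuming a differential-privacy-style approach in Python: only noise-protected aggregate counts are released instead of raw surveillance records. The epsilon value and the count below are invented for illustration and are not drawn from any named system.

```python
# Illustrative privacy-enhancing technique: publish only noisy aggregates
# (differential-privacy style) instead of raw surveillance records.
# The epsilon value and counts are invented for this example.
import numpy as np

def noisy_count(true_count: int, epsilon: float = 0.5) -> float:
    """Add Laplace noise scaled to 1/epsilon; a count query has sensitivity 1,
    so the noise masks the presence or absence of any single individual."""
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# e.g., report roughly how many people passed a checkpoint without exposing logs
print(round(noisy_count(1342), 1))
```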
Third, an approach towards China and other troubling users of AI surveillance tech has to be nuanced: cooperate where possible, but impose restrictive measures such as sanctions if needed.
The discussion in international fora should include China, which has its own notion of AI ethics and regulation (Beijing AI Principles), as well as other authoritarian states. Besides international collaboration, however, governments should further scrutinize companies exporting AI surveillance tools used in human rights abuses. In light of the US-Chinese tech confrontation, companies such as Huawei and ZTE and their supply of surveillance technology as an instrument of political repression have provoked criticism from governments and civil society groups alike. In light of the growing interrelationship between technological advances and possibilities for political repression, public leaders should consider international sanctions and clearly state that punitive actions will be imposed on companies and states in response to human rights abuses, and not for reasons of economic and military competition.
Unequivocally, the private sector must be part of the debate as the endeavor requires a multi-stakeholder approach. Companies should contribute their expertise in developing and handling these technologies and clearly show the benefits and challenges in applying AI surveillance tools. In that context, private leaders can commit themselves to supplying technology only for lawful use, as has already happened to some degree with American tech leaders. In that context, IBM has recently even called on the US Department of Commerce to develop new export rules about “the type of facial recognition system most likely to be used in mass surveillance systems, racial profiling or other human rights violations.”
For like-minded Western countries, finding areas of cooperation with authoritarian states, especially China, will be of great importance. At the same time, however, certain practices that run contrary to the values of human rights and rule of law must be clearly addressed. With the US administration at the helm, a more cooperative spirit in the transatlantic relationship and beyond is expected. Further, the preservation and promotion of democratic values will most probably receive more attention. AI surveillance must come to the fore in the dispute between democracy and authoritarianism. Otherwise, the dystopian scenario proposed at the beginning of this article is only a matter of time.
Also:
Artificial Intelligence and Facial Recognition: Privacy, Ethics and Regulation
This article looks at the growth of facial recognition systems in the UK, the risks of artificial intelligence surveillance, the benefits of healthcare applications and the need for regulation to balance innovation with civil liberties.
Artificial intelligence is changing how societies work, from transport to healthcare, but nowhere is its impact more visible - or more contentious - than in AI facial recognition. AI facial recognition technology, which uses algorithms to identify and verify individuals by analysing their facial features, is being rolled out across public and private life. From policing Britain’s high streets to diagnosing medical conditions, its uses are multiplying fast. But the ethical, legal and social implications are still unclear.
The UK at the Crossroads of AI Surveillance
In September 2024 the UK became one of the first signatories to a landmark European treaty on artificial intelligence, designed to protect democracy, human rights and the rule of law. The move put Britain at the forefront of responsible AI. But at home a different story is unfolding.
The government has announced plans to expand the use of facial recognition systems in the UK, particularly in policing. Ministers say the technology can prevent violent disorder, find missing people and identify suspects. But critics point out there is no comprehensive law to govern such use. Instead, there is a patchwork of non-statutory guidance and regulation, which leaves big gaps in accountability.
Civil liberties groups have gone further, saying Britain is on course to become one of the most heavily surveilled democracies. This tension - between being a global leader in AI ethics and embedding AI surveillance at home - is the dilemma governments face in the digital age.
A Patchwork of Oversight
Currently AI governance in the UK is based on a principles-based approach. Regulators like the Information Commissioner’s Office (ICO) oversee some data use, but there is no law specifically governing artificial intelligence in facial recognition. This fragmented system allows police forces and some local authorities to roll out the technology with minimal oversight.
Figures from the Metropolitan Police in London show that in 2023 over 360,000 faces were scanned during live deployments. Similar pilots have taken place in Essex, North Wales and Hampshire. Supporters say such initiatives help identify suspects quickly. Detractors say they normalise blanket surveillance, often without public knowledge or consent.
Accuracy, Bias and Predictive Policing
The technical issues with AI facial recognition add another layer of complexity. Research in the UK has found significant disparities in accuracy, with higher false positives for black individuals compared to white or Asian counterparts. This mirrors international studies that show racial and gender bias in algorithmic systems.
Beyond identification, some police forces are experimenting with predictive analytics to flag people who may commit crimes. Critics say this is a move from investigating crimes to predicting them, which is an ethical minefield. A coalition of civil rights groups has called for these systems to be banned outright, saying they are incompatible with democratic freedoms.
Surveillance and the Private Sector
It’s not just the public sphere. Private companies in the UK are also experimenting with artificial intelligence surveillance, particularly in retail and workplace management. From monitoring employee attendance to tackling shoplifting, businesses see gains in efficiency and security.
But several cases have shown the risks. In one instance an organisation was found to have illegally processed biometric data of thousands of staff. Elsewhere schools have trialled facial recognition for canteen payments without proper data protection assessments. The ICO has intervened but its powers are limited when global corporations are involved. International firms have challenged penalties and won, leaving regulators struggling to exert control.
This blurring of the lines between state and corporate surveillance is particularly worrying for the public. Surveys show over 50% of UK citizens are uncomfortable with biometric data being shared between police and businesses, yet it’s happening.
Healthcare Applications: Promise and Peril
While the public debate focuses on policing, artificial intelligence in face recognition is also entering healthcare. Hospitals and care homes are exploring its use to improve patient safety, streamline record-keeping and secure access to sensitive areas.
Researchers are also looking at diagnostic uses. Certain genetic conditions leave facial markers which AI tools can identify faster than traditional methods. Other systems aim to assess pain in patients who can’t communicate, such as those with dementia, by analysing subtle facial muscle movements.
The benefits are significant: faster diagnosis, better patient monitoring and more efficient care. But the same issues of bias, consent and privacy apply. Studies show facial recognition tools can be up to a third less accurate for darker-skinned women than for lighter-skinned men. In a healthcare setting, that can mean misdiagnosis or inadequate treatment.
Events, Security and Everyday Use
Another area is event management. Conferences, concerts and sports fixtures are starting to use AI facial recognition for ticketless entry, access control and personalised attendee experiences. Organisers say it’s efficient and safe: guests can check in without physical tickets and security teams can stop unauthorised entry in real time.
The technology also collects more data and gives insights into crowd flows and session popularity. But this expansion into everyday leisure activities raises more questions. How much surveillance is acceptable for convenience? And how should the data gathered at private events be protected and stored?
Privacy, Consent and Civil Liberties
At the heart of the issue is privacy. Faces are not just identifiers, they are biometric signatures that can’t be changed if compromised. Converting them into data creates new risks, from identity theft to tracking without consent.
One of the biggest problems is consent. Many deployments happen without explicit permission from those being scanned. Unlike clicking “accept” on a website, the public has no practical way to opt out when surveillance cameras with AI are in the streets, shops or schools.
Issues surrounding consent put the onus on policymakers to create clear safeguards so individuals can control their own data.
Towards a Regulatory Framework
Given the pace of facial recognition technology adoption, many argue that the UK needs a dedicated law for facial recognition systems. This could include:
Transparency requirements to tell the public when facial recognition is being used.
Consent standards for opt-in models for non-policing uses, like retail or healthcare.
Bias audits to test and certify systems for fairness across demographic groups (see the audit sketch below).
Independent oversight to create a statutory regulator to monitor across sectors.
International cooperation to align rules with global standards to prevent loopholes for multinational companies.
Without these, the UK will undermine the very rights its leaders signed up to on the world stage.
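As an illustration of what the bias audits above could measure, the following Python sketch computes a face-matching system’s false positive rate per demographic group; the evaluation records are invented, and a production audit would use large, representative test sets.

```python
# Minimal bias-audit metric: per-group false positive rate of a hypothetical
# face-matching system. The evaluation records below are invented.
from collections import defaultdict

# Each record: (demographic_group, system_said_match, ground_truth_match)
records = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
]

false_pos = defaultdict(int)   # non-matches the system wrongly matched, per group
negatives = defaultdict(int)   # all ground-truth non-matches, per group

for group, predicted, actual in records:
    if not actual:
        negatives[group] += 1
        if predicted:
            false_pos[group] += 1

for group in sorted(negatives):
    rate = false_pos[group] / negatives[group]
    print(f"{group}: false positive rate = {rate:.0%}")
# group_a: 50%, group_b: 67% -> a gap a fairness certification should flag
```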
The Future of AI Facial Recognition
There’s no denying the potential of AI facial recognition. In healthcare it may revolutionise diagnosis; in events it may redefine convenience; in policing it may reshape public safety. But technology doesn’t exist in a vacuum. Its use reflects our values, priorities and choices.
Britain has a choice: it can pursue the benefits of the technology without the erosion of civil liberties. Policymakers must act fast to make sure innovation is matched by accountability. As the tech becomes more mainstream the debate on privacy, ethics and regulation will only get louder.
Conclusion
AI facial recognition raises big questions for the UK and the world. It offers tools to improve health, security and efficiency, but it also deepens inequality, normalises surveillance and erodes democratic rights.
The task ahead is not to stop innovation but to control it. Regulation, transparency and public conversation are key to making sure technology that recognises us doesn’t end up stripping away the rights that make us who we are.
AI Security Camera and Technology
On a broad scale, artificial intelligence (AI) is a term that refers to machines, software, or technology capable of performing tasks that would typically require human intelligence. These tasks often include demands for learning, reasoning, perception, and problem-solving. Unlike conventional technologies, an AI security camera can process and analyze data with exceptional accuracy, and learn from experience.
Experts predict that by 2029, AI security solutions will reach a value of $60.24 billion. In the commercial security industry, AI-powered surveillance can support a range of functions, such as optimizing video surveillance with AI features.
While there are various types of AI security solutions, most share a similar set of components (a minimal sketch of how they can fit together follows this list):
Machine learning algorithms: Technology which enables a computer system to learn from data, improve its performance over time, and make predictive decisions.
Smart motion detection: Smart intrusion detection and live remote footage can reduce false alarms and increase on-time detection.
Video analytics: Analytical capabilities in an AI system allow the technology to process and understand data, often to predict future events, or provide insights into risks.
Data processing: AI technology can process vast amounts of data much faster than human operators. It can even analyze images, like a person’s face, or examine trends.
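The sketch below (in Python, assuming the OpenCV and NumPy libraries) shows one minimal way these components can interact: frame-differencing motion detection feeding a rolling baseline of “normal” activity. It is illustrative only; commercial AI cameras rely on trained deep-learning models rather than a fixed pixel threshold.

```python
# Minimal sketch of the components above working together: frame-differencing
# motion detection feeding a rolling baseline of "normal" activity.
import cv2
import numpy as np

def motion_score(prev_gray, gray):
    """Fraction of pixels that changed noticeably between consecutive frames."""
    return float(np.mean(cv2.absdiff(prev_gray, gray) > 25))  # 25 is arbitrary

cap = cv2.VideoCapture(0)               # camera index or video file path
ok, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
history = []                            # recent motion scores ("learned" normal)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    score = motion_score(prev, gray)
    history = (history + [score])[-300:]   # keep roughly the last 300 frames
    # Alert only when motion is well above the learned baseline
    if len(history) > 30 and score > np.mean(history) + 3 * np.std(history):
        print("ALERT: unusual motion detected")
    prev = gray

cap.release()
```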
How AI Improves Surveillance and Security
Artificial intelligence is transforming how businesses monitor and protect their facilities. AI in surveillance systems brings automation, accuracy, and speed to every layer of security.
Key advantages include:
Automated threat detection: AI identifies unusual motion, faces, or license plates instantly.
Real-time alerts: The system notifies security personnel or triggers alarms without delay.
Fewer false positives: Algorithms learn normal activity patterns and reduce unnecessary alerts.
Smarter analytics: AI provides valuable data on traffic flow, employee movement, and safety compliance.
Remote access: Teams can view live footage and review events from any connected device.
Infassure installs AI-powered surveillance systems that detect incidents in real time, streamline investigations, and keep businesses protected around the clock. Artificial intelligence can augment your current security team and traditional surveillance systems.
Applications of AI Surveillance Cameras in Commercial Security
In today’s world of digital transformation, innovators are constantly discovering new applications and opportunities for artificial intelligence. AI technology enables operators and business leaders to respond to potential threats faster than ever, and implement strategies for proactive security.
Because these systems can grow more intelligent over time, thanks to machine learning algorithms, their value increases consistently. What’s more, artificial intelligence solutions are highly versatile. They can be incorporated into all kinds of security solutions, from access control systems to alarm systems, CCTV security cameras, and fire detection systems.
Just some of the most exciting applications for AI security include:
Video Surveillance and Facial Recognition
In the video surveillance industry, the use of AI security camera systems allows companies and operators to analyze video footage accurately in real-time. Using computer vision technologies, deep learning algorithms, and machine learning, these solutions can spot unusual activities in an instant, allowing for real-time monitoring and threat analysis.
Biometric technologies also allow AI security cameras to swiftly identify individuals based on facial recognition components. These surveillance cameras can distinguish between objects and people, and even track the movements of certain people around an environment.
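As a rough, hypothetical illustration of the matching step behind such identification, the sketch below compares a probe face embedding against a gallery using cosine similarity. The embeddings here are random stand-ins and the 0.6 threshold is invented; real systems derive these vectors from trained deep networks.

```python
# Rough sketch of one-to-many face matching via embedding comparison.
# The gallery maps identities to reference embeddings; in real systems these
# vectors come from a trained deep network, and 0.6 is an invented threshold.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_face(probe: np.ndarray, gallery: dict[str, np.ndarray],
               threshold: float = 0.6) -> str | None:
    """Return the most similar gallery identity, or None below the threshold."""
    best_id, best_sim = None, threshold
    for identity, reference in gallery.items():
        sim = cosine_similarity(probe, reference)
        if sim > best_sim:
            best_id, best_sim = identity, sim
    return best_id

# Toy usage with random 128-dimensional "embeddings"
rng = np.random.default_rng(0)
gallery = {"person_1": rng.normal(size=128), "person_2": rng.normal(size=128)}
print(match_face(gallery["person_1"] + rng.normal(0.0, 0.1, 128), gallery))
```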
Access Control Systems
In access control systems, biometric technology powered by AI is one of the fastest-growing segments. In an access control system, AI technology can examine data in the form of fingerprints, iris scans, and facial characteristics to identify people with the right to enter a building.
This can allow for a more efficient and secure way to manage access control without the need for traditional keycards and passwords. AI can use biometrics to minimize disruption in fast-paced environments like healthcare and industrial facilities. Plus, AI can be trained to adjust access controls based on factors like the time of day, or current threat levels.
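A hedged sketch of what such context-sensitive access logic might look like follows; the confidence thresholds, business hours, and threat levels are invented, and real systems would tune these per site.

```python
# Illustrative context-aware access decision with invented thresholds.
from datetime import datetime

def access_decision(match_score: float, now: datetime, threat_level: str) -> bool:
    """Grant entry only if the biometric match clears a context-dependent bar."""
    required = 0.80                       # baseline confidence in business hours
    if not (7 <= now.hour < 19):          # stricter outside business hours
        required = 0.90
    if threat_level == "elevated":        # stricter again under elevated threat
        required = max(required, 0.95)
    return match_score >= required

print(access_decision(0.88, datetime(2025, 5, 1, 23, 30), "normal"))  # False
print(access_decision(0.88, datetime(2025, 5, 1, 10, 0), "normal"))   # True
```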
Intrusion Detection and Prevention
AI powered security cameras can be highly adept at learning “normal” patterns of behavior in specific environments, using deep learning algorithms. They can be trained to learn and understand which actions are safe, and which are suspicious in a commercial environment. This means they can instantly alert administrators when unusual activities occur.
In some circumstances, AI systems can also use historical information to predict potential threats and alert individuals to the possibility of a risk occurring. For instance, in an industrial setting, AI powered cameras may show security personnel which times of day or which activities pose the highest level of risk.
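To illustrate the underlying idea, the sketch below uses scikit-learn’s IsolationForest as a stand-in for the proprietary models vendors deploy, trained on synthetic “normal” activity described by hour of day and motion level.

```python
# Sketch of learning "normal" activity and flagging outliers, using
# scikit-learn's IsolationForest in place of a proprietary deep-learning
# model. The historical data is synthetic: [hour_of_day, motion_level].
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal_activity = np.column_stack([
    rng.integers(8, 18, 500),       # activity concentrated in working hours
    rng.normal(0.2, 0.05, 500),     # modest motion levels
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_activity)

# Heavy motion at 3 a.m. should be flagged (-1); a typical reading passes (1)
print(model.predict([[3, 0.9]]))    # expected: [-1] -> suspicious
print(model.predict([[10, 0.2]]))   # expected: [1]  -> normal
```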
Benefits of AI in Commercial Security
Used correctly, and configured to the specific needs of your organization, AI technology can deliver a host of benefits. The right AI security solutions can streamline and improve business operations, mitigate threats, and give businesses a crucial edge in the fight against numerous threats. They can:
Improve Video Accuracy and Efficiency
In any commercial environment, accuracy and efficiency are crucial to mitigating risk. An AI system can help to reduce things like false alarms, as it is often more accurate in differentiating between simple anomalies and genuine threats. Public spaces are open to animals, people, and vehicles, and advanced security camera technology can classify these different types of movement, distinguishing genuine detection events from benign activity.
AI systems use advanced features such as facial recognition, smart motion detection, and multi-view designs for full situational awareness. This speed allows companies to respond to incidents much faster and to rule out false alarms. Security employees can combine multiple cameras and systems to spotlight a person of interest or an escalating event, day or night.
Enable Proactive Threat Prevention
Predictive analytical capabilities built into AI systems allow companies to move from a reactive strategy for security to a proactive approach. By analyzing historical and current data, AI systems can forecast potential risks and security breaches, and suggest preemptive measures.
The insights provided by AI can also help companies to deploy security resources more effectively. They can help teams to understand where additional precautions are necessary, and where to distribute members of security staff.
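In its simplest form, this kind of data-driven deployment can be reduced to ranking hours by historical incident frequency, as in the sketch below; the incident log is invented for illustration.

```python
# Simplest form of "predictive" staffing: rank hours of the day by historical
# incident frequency. The incident log below is invented for illustration.
from collections import Counter

incident_hours = [22, 23, 23, 1, 2, 22, 3, 23, 22, 2]  # hour of each past incident
counts = Counter(incident_hours)
total = sum(counts.values())

risk_by_hour = {hour: counts.get(hour, 0) / total for hour in range(24)}
watchlist = sorted(risk_by_hour, key=risk_by_hour.get, reverse=True)[:3]
print(f"Highest-risk hours for extra staffing: {watchlist}")  # [22, 23, 2]
```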
Optimize Security Video Scalability
As businesses and threats evolve, AI security systems implemented into your commercial office or business environment can help you scale your strategy to suit changing needs. The insights offered by these video tools allow businesses to use their available resources more efficiently, reducing costs.
Likewise, the flexibility of AI security systems means they can evolve and adapt to suit the evolutions in your business. They can be scaled and transformed to address different security needs in various industries and environments.
Challenges and Considerations with AI in Security
Though the potential benefits of AI security systems are phenomenal, there are always risks to consider when using artificial intelligence. For instance, the use of AI in video surveillance and biometric systems raises potential ethical issues surrounding privacy and data protection. Companies need to ensure they’re securing the sensitive data delivered to these systems in a compliant manner.
Furthermore, like any technology, AI software isn’t immune to risk: these systems can be susceptible to cyberattacks. Plus, there are concerns about the inherent biases that can be built into AI algorithms, which could lead to problematic security practices. Some systems can even adopt biases over time, based on the information they receive.
To ensure you’re making the most out of your AI security system, it’s important to ensure you’re installing and configuring the right solution for your organization. Additionally, it’s crucial to ensure you have the resources available to consistently monitor the performance of your technology.
Tracking performance can help you identify potential risks and inconsistencies before they have a negative impact on your security processes.
The Future of AI in Commercial Security
As artificial intelligence solutions grow increasingly sophisticated, with the rise of more advanced algorithms, large language models, and new applications, demand for these tools will continue to grow. For instance, studies show that around 53% of companies are already using AI for IoT-based security, connecting algorithms with devices and sensors throughout a commercial enterprise.
AI systems are also becoming more adept at analyzing biometrics with computer vision, which means future devices will be more effective at tracking and recognizing different individuals. As we move into the future of artificial intelligence, the tools used for AI security will grow more sophisticated.
Already, we’re beginning to see the potential of solutions like autonomous response systems, which can mitigate threats without human input. Plus, the flexibility of AI algorithms is allowing companies to build more specialized systems for distinct requirements. Other trends include:
AI in Industry 4.0: The use of artificial intelligence in industrial environments, to enhance autonomous vehicles, quantum computing, the internet of things, and energy storage.
Sophisticated access control: Enhanced biometrics and real-time monitoring capabilities in access control, to minimize disruptions and improve security standards.
Autonomous robots: Robotic systems capable of enabling automatic surveillance and instant response mechanisms for potential threats.
Artificial intelligence is set to revolutionize the commercial security industry, providing companies with more comprehensive, proactive, and intelligent forms of protection.
Staying up to date with the evolving AI landscape helps organizations maintain a comprehensive, robust security system that addresses modern threats.
Transforming Security with AI
The security, technology, and business sectors are constantly evolving. An artificial intelligence security system can defend commercial environments against a range of sophisticated threats. Going forward, AI will revolutionize how companies address their security needs, from surveillance monitoring to access control.
A new report from The Carnegie Endowment for International Peace finds that at least 75 countries are using facial recognition and other forms of artificial intelligence in order to surveil massive numbers of people.
A growing number of states are deploying advanced AI surveillance tools to monitor, track, and surveil citizens. Carnegie’s new index explores how different countries are going about this.
Artificial intelligence (AI) technology is rapidly proliferating around the world. Startling developments keep emerging, from the onset of deepfake videos that blur the line between truth and falsehood, to advanced algorithms that can beat the best players in the world in multiplayer poker. Businesses harness AI capabilities to improve analytic processing; city officials tap AI to monitor traffic congestion and oversee smart energy metering. Yet a growing number of states are deploying advanced AI surveillance tools to monitor, track, and surveil citizens to accomplish a range of policy objectives—some lawful, others that violate human rights, and many of which fall into a murky middle ground.
In order to appropriately address the effects of this technology, it is important to first understand where these tools are being deployed and how they are being used. Unfortunately, such information is scarce. To provide greater clarity, this paper presents an AI Global Surveillance (AIGS) Index—representing one of the first research efforts of its kind. The index compiles empirical data on AI surveillance use for 176 countries around the world. It does not distinguish between legitimate and unlawful uses of AI surveillance. Rather, the purpose of the research is to show how new surveillance capabilities are transforming the ability of governments to monitor and track individuals or systems. It specifically asks:
Which countries are adopting AI surveillance technology?
What specific types of AI surveillance are governments deploying?
Which countries and companies are supplying this technology?
Key Findings
AI surveillance technology is spreading at a faster rate to a wider range of countries than experts have commonly understood. At least seventy-five out of 176 countries globally are actively using AI technologies for surveillance purposes. This includes: smart city/safe city platforms (fifty-six countries), facial recognition systems (sixty-four countries), and smart policing (fifty-two countries).
China is a major driver of AI surveillance worldwide. Technology linked to Chinese companies—particularly Huawei, Hikvision, Dahua, and ZTE—supplies AI surveillance technology in sixty-three countries, thirty-six of which have signed onto China’s Belt and Road Initiative (BRI). Huawei alone is responsible for providing AI surveillance technology to at least fifty countries worldwide. No other company comes close. The next largest non-Chinese supplier of AI surveillance tech is Japan’s NEC Corporation (fourteen countries).
Chinese product pitches are often accompanied by soft loans to encourage governments to purchase their equipment. These tactics are particularly relevant in countries like Kenya, Laos, Mongolia, Uganda, and Uzbekistan—which otherwise might not access this technology. This raises troubling questions about the extent to which the Chinese government is subsidizing the purchase of advanced repressive technology.
But China is not the only country supplying advanced surveillance tech worldwide. U.S. companies are also active in this space. AI surveillance technology supplied by U.S. firms is present in thirty-two countries. The most significant U.S. companies are IBM (eleven countries), Palantir (nine countries), and Cisco (six countries). Other companies based in liberal democracies—France, Germany, Israel, Japan—are also playing important roles in proliferating this technology. Democracies are not taking adequate steps to monitor and control the spread of sophisticated technologies linked to a range of violations.
Liberal democracies are major users of AI surveillance. The index shows that 51 percent of advanced democracies deploy AI surveillance systems. In contrast, 37 percent of closed autocratic states, 41 percent of electoral autocratic/competitive autocratic states, and 41 percent of electoral democracies/illiberal democracies deploy AI surveillance technology. Governments in full democracies are deploying a range of surveillance technology, from safe city platforms to facial recognition cameras. This does not inevitably mean that democracies are abusing these systems. The most important factor determining whether governments will deploy this technology for repressive purposes is the quality of their governance.
Governments in autocratic and semi-autocratic countries are more prone to abuse AI surveillance than governments in liberal democracies. Some autocratic governments—for example, China, Russia, Saudi Arabia—are exploiting AI technology for mass surveillance purposes. Other governments with dismal human rights records are exploiting AI surveillance in more limited ways to reinforce repression. Yet all political contexts run the risk of unlawfully exploiting AI surveillance technology to obtain certain political objectives.
There is a strong relationship between a country’s military expenditures and a government’s use of AI surveillance systems: forty of the world’s top fifty military spending countries (based on cumulative military expenditures) also use AI surveillance technology.
The “Freedom on the Net 2018” report identified eighteen countries out of sixty-five that had accessed AI surveillance technology developed by Chinese companies. The AIGS Index shows that the number of those countries accessing Chinese AI surveillance technology has risen to forty-seven out of sixty-five countries in 2019.
Notes
The AIGS Index presents a country-by-country snapshot of AI tech surveillance with the majority of sources falling between 2017 and 2019. Given the opacity of government surveillance use, it is nearly impossible to pin down by specific year which AI platforms or systems are currently in use.
The AIGS Index uses the same list of independent states included in the Varieties of Democracy (V-Dem) project with two exceptions, totaling 176. The V-Dem country list includes all independent polities worldwide but excludes microstates with populations below 250,000.
The AIGS Index does not present a complete list of AI surveillance companies operating in particular countries. The paper uses open source reporting and content analysis to derive its findings. Accordingly, there are certain built-in limitations. Some companies, such as Huawei, may have an incentive to highlight new capabilities in this field. Other companies have opted to downplay their association with surveillance technology and have purposely kept documents out of the public domain.
A full version of the index is available online.