If you move in defense technology circles, then the name Paul Scharre is one you’ll recognize. His first book, Army of None: Autonomous Weapons and the Future of War, became a must-read on autonomous military systems, and his work with the Center for a New American Security has kept those themes front and center. Breaking Defense is happy to bring our readers an excerpt from his new book, Four Battlegrounds: Power in the Age of Artificial Intelligence, released today.
China is not just forging a new model of digital authoritarianism but is actively exporting it. In 2018, Zimbabwe signed a strategic partnership with the Chinese company CloudWalk to build a mass facial recognition system consisting of a national database and intelligent surveillance systems at airports, railways, and bus stations. The deal will help “spearhead our AI revolution in Zimbabwe,” according to former Zimbabwean ambassador to China Christopher Mutsvangwa. “An ordinary Zimbabwean probably won’t believe that you can buy your groceries or pay your electricity bill by scanning your face,” Mutsvangwa said, “but this is where technology is taking us and as the Government, we are happy because we are moving with the rest of the world.”
At stake in the deal, which is part of China’s Belt and Road Initiative, is more than just money. Zimbabwe agreed to let CloudWalk send data on millions of faces back to China, helping CloudWalk improve its facial recognition systems with darker skin tones. As CloudWalk CEO Yao Zhiqiang pointed out, “The differences between technologies tailored to an Asian face and those to an African face are relatively large, not only in terms of colour, but also facial bones and features.” Training on African faces could help make CloudWalk’s facial recognition algorithms more robust—and more marketable around the globe. Data is the real currency of power in the age of AI.
The Zimbabwe deal is part of the massive global proliferation of Chinese surveillance technology along with Chinese-style laws and policies. According to University of Texas professor Sheena Greitens, Chinese surveillance and policing technology is now in use in at least eighty countries around the globe and on every continent except Australia and Antarctica. Greitens has described China as the “index case” for a new set of technologies, laws, and policies spreading around the globe. Kara Frederick of the Heritage Foundation has described this suite of AI technologies as “the autocrat’s new toolkit,” one that is empowering authoritarians and endangering freedom. Left unchecked, the spread of China’s model of AI-enabled repression poses a profound challenge to global freedoms and individual liberty.
Chinese firms have been aggressively selling their surveillance technology worldwide. In Malaysia, the Chinese firm Yitu has sold facial recognition bodycams to police. In Ecuador, China’s state-owned China National Electronics Import and Export Corporation (CEIEC) and Huawei, a Chinese telecommunications firm, built a national surveillance network of over 4,000 cameras feeding data to the police and the country’s domestic intelligence service. Following Ecuador’s lead, Venezuela adopted a similar system, with the goal of fielding 30,000 cameras. Brazilian lawmakers, wowed after a trip to China, announced in 2019 that they would pursue facial recognition cameras linked to police in airports, train stations, subways, and high pedestrian traffic areas. Facial recognition was reportedly used in Brazil’s Carnival festival in 2019, and by 2020 was deployed to police in six cities across the country. According to news reports, São Paulo’s police database includes 32 million faces. Singapore has been soliciting bids from Chinese companies for a 110,000-camera network atop public lampposts using facial recognition. The project, called “Lamppost-as-a-Platform,” aims to use AI-enabled cameras to “perform crowd analytics” and “count and analyse crowd build-ups,” according to Singapore’s government. In Angola, Percent Corporation, a Chinese data intelligence company, built the government an intelligent data analysis and visualization system that includes biometric information, such as facial images and fingerprints, along with birth, education, and marriage records. 
Percent Corporation’s website boasts that its “data processing, analysis, and decision-making” solutions have been applied to “more than 20 countries across Asia, Africa, and Latin America.” Other Chinese firms selling surveillance cameras abroad include Hikvision and ZTE, and other countries using Chinese surveillance technology include Bolivia, Germany, Pakistan, the United Arab Emirates, Uzbekistan, and Venezuela.
Huawei, one of China’s tech “national champions” and the world’s largest provider of telecom network equipment, has been particularly active in selling “safe city” solutions around the globe. Huawei has deployed 2,000 cameras to Nairobi, Kenya, and is helping establish a surveillance system of 1,000 cameras equipped with facial recognition in Belgrade, Serbia. Huawei describes its “safe city solutions” as using “AI, cloud computing, big data, and IoT” to build “interconnected, intelligent, and collaborative safe cities.” Its “Command, Control, Communication, Cloud, Intelligence, Surveillance, and Reconnaissance [C4ISR]” solutions “enable advance prevention, precise resource allocation, efficient analysis, visualized command, and efficient coordination among multiple departments.” The goal, according to Huawei, is to “help governments reduce crime rates and prevent and respond to crises more effectively.” The products, as advertised, enable significant government surveillance. Huawei claims its “All-Cloud smart Video Cloud Solution . . . supports network-wide distributed smart analysis, and allows videos to be used not only for surveillance but also for generating actionable intelligence, taking city safety to a new level.” Huawei’s video surveillance technology can reportedly track people and cars, detect “abnormal behavior” such as loitering, automatically determine the size of crowds, and send authorities automated alerts.
According to Huawei, its safe city technology has been applied in “700 cities across more than 100 countries,” including Brazil, Chile, Colombia, Côte d’Ivoire, Kenya, Mexico, Pakistan, Saudi Arabia, Serbia, Singapore, South Africa, Spain, Thailand, and Turkey. While Huawei has not publicly provided a list of all 100 countries, news reports indicate additional countries with Huawei safe city projects include Angola, Azerbaijan, France, Germany, Italy, Kazakhstan, Laos, Russia, Turkey, Uganda, and Ukraine. According to Huawei, over 1 billion people are served by its safe city technology.
Researchers at the Carnegie Endowment for International Peace compiled an AI Global Surveillance Index, mapping AI surveillance technology around the globe, and found that “Chinese companies—led by Huawei—are leading suppliers of AI surveillance around the world.” Steven Feldstein, who led the AI Global Surveillance Index project, wrote:
Huawei is the leading vendor of advanced surveillance systems worldwide by a huge factor. Its technology is linked to more countries in the index than any other company. It is aggressively seeking new markets in regions like sub-Saharan Africa. Huawei is not only providing advanced equipment but also offering ongoing technological support to set up, operate, and manage these systems.
Huawei’s global activities have come under fire on a number of fronts. Huawei provided telecommunications equipment for the African Union’s new headquarters building in Ethiopia, which was financed by the Chinese government, and in 2018 the French paper Le Monde revealed that data was being secretly transferred from the building every night between midnight and 2 a.m. to servers in Shanghai. A subsequent sweep for bugs found hidden microphones under desks and in the walls. In a published statement, Huawei stated,
“Allegations about impropriety with our customer, the African Union (AU) are completely unsubstantiated, and we vehemently reject any such claims.” In 2019, the Wall Street Journal reported that Huawei technicians helped the governments of Uganda and Zambia spy on political opponents. In response to cybersecurity concerns, a number of countries have either outright prohibited or effectively banned Huawei from their future 5G wireless telecommunications networks, including Australia, the Czech Republic, Denmark, Estonia, India, Japan, Latvia, Poland, Romania, Sweden, the United Kingdom, and the United States.
More troubling than the spread of Chinese surveillance technology has been China’s export of its laws and policies for domestic surveillance. According to Freedom House, China has held training sessions and seminars with over thirty countries on cyberspace and information policy. Examples include a two-week “Seminar on Cyberspace Management” held in 2017 for officials from countries participating in China’s Belt and Road Initiative, a Chinese global infrastructure development initiative that includes more than sixty nations. In 2018, journalists and media officials from the Philippines visited China to learn about “new media development” and “socialist journalism with Chinese characteristics.” Similar Chinese media conferences have brought in representatives from Egypt, Jordan, Lebanon, Libya, Morocco, Saudi Arabia, Thailand, and the United Arab Emirates. At the government-run Baise Executive Leadership Academy in southern China, over 400 government officials from southeast Asian countries have been trained in “China’s governance and economic development model,” including how to “guide public opinion” online.
In Tanzania, Uganda, and Vietnam, restrictive media and cybersecurity laws closely followed Chinese engagement. Zimbabwe’s government, whose officials attended Chinese seminars, has pushed for a sweeping cybersecurity law that would strengthen surveillance and clamp down on internet freedoms. While the global spread of Chinese technology helps China gain access to new datasets as well as inroads for spying abroad, it is the social “software” of laws and policies that help China export its evolving model of high-tech authoritarianism.
The proliferation of Chinese-style state surveillance is due to a number of factors. First among these are the desire of Chinese companies to make money and the international demand for surveillance networks. Autocratic regimes looking to secure power at home may view China’s model of digital authoritarianism favorably, but they are not alone in desiring greater surveillance. According to Carnegie’s AI Global Surveillance Index, 51 percent of advanced democracies use AI surveillance systems. London is the third most heavily surveilled city in the world. After all, there are legitimate uses of surveillance technology for policing and public safety in every country. Nor are Chinese firms the only ones exporting surveillance technology worldwide. While Chinese companies, especially Huawei, lead the pack, U.S. firms IBM, Palantir, and Cisco have all sold AI surveillance technology to multiple countries.
The Chinese government has hardly adopted a hands-off approach to its companies’ overseas engagement, however. The government has helped put wind in the sails of its companies by offering loans to subsidize digital infrastructure projects, making them more affordable for developing nations. These efforts are not merely about making money but are part of China’s broader push for greater geopolitical influence under Xi Jinping’s leadership. A cornerstone of China’s international engagement is the Belt and Road Initiative, launched in 2013, which consists of investments in building ports, railways, highways, energy pipelines, and digital infrastructure projects across Asia, Africa, and Europe. The effort harkens back to the original Silk Road, the Eurasian trade routes that connected China to the rest of the world from 130 BCE to the fifteenth century. Today’s Belt and Road is similarly aimed at deepening China’s linkages with other economies, as well as extending Beijing’s political influence. Spending estimates vary, but China has likely spent hundreds of billions of dollars on overseas construction and investment. The Digital Silk Road is the digital tech component of Belt and Road, encompassing technologies such as AI, safe cities, cloud computing, 5G wireless networks, and other “smart city” initiatives to help build modern, connected urban areas. The motivation for these efforts is expanding China’s political and economic influence, and part of that effort is exporting China’s model of governance.
One of Beijing’s foreign policy goals is to secure Chinese Communist Party power by making “a world safe for autocracy,” according to Cornell professor Jessica Chen Weiss. This does not look like a Soviet-style campaign of fomenting communist revolutions around the world. Instead, on a variety of fronts, Beijing seeks to weaken existing institutions, laws, and norms of democracy and freedom. Weiss has written: “China’s actions abroad have . . . made the world safer for other authoritarian governments, and undermined liberal values. But those developments reflect less a grand strategic effort to undermine democracy and spread autocracy than the Chinese leadership’s desire to secure its position at home and abroad.” China’s export of surveillance technology helps to normalize its own model of techno-authoritarianism.
Too often, Beijing’s arguments for illiberal governance have met a receptive audience in autocrats or autocratic-leaning leaders who have similar goals. Since the mid-2000s, the world has been experiencing a “wave of autocratization,” with authoritarian leaders tightening their grip and democracies experiencing “democratic backsliding,” such as reducing checks on executive authority. These trends have been seen in countries as diverse as Brazil, Burundi, Hungary, India, Russia, Serbia, Turkey, and Venezuela. Part of this trend has been the rise of “digital dictators,” who use social media, censorship, surveillance, and other digital tools to control the media, repress the population, and spread regime propaganda. China is not foisting its model on an unwilling world. Many countries are all too happy to emulate China’s example of how to suppress freedoms and tighten control over their populations.
AI is being used for security applications in democracies too. After landing at Dulles airport in Washington, DC, I had my face scanned by U.S. Customs and Border Protection (CBP) at the border to verify my identity. The difference isn’t the technology, per se, but how it is used. In democratic countries, the government’s use of AI surveillance technology is subject to the rule of law and checks and balances within the country’s political system. At the Dulles checkpoint, a TV screen explained to travelers how long CBP retains their personal information (twelve hours for U.S. citizens and fourteen days for certain foreign travelers). The CBP website lists all of this information openly and publicly online, including which border checkpoints use biometric identification. Most importantly, the laws governing CBP’s activities are written by elected representatives of the people.
Democracies have other checks on the government’s powers as well. Independent media outlets shine a light on government activity. I knew about the facial recognition system at Dulles long before I ever went through the checkpoint because I’d read about it in the Washington Post. And if concerned citizens think the government is abusing its powers, they can file suit and take the government to court, where their case will be heard by an independent judiciary. Some concerned citizens have done so. The United States has seen lawsuits and robust public debates about the use of facial recognition, especially by law enforcement. Police use of facial recognition has sparked a grassroots backlash in the United States, with multiple cities and states banning or limiting law enforcement use. The American Civil Liberties Union (ACLU) and other civil society groups have filed lawsuits against a slew of government agencies, including the FBI, DEA, CBP, ICE, TSA, the Department of Justice, and the Department of Homeland Security. Civil society groups have criticized facial recognition, along with other technologies that aid in corporate or government surveillance and erode personal privacy. Even tech company leaders have stepped forward to say government regulation is needed, with Microsoft and Amazon calling for government regulation on facial recognition.
Europe has taken a different route than the United States, emphasizing preemptive regulation of technology. The European Union’s General Data Protection Regulation (GDPR), which covers data privacy in the European Union and European Economic Area, is an example of Europe’s approach. The first regional regulatory regime for data privacy, the GDPR has become the de facto global standard that companies must comply with and other nations must at least consider when crafting their own standards. While the U.S. government has taken a much more laissez-faire attitude to tech regulation, with members of Congress dragging tech leaders like Facebook’s Mark Zuckerberg in for a public browbeating but passing little in the way of substantive regulation, Europe has leaned into regulating technology. American business executives have decried Europe’s model, arguing too much regulation could “strangle business,” and it is certainly true that overly burdensome regulations could harm innovation. However, the GDPR has given Europe a first-mover advantage in establishing global norms for privacy regulation. When I met with Chinese lawyers debating the contours of a potential new consumer data privacy law for China, the GDPR was the default starting point for the conversation.
Scholar Anu Bradford has referred to Europe’s approach as a “race to the top” for regulatory standards, and Europe is aiming for the same approach with artificial intelligence. European bodies such as the European Commission have begun developing AI regulations to balance the many societal challenges AI brings in terms of safety, security, privacy, health, productivity, and worker and consumer protection.
While European and American sentiments may differ on the degree to which regulation is desirable, both have a common starting point: they are grounded in democratic processes that grant their approaches legitimacy. The same cannot be said of China or other authoritarian regimes in which the citizens do not get a vote. There are meaningful differences between how technology is used in different nations around the world, but not all approaches are created equal. In democratic nations, choices about how to use AI-enabled surveillance technology such as facial, voice, or gait recognition are made through a dynamic interplay between citizens, the government, the media, and civil society. This process is essential for all stakeholders to get a voice and to arrive at a regulatory approach that balances competing interests across society. Europeans and Americans may arrive at different answers to these questions, but both answers are legitimate if the processes that developed them are inclusive, transparent, and representative of society. In authoritarian regimes in which the media is censored, human rights activists are imprisoned, and citizens aren’t allowed to openly express their discontent with the government, the type of open debate that is needed about technology is silenced by the government before it can even begin. Citizens of China and other authoritarian regimes can’t appeal to elected representatives, sue the government, freely self-organize, get a fair hearing in an independent judiciary, or learn about government abuses from a free press.
The problem with AI isn’t just that the technology is being adopted differently in different countries. After all, autocratic regimes abuse simple technologies in ways big and small, from police batons to jail cells. The problem is that AI can supercharge repression itself, allowing the state to deploy vast intelligent surveillance networks to monitor and control the population at a scale and degree of precision that would be impossible with humans. AI-enabled control is not only repressive, but further entrenches the system of repression itself. AI risks locking in authoritarianism, making it harder for the people to rise up and defend their freedoms.
The spread of AI surveillance technologies has the potential to tilt the global balance between freedom and authoritarianism. AI is already being used in deeply troubling ways, and those uses are spreading. Democratic governments and societies need to push back against these illiberal uses of AI by working to establish global norms for lawful, appropriate, and ethical uses of AI technologies like facial recognition. One of the challenges in doing so is that there is not (yet) a democratic model for how facial recognition or other AI technologies ought to be employed. China has pioneered a model for using AI for repression, but democratic states don’t have a ready-made alternative for how AI should be used that protects individuals’ privacy and civil liberties. It’s easy to say that democracies should move faster and come up with an alternative approach, but one of the challenges in doing so is that the democratic process for grappling with new technologies is, by design, slow, messy, and chaotic. A fair and legitimate process for establishing laws and policies for facial recognition and other technologies is one that takes input from a variety of stakeholders and that involves give and take among different elements of society. It would be easy for a government to dictate by fiat how facial recognition technology should be used. The outcome, however, may not be one that benefits society. The messy process playing out in the United States, which involves grassroots movements, municipal and state-level policy, entrepreneurship, and lawsuits from organizations like the ACLU, might be slower but is more likely, in the long run, to lead to policy outcomes that balance the interests of those across society. Good governance is not always quick governance.
In the meantime, the global spread of AI surveillance technologies continues, including in many countries that lack the institutional mechanisms for checks and balances that exist in democracies. The quicker that democracies can come together to develop a privacy-preserving model for how surveillance technology ought to be used, the sooner they can effectively push back against the growing wave of authoritarian uses of AI that threatens global freedoms.