
AI and Human Rights: Surveillance State vs Innovation

With the latest efforts in the EU to create a regulatory framework that would balance the risks and benefits of AI, the discussion on how to protect citizens’ fundamental freedoms and rights while maintaining leadership in technological innovation has intensified. Some argue that protections against privacy infringements, discrimination, and biases within different AI applications may stifle innovation in a sphere that has traditionally seen little regulation. 
 
This is especially true when it comes to competing against states like China, where economic interests and the drive for technological dominance often prevail over human rights. With external aggression at its border, Ukraine needs to harness frontier technologies for its national security. Ukraine's vibrant civil society and human rights organizations can play a vital role in ensuring that human rights are not neglected in this process.
 
With this in mind, DRI Ukraine partnered with the Committee on the Development of Artificial Intelligence under the Ministry of Digital Transformation of Ukraine to organize the roundtable discussion "AI and Human Rights: Surveillance State vs Innovation". It was held online on 5 August 2021 and streamed in both English and Ukrainian.
 
The main thematic lines were:

Is AI ethical enough? AI ethics-washing vs Human Rights by default; 
Balancing between regulation and innovation; 
Race against China’s technological dominance: human rights as an advantage rather than a holdback; 
AI independence for an emerging country; 
The role of human rights advocacy in addressing risks of AI. 
 
Speakers: 

Jesse Lehrke, Research Coordinator, Digital Democracy, Democracy Reporting International 
Vitaliy Goncharuk, Head of the Expert Committee on the Development of Artificial Intelligence of Ukraine 
Matt Sheehan, Fellow of the Paulson Institute’s think tank, MacroPolo 
Kateryna Mitieva, Media and Communication Officer, Amnesty International Ukraine 
Moderated by Uliana Poltavets, Project Coordinator, DRI Ukraine
 
Last year, the government of Ukraine adopted the Concept on the Development of Artificial Intelligence until 2030, followed a couple of months ago by the adoption of the Action Plan. The Concept was developed by the Expert Committee on AI Development in wide consultation with industry, experts, and civil society, and it made Ukraine one of only a few dozen countries with a national AI strategy. Prioritizing artificial intelligence at the national level means joining a market that is expected to grow to 15 trillion dollars by 2030. It also means that we have a chance to become globally competitive not only in natural resource-intensive industries but also in the sectors that will dominate the global economy of the future. And finally, it has great implications for our national security, as AI solutions are being integrated at a fast pace into weapons, national intelligence, cybersecurity, and more. So, for an emerging country like Ukraine, harnessing this breakthrough technology, which already surrounds us every day in our mundane activities, is not just important; it is a matter of survival. With Russian aggression at the border, with our economy dependent on exports of natural resources and imports of the products of intelligent work, and with our best talents leaving the country not just in search of a better life but because they cannot fulfil their potential here, innovation is not just a desire; it is our obligation.
 
Vitaliy Goncharuk, Head of the Committee on AI Development, spoke on AI independence for an emerging country. He started by laying out the geopolitical competition in AI between five main centers – the USA, China, India, the EU, and the Russian Federation. These countries have complex relations with each other (the US vs China, the EU allied with the US). They work on different tech platforms and AI technologies: social media (WeChat vs Facebook), defense tech and products (tanks, missiles, drones, etc.), decision-support systems (Palantir vs Mininglamp), and infrastructure (smart cities, IoT, etc.). Through these platforms, these countries collect data. In addition to competing for data, they compete for human capital (students, tech professionals) and on regulation (whether to regulate, what to regulate, or not to regulate at all). Critical areas for AI independence are natural language processing, defense and surveillance technologies, education in AI, data collection, reasonable regulation, locally based successful AI businesses, and social media (control of deepfakes, etc.).
 
For an emerging country, the strategy for approaching AI-based tech from other countries would be to identify the most critical technologies and develop them in-house (through government-owned companies); for less critical tech, to rely on local R&D and certification of foreign technology; and for all other tech, to license it from other countries.
 
In the future, Vitaliy believes, governments will start to control local data and require global players to open-source all algorithms created using local/national data. They will also invest in AI certification and testing systems and in standardization. Some critical areas will probably be deregulated. For emerging countries to be independent in AI, more in-house AI development will take place, especially in defense and other sensitive areas.
 
Answering a question about the metaverse, a concept that describes an immersive digital world whose development Facebook recently announced, Vitaliy spoke about the human rights of "virtual" people. First of all, the impact of the metaverse will not be felt for another 5-10 years. The metaverse will probably offer people more freedoms, even as we try to replicate real-world standards within it. However, real issues such as privacy or children's rights have to be thought through now, at the design stage.
 
Talking about governance and how its fabric is changing under emerging technologies, Vitaliy distinguished several levels of decision-making. At the operational level, automated decision-making will definitely increase, and there will be less need for clerks doing routine work. Governments running old systems, i.e., relying on humans as the key workforce, will be less competitive than governments that use AI, and this is not just about geopolitics but also about cost efficiency and budgets. At the "power" level of governance, less will depend on a single human (politician) and more on the architecture of the system and the ability of the political team to think abstractly.
 
The topic of China has been in the news in Ukraine for some weeks now, due to Ukraine's sudden change of foreign policy regarding the Uighurs, a Muslim ethnic minority in China. What has been less in the news are the extraordinary security measures used by the Chinese government: elaborate surveillance systems, identification scanners, spyware, all powered by AI. Essentially, the region is being turned into a testing facility for social control. And this comes on top of many other stories about forms of AI used by the Chinese government to monitor its own citizens. Nobody really knows the actual numbers, but it is estimated that in 2020 China may have spent up to $70 billion on AI – a figure shared by a top US Air Force general, and one that covers not only research in general but specifically military AI. China obviously has access to a gigantic amount of data collected from its citizens. Its global share of research papers in the field of AI surpasses that of any other country, and it consistently files more AI patents than any other country. So, clearly, what China is sacrificing in human rights, it wants to balance out with innovation, by becoming a global leader in AI and beyond.
 
Matt Sheehan, Fellow of the Paulson Institute's think tank, MacroPolo, started by describing what gives China an edge in AI: its huge population, its access to data, its comprehensive regulation, and constant surveillance. However, data alone does not guarantee success; it is far more multi-faceted and complicated than other comparable resources, such as oil. Chinese companies have a lot of data, but it mostly covers the Chinese population, whereas American companies like Google operate globally and more holistically. The field of AI is also shifting from data intensity to compute intensity. Human rights in AI, however, is a matter of the lens through which you look: it is unreasonable to expect governments, especially in countries like China, to make changes on their own. What is feasible is internationalizing such efforts, since AI transcends national borders. People increasingly distrust AI-based companies, and it is going to take a lot of time and effort to win that trust back, especially for China and Chinese companies. That is when having controls for human rights, pushing no agenda or propaganda, and advancing some privacy oversight becomes an advantage rather than a drawback for these companies. But foreign governments have leverage over Chinese companies too – they can ban their products or impose sanctions, making the companies lose huge markets. Speaking about emerging countries competing for talent, Matt said it is not merely a question of money but also of whether these experts can do cutting-edge research at home rather than in the US. That is where governments can step in, as China did, to help establish research centers and labs that attract talent, as well as to target star researchers who bring big teams with them.
 
The next topic on the agenda was the new regulation put forward by the European Union. The so-called Artificial Intelligence Act, en route to becoming something of an industry disruptor the way GDPR was in its time, has been in the works for a few years and was recently put up for consideration. This extensive framework lays out a multi-tier structure of risks posed by different AI systems, which accordingly require different levels of regulation, up to the outright ban of some. By doing this, the EU hopes to become the focal point for ethical AI and for the rules and norms that would govern this area. This, of course, directly contradicts the philosophy upheld by many in the industry, both here in Ukraine and in Silicon Valley – that emerging technology should be left unregulated. It also runs in parallel to the attempts by the so-called big tech companies – Google, Facebook, Apple, Amazon, and many others – to come up with ethical principles to be applied to their technologies. It is a common view among experts that many of these efforts come as a publicity response to scandals, privacy crises, and infringements on democratic discourse and processes such as elections caused by these technologies, as well as an advance maneuver – an attempt to self-regulate rather than be regulated.
 
Jesse Lehrke, Research Coordinator for Digital Democracy at Democracy Reporting International, assessed how we can move from a superficial level of ethics to a human rights by design approach. The key to this is the alignment problem – how to ensure that AI values and goals align with human values and goals. AI, however, was not built this way from the beginning. AI systems were built on certain data inputs, and when you as a human do not fit the template created by the system, you end up with wrong predictions and recommendations and wrong classifications. This is not a serious problem when it concerns a product of a company like Facebook, but if we extend this logic to governments and other important decision makers using AI, it becomes an issue of human rights. And by design, it replicates and reinforces the existing system, including the inequalities it entails.
 
Acknowledging these drawbacks, companies have started to add human rights to their systems. The first layer of this activity comes in statements and principles, followed by ad-hoc add-ons. However, there are two better ways to add ethics to AI: by default and by design. The default approach means the system would always decide in favor of the human by default and then rely on other tools for oversight; its scope, however, is limited. The other approach, human rights by design, is more sustainable in the long term but takes more time and effort. Human rights by default can be compared to the do-no-harm philosophy, while human rights by design is more about realizing and expanding human potential and increasing trust. In the end, companies end up with better data, even if new privacy regulations do not immediately provide them with access to data.
 
There have been many reports recently, some by Amnesty International, of human rights activists being targeted with digital tools; we hear reports of surveillance from China, from Belarus, even from some Western countries; and there are alarming developments in AI-driven weapons. So it is clear that AI poses threats to human rights. But there are also many benefits to leveraging AI for human rights: better and more equal access to services, less corruption, the blindness of the algorithm to differences between groups of people. In practice, however, many human biases translate into these technologies and are magnified by them. Human rights organizations could therefore have a huge role to play in advocating for such safeguards and in the implementation and oversight of these policies.
 
Kateryna Mitieva, Media and Communication Officer at Amnesty International Ukraine, talked about AI-related risks to human rights. National governments cannot keep up with the pace of technological progress, and regulations and legislation are therefore often out of date. The right abused most is the right to privacy, by tech giants and governments alike. Amnesty International researches tech giants such as Google and Facebook, whose business models rely on the use of surveillance technology for commercial benefit. People get classified by these algorithms – by race, ethnicity, gender, sexual orientation, etc. – features that are usually protected by law offline, but in the tech world are often the subject of commercial transactions, where massive amounts of data can be sold to third parties or even used to influence political processes (the infamous case of Cambridge Analytica, whose data was also used in a political campaign in Ukraine) and manipulate social discourse. Governments should look into regulating the collection of data by third parties; however, tech giants should also try to refocus their business models. And the biggest responsibility lies with users – a lot of effort has to go into informing and educating them so that they can better exercise their privacy rights.
 
When it comes to facial recognition, which is widespread in the US and the UK and is used against the Uighurs in China, it is worth noting that data is collected without the person's consent, which is a direct violation of human rights. The additional risk is that this data can be used by malicious actors, or by governments to spy on citizens or as a tool of oppression or repression. Facial recognition is also currently discriminatory – white males are recognized more accurately than other ethnicities or genders. Another present danger of facial recognition is its use to identify protesters, which is what is said to have happened in Belarus and elsewhere. Ukrainian legislation is not yet adapted to address these challenges, and any attempt to regulate should be made with the close involvement of, and in consultation with, human rights organizations.

Follow the link to watch the discussion: bit.ly/38pp2Cw

This work is supported by