Ethics and Governance in the Age of Artificial Intelligence

Technology has always been used for good or for harm, and it has fundamentally changed human relations by either extending or constraining both power and opportunity. Today, the discourse on widespread digitalization and the rise of artificial intelligence (AI) amplifies both of these ethical dimensions. On the upside, AI is celebrated as a new source of innovation, economic growth, and competitiveness, as well as for the productivity and efficiency improvements it offers across all industries and sectors. Intelligent automation also promises to resolve some of the most urgent global challenges and help achieve the Sustainable Development Goals. The potential economic and social benefits of AI innovations are tremendous. For most people, the rise of AI is already experienced as an increase in everyday convenience. On the downside, the rapid advancement and pervasive nature of AI capabilities have been cautioned against and condemned as a source of unprecedented security and privacy risks, as well as of severe social, economic, political, and international imbalances in the long term. In the past, the benefits of such dual-use or disruptive technologies eventually outweighed their harm, but often only after a period of misuse and accidents that caused people and governments to demand improvements and regulation. Because AI will not be an outcome of human agency alone but will increasingly develop into an independent agent of autonomous decision-making, we cannot readily rely on those past experiences. Over the next decades, however, the main risk is not that AI itself will cause immediate harm and long-term imbalances; rather, our existing human relations, practices, and intentions, and thus how AI will be applied, are the primary cause and source of disruption.
AI will not be external to history but will perpetuate and probably accelerate the current trajectory of humankind. And as history has entered a downward spiral, becoming more fragmented and unsustainable, we are rather likely to experience more of AI’s downside.

AI is developing fast and slow

With AI on the rise, coupled with other disruptive technologies such as 5G, the Internet of Things (IoT), robotics, quantum computing, and biosynthetics, the imagined distance between science fiction and real science has shrunk considerably. AI already beats humans at difficult tasks such as playing chess, Go, and complex strategy games, or conducting medical and legal diagnoses. Besides the intelligent automation of control systems, computer vision and language processing have received the most attention in recent years and vastly outperform certain forms of human perception and expression. Yet AI is still far from mimicking human-level intelligence, let alone reaching superhuman intelligence, and it still needs to overcome engineering bottlenecks related to creative and social intelligence. Although certain algorithms can already generate a computational sense of self, today’s AI is not able to abstract from one situation and apply general concepts to new contexts and tasks. Nor can algorithms automatically change the methodology of learning itself. While the application of AI systems can be extremely efficient and scalable, training AI systems still takes a long time, is extremely costly, and is far less efficient than human learning. From the perspective of collective intelligence, AI cannot build or compete against large and complex social organizations, the ability that arguably distinguishes humankind from nature. In short, since the rise and collection of mass data, AI has advanced rapidly, but it will not advance rapidly enough to match the utopian or dystopian fantasies of a post-humanist and post-evolutionist era anytime soon. In the meantime, however, we will face tremendous challenges. As Alan Turing, a founding father of AI, once put it: “We can only see a short distance ahead, but we can see plenty there that needs to be done.”

The changing landscape of cyber-physical threats

The level of risk attributed to AI is not a matter of optimism or pessimism but one of understanding how AI serves existing human behavior and how it can alter power relations. Even before AI reaches or exceeds human-level intelligence, its disruptions will be twofold: immediate and felt directly, and structural, unfolding over a longer period of time. Regarding the former, AI’s immediate risks relate to the existing landscape of cybersecurity threats, which will change tremendously due to the malicious use of AI. There has been a steep increase in traditional cybersecurity breaches and cybercrime incidents that mainly threaten individuals, businesses, and national infrastructures. The World Economic Forum ranks cybersecurity risks among the top five sources of severe global risk. In the first half of 2018, cyber-attacks compromised 4.5 billion records, almost twice the number compromised during the entire year of 2017. These incidents are caused by individual criminals, organized crime groups, terrorists, and states or state-sponsored actors, and they primarily involve the disruption of digital and physical systems, theft, and cyber espionage. Cyberwarfare combines all of these and also involves information and psychological operations to manipulate public opinion. Due to its scalability and efficiency, as well as the increasing psychological distance between attacker and target, the malicious use of AI will expand existing cybersecurity threats, create entirely new forms of cyber-physical threats, and enable attacks and crimes that are far more targeted, optimizing the tradeoff between effort and harm or gain. Because of this changing landscape of immediate threats and risks, cybersecurity (and more recently AI) has become a matter of national security and military priority.
While the next generation of mobile networks, or 5G, makes it possible to connect everything with everything and with everyone, at home, in the office, in factories, and in smart cities, AI provides automation for the purpose of efficiency and convenience. The combination of the two technologies will tremendously expand the attack surface for cyber-physical threats and accidents. It will further complicate both the deterrence and attribution of cyber-attacks and other hacking exploits due to the increasing complexity and interconnectedness of computer networks. Such threats cannot be prevented, only mitigated. For many governments, it is not a question of if but when severe cybersecurity incidents will occur. The risk is independent of specific technology providers.

Economic imbalances

In addition to these immediate risks, there are longer-term structural risks associated with AI, which are more difficult to anticipate but whose impact will be even more widespread and pervasive. This is simply because technology is not external to us, developing independently of history. Instead, it is deeply interwoven with history, and the current trajectory of humankind shows little sign of escaping from today’s downward spiral of economic, societal, political, and international relations. Economically, mass labor displacement, underemployment, and de-skilling are likely outcomes of intelligent automation and augmentation. AI directly competes with human intelligence, which was spared from automation during previous industrial revolutions. AI will not just target knowledge work but continue automating the physical labor that escaped previous waves of rationalization. As a consequence, governments must prepare for profound structural changes. Widespread automation and aging societies will shrink the labor force and diminish labor as a major source of tax revenue. In addition, market forces have already caused the concentration of data, AI technologies, and human talent. Research and development is increasingly shifting from publicly funded laboratories to the privately owned laboratories of large AI platform companies, which are less willing to share their intellectual property for the social good. While the Internet initially lowered the hurdles to setting up businesses, AI raises the bar again, which can lead to digital kleptocracies and AI mercantilism if the zero-marginal-cost economy remains unregulated. While rich countries will be able to afford a universal basic income for those who cannot re-skill, low- and middle-income countries will not, and they risk becoming trapped in their current stages of development. AI coupled with data, the “new oil” on which machine learning thrives, will disrupt the global division of labor.
Countries that cannot catch up with advanced automation to improve their competitiveness will be left further behind. Labor, and especially cheap labor, will not provide a sufficient comparative advantage in the future, rendering previous development models obsolete. Income inequality has already reached alarming levels, not just between rich and poor countries but also within the rich countries. The United States has the most unequal wealth distribution of all OECD countries. While a small group of transhumanists will be able to afford and enjoy the privileges of digital upgrading, the number of those left behind will likely grow, adding to the breeding ground for social unrest, populism, and nationalism. Before societies are able to change the meaning of labor and find new sources of human dignity, automation will reinforce individualism, alienation, and loneliness, and it will threaten physical and psychological well-being as well as social cohesion.

Political imbalances

State and political actors will make more use of AI technologies. While businesses employ AI to segment people ever more precisely as consumers and to compete for their attention, political and state actors do so to better understand citizens as persuadable voters, followers, or potential security threats. This can make countries more secure and the political process more efficient if AI is used responsibly and balances economic growth, social good, and national security. However, AI increases the structural risk of shifting the power balance between the state, the economy, and society by limiting the space for autonomy. Through AI-enabled mass surveillance, psychological operations, and the weaponization of information, states and political actors might seek to acquire a disproportionate amount of power or amplify populism. The two poles of this political risk scenario are totalitarianism and the tyranny of the majority. In both cases, the struggle over power dominates the struggle over progress and threatens the pillars of modern states and governments: bureaucracy, the rule of law, and accountability. While authoritarian states could slide into totalitarian regimes or “digital dictatorships” by exerting pervasive state control and repressing differences, democracies could witness the erosion of their institutions, the polarization of their societies, the disintegration of “public morality,” and the “manufacturing of consent.” We can already witness the world sliding toward either pole of political imbalance, with “liberal capitalism” developing into a “political capitalism” instead of a “multistakeholder capitalism.” AI is not the cause, but it is an increasingly weaponized tool used both within and beyond national boundaries to disrupt the political processes of adversarial countries.
The Edward Snowden and Cambridge Analytica affairs are the best-known and most disturbing cases of widespread cyber espionage, privacy violation, manipulation of public opinion, and interference in the democratic process within the West. Conversely, the West frequently accuses Russia, China, North Korea, Iran, and Syria of state or state-sponsored cyber intrusions and attacks and of pervasive mass surveillance.

Disrupting international relations

A fierce global competition over AI supremacy is already raging, threatening to disrupt existing international relations. All of the leading economies have laid out or updated national AI strategies with the goal of promoting the development of nascent AI capabilities and ecosystems and of competing globally. The Russian president, Vladimir Putin, blatantly stated the strategic importance of AI in 2017 when he said, “Whoever becomes the leader in this sphere will become the ruler of the world.” Russia is not leading the AI race; currently, the United States leads, followed closely by China. The United States wants to maintain its “leadership in AI,” while China aspires to become the “primary center for AI innovation” by 2030. Europe also seeks to become the “world-leading region for cutting-edge AI,” but it lags behind the United States and China in the number of AI talents and businesses, patents filed, research papers published, and investment in AI research and development. All governments emphasize AI as a source of growth and competitiveness. At the same time, AI is classified as a “dual-use” technology and is therefore subject to national security measures, export controls, and FDI screening mechanisms. Governments have hastily passed new regulations to mitigate cybersecurity risks, ensure privacy protection, and empower law enforcement. The new regulations also protect domestic markets under the banner of digital and data sovereignty. The head-to-head race has extended to national defense agencies, which are preparing for a “hyperwar” and making “battlefield-ready AI” a priority. Most troubling of all is the development of lethal autonomous weapons (LAW). While the European Union is calling for a ban on “automated killing robots,” the United States, China, Russia, and other countries are all advancing or acquiring LAW capabilities.
Compared to conventional weapons, cyber weapons are low-cost and more easily accessible, which will accelerate the diffusion of cyberwarfare and LAW capabilities. This will also empower otherwise weaker actors, tremendously increasing the risk of asymmetric conflicts. Due to the proliferation of cyber technologies and the ongoing rush by many states to obtain offensive cyber capabilities for potential use in conflict, the actual risk of international cyberconflict and cyberwarfare, that is, the use of digital technology by one country to disrupt the vital digital systems of another, has increased significantly. Such proliferation also carries the risk of “friendly fire” and “second-order consequences” because many cyber networks rely on private-sector infrastructure. For defense agencies, AI is not about good or harm but about competition and conflict.

There are numerous international organizations dealing with cybersecurity and cybercrime, but the realm of AI-enabled cyberconflict lacks international treaties and attempts to build familiarity, mutual trust, and confidence, especially between rival powers. The discourse on cybersecurity and cybercrime prevention is globally divided, and conventional arms-control treaties are being ripped apart or called into question. In addition, the United States is trying everything it can to decouple its technology, research, and supply chains from those of China, and it is pushing Europe and its other allies and partners to do the same. As part of its “great power competition” strategy, the United States is doing this to confine China’s rise on national security grounds, though it has failed to provide evidence of misconduct. The clash between the United States and China has been most obvious on the issue of 5G, and it has increasingly extended to other disruptive technologies such as AI, IoT, robotics, quantum computing, and biosynthetics.

The prospects for AI ethics and governance

While we cannot anticipate the outcome of the digital and AI revolution, because history gives us little to no reference for what could be the final technological revolution, this sobering list of immediate threats and longer-term structural imbalances and tensions has sparked an international debate about the ethics and governance of AI. In this debate, the term ethics is often used to summarize legitimate concerns about AI’s potential disruptions. The debate about AI ethics and governance has resulted most notably in the definition of numerous AI principle frameworks worldwide, proposed primarily by large Internet platforms and multinational corporations, as well as by international and non-governmental organizations and governments.

Despite subtle but crucial differences in selecting and emphasizing certain ethical principles, the various frameworks commonly state that future AI should be secure, safe, explainable, fair, and reliable, and that its benefits should be distributed across society. There seems to be an international consensus that AI should be developed and used for the greater good of humanity: it should be human-centric, responsible, and trustworthy, and it should always retain human agency and oversight. Yet such positive framing primarily confirms that today’s ethics and governance are ill-equipped to prevent or sufficiently mitigate the disruptive forces of AI, and that those potential forces are clearly of global and historical proportions. However, almost all frameworks analyze the risk of AI in a narrow sense, that is, without developing a link between the dual-use character of the technology and the actual state of social, political, economic, and international affairs. Those frameworks ignore how AI will most likely reinforce rather than alter the current trajectory of history, as indicated above. AI will increasingly make autonomous decisions, but it will not escape human practices or become completely autonomous from them anytime soon, and we cannot expect it to become a transcendent, super-beneficial, and human-centric compass directing humanity toward universal equality and dignity. While many of these AI principles were quickly defined, defining the new governance approaches that are supposed to implement them will be more difficult given AI’s complex and uncertain risk scenario. There is clearly some level of collective distrust in today’s norms and ethics.

Governance is the possibility for collaboration directed by shared principles. Collaboration is necessary, as each stakeholder faces different responsibilities and no stakeholder alone can tackle AI risks in their entirety. However, fundamental political and cultural differences, especially between the major economic blocs, undermine international collaboration, even as collaboration and cooperation become ever more urgent for effectively addressing the risks of AI. Those fundamental differences make the looming ethics and governance gap seem insurmountable. The United States is a market-fundamentalist economy and individualist society driven by the motives of profit and personal self-fulfillment. Its government emphasizes AI as an opportunity for R&D, growth, and job creation. While cybersecurity risks are treated as a liability, AI mainly serves the capitalist ethics of self-directed wealth creation. In contrast, Europe emphasizes solidarity and a human rights approach to AI. According to the European Union, AI should be lawful, robust, and ethical. The mitigation of AI risks is a matter of regulation. In China, harmony and compassion are emphasized as the country’s overarching virtues. For China’s government, data and AI are a means of maintaining or improving stability and discipline through surveillance and control. While Chinese people largely perceive the digital revolution as an opportunity, Western people tend to emphasize the dangers of widespread digitalization. And the Global South seeks its own share of the digital opportunity and tends to advocate digital sovereignty.

Undoubtedly, such polarization risks omitting the many differences within each region and the similarities across all the regions. People in Europe, the United States, and China have become increasingly aware of the privacy and security risks related to ubiquitous digitalization and AI. Governments have hastily sought to strike a balance between security and autonomy, with the goal of harnessing the benefits while minimizing the risks. Large Internet and AI platforms have been pushed to become more responsible. The big powers face the same challenges, yet they approach them from different ends, which undermines the prospect of international collaboration and governance in the area of AI as well. The differences are firmly rooted in the history and cultures of the regions, and they have recently been amplified. The United States and China, especially, have lost patience with trying to understand one another after a long period of rapprochement. Instead, they forcefully articulate and defend their otherness. China perceives the world as a “community of shared destiny,” while the United States emphasizes “competitive coexistence” as the new foundation of their relationship. The former evokes a utopian harmony; the latter is more realistic, but it sparks tension and lacks a concept for peaceful coexistence.

As a result, today’s global context brings us dangerously close to a never-ending prewar scenario between China and the United States. Both powers are drifting toward the Thucydides Trap. The globalism of the 1990s and 2000s threatens to turn into a post-global reality, one of competing national globalisms repeatedly failing to reach a consensus for the development of a new equilibrium and multilateral order. The disintegration of the World Trade Organization and the erosion of the old United States-led order bring us back to an era where “might is right.” It is an era of allegiances and fragmented bilateralisms. It is an era of “chained globalization” with high uncertainty and seemingly uncontrollable risks, in which many have lost trust in businesses, technology, and local and global institutions, certainly within the West. Europe has become more “real.” Yet the “European awakening” remains a precarious one, as the region will continue balancing between breaking up, heightened xenophobia, sluggish growth, and protecting the “European way of life,” but without the capacity for global stewardship. Like the United States, Europe has yet to find an escape from the growing rift between its “brahmin left” and “merchant right.” Like the United States, Europe fails to represent the struggles and anxieties within its societies. Europe remains sandwiched between a “protectionist” United States, an “aggressive” China, and the rivalry between the two. While Europe largely disagrees with President Trump’s personality and approach, it shares the complaints and concerns about China’s increasing dominance and China’s failure to become more Western. The European Union has also started labelling China a “strategic rival,” but without joining the United States’ unilateral trade war against China. Although the United States seems to fear for its future the most, China must try harder to find a way to reduce that fear.
For now, further harm is prevented only because each of the three powers is an important trading partner of the other two.

Against such a hyperbolized backdrop, it becomes obvious that AI will mainly be used to gain a strategic advantage over competitors and rivals. Like capitalism, AI is disruptive and lacks an ethics of social good. Therefore, breaking through the current downward spiral and ensuring that technology is primarily used for social and ecological good remains a matter of human agency, collaboration, and cooperation. AI bears its own risks, but it is not the primary cause of changing history; rather, it is a technology with a high risk of intensifying the symptoms of history, such as the unequal attention economy, the surveillance state, and great power competition. To break through the downward spiral and to prevent an ideologization of the various AI principles, AI must not merely serve humanity; humanity needs to change as well. If human history continually faces backlashes, then AI should not merely mimic human values and the human brain but learn from and improve both. If we assume that humanity itself is a threat to life on Earth and that the goal is to reverse that threat, AI may eventually have a chance to improve human relations and our relationship with Earth. For now, the clash between liberal and political capitalism will intensify. Technology will serve both sides, and especially the ranks of a privileged few and the reproduction of that elite into the future.

Thorsten Jelinek is a director and senior fellow at the Taihe Institute.

