Environmental and Foreign Affairs in the Context of AI

Human Security Beyond Trump

International conflict is on the rise: outdated hostilities between the USA and Iran threaten to cause catastrophic chaos. The world needs novel security concepts that properly take artificial intelligence into account, above all data analytics and platform technologies. What could these look like in terms of machine understanding – its design, use, and ethics? How can AI be put in the service of humankind?

Two green parrots with red beaks in the air
Phenomena worth reconsidering security I (symbolic picture) CC-BY-ND 2.0 Bianca

In recent years, the world has seen existential breakthroughs in technologies of artificial intelligence (AI). The comic ‘We need to talk, AI’ illustrates its status quo impressively: beyond the scientific research on logically constructed AI, world system models, or expert systems that has been ongoing for decades, the big break came with large amounts of data and new data analytics, computing power, and the networking of things and people, especially through platform technologies. Together they realized a human dream: the development of self-learning machines – based upon software that learns from data and hardware able to steer objects – that might take over tasks fulfilled by humans until now. This might include the analysis, assessment, and steering of complex issues. It changes the basics of coordinating society fundamentally. What has to be known to make the best use of these technologies for environmental and security politics, and what exactly could that look like?

1. What is AI – and if so, how many?

The use of AI for analysis (as in cancer assessment) needs to be distinguished from its use for steering objects, whether in simple, statistical systems (like an automated allocation of patients in hospitals) or complex, possibly dynamic systems (like a platform-enabled coordination of patients and doctors considering special needs in language or law). But it always depends on:

  • base (data, algorithms, machine learning approaches, models/considered correlations, etc.);
  • mode of operation, security/safety (performance, vulnerabilities, options for manipulation, etc.);
  • goals (of optimization), e.g.: shall an automated allocation of patients’ beds in a hospital benefit the patient, the doctor’s reputation, or the occupancy rate of the hospital?
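To make the last point concrete, here is a minimal sketch – all names, values, and scoring rules are invented for illustration, not taken from any real hospital system – of how the same allocation mechanism produces different outcomes depending on the goal it is asked to optimize:

```python
# Hypothetical sketch: one allocator, two optimization goals.
# Patient data and scoring rules are invented for illustration.

patients = [
    {"name": "A", "severity": 3, "billing_value": 1},
    {"name": "B", "severity": 1, "billing_value": 5},
]

def benefit_patient(p):
    return p["severity"]       # sickest patient first

def benefit_hospital(p):
    return p["billing_value"]  # most profitable patient first

def allocate_bed(patients, goal):
    """Assign the single free bed to the highest-scoring patient."""
    return max(patients, key=goal)["name"]

print(allocate_bed(patients, benefit_patient))   # A
print(allocate_bed(patients, benefit_hospital))  # B
```

The code itself is trivial; the point is that the choice of goal function – usually invisible to the patient – determines who gets the bed.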

In general, the complexity of such systems, sometimes referred to as ‘algorithmic decision-making systems’, tends to increase, sometimes even to converge with other systems: predictive policing was long based upon the consideration of places or people; now it might include the real-time analysis of all kinds of data, like social media data from Facebook. This creates problems in terms of transparency and control that have not yet been solved, because:

  1. the understanding of AI relies upon auditing processes once developed for static and logically constructed systems, which are technically not suited for checking complex and dynamic self-learning systems.
  2. the understanding of AI depends theoretically on novel forms of input-output analysis – the structured comparison of data fed into a system and its results. However, access to data is usually restricted by general terms of service, IT security, or copyright laws, at least for independent research.
  3. technical and legal problems in algorithmic auditing are compounded by the severe lack of experts in AI technologies in general.
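The input-output analysis mentioned in point 2 can be illustrated with a hedged sketch: probe a black-box scoring system with inputs that differ in exactly one attribute and compare the results. The scoring function below is a hypothetical stand-in for a system whose internals an auditor cannot inspect:

```python
# Hypothetical sketch of an input-output analysis. The scored system
# is a stand-in; a real audit would query an opaque external service.

def black_box_score(applicant):
    # Invented rule, hidden from the auditor: one postcode is penalized.
    base = applicant["income"] / 1000
    return base * (0.8 if applicant["postcode"] == "12043" else 1.0)

def probe(system, template, attribute, values):
    """Vary one attribute while holding all other inputs constant."""
    results = {}
    for value in values:
        case = dict(template, **{attribute: value})
        results[value] = system(case)
    return results

template = {"income": 30000, "postcode": "10115"}
scores = probe(black_box_score, template, "postcode", ["10115", "12043"])

# Diverging scores for otherwise identical inputs reveal that the
# varied attribute influences the decision - without any access to
# the system's internals.
print(scores)  # {'10115': 30.0, '12043': 24.0}
```

This structured comparison is exactly what restricted data access currently makes impossible for independent researchers.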

Hence, the status quo is chaotic and might lead to incredible disasters produced by a further characteristic of these technologies, namely the international economy behind their development: the collection of data, the development of algorithms and machine learning approaches, and the construction of specific AI systems are globally distributed, with hardly any regulations or standards at all. As a result, for example, the German Government was unable to answer how the face-recognition software it was testing as a pilot experiment at Berlin Suedkreuz Station had been developed and trained. In this case, the test was conducted without defining indicators of success, raising the question: what is the system for?

AI technologies are intended to benefit economic prosperity and participation, growth and climate protection, science and governance, the German government stated quite recently in its draft data strategy. Unclear, however, remain the subsequent questions: how, to what extent, and under what conditions – after roughly 35 years in which digitization has been debated and supported by the German state, its industries, and its citizens. The problem is not only that climate change forces us to reconsider this naive paradigm rather quickly. In the meantime, developments in networking, data analytics, and platform technologies might have produced a governance system based upon personalisation, scoring, and nudging technologies – suited to coordinate people indirectly by setting the context of action, and to replace traditional modes of state regulation without any democratic control. This thesis would support the assumption of a strong AI – a general AI with artificial consciousness – realized by the internet itself, in line with connectionist proposals in artificial consciousness research.

Algorithmic governance, or rather context coordination of society, would realize the idea of a human-machine interaction resulting in the creation of an informational context that steers human behavior according to the optimization goals the machine is designed for. However, theories here lack empirical proof and lack an overview of the AI systems in place and their interaction. Considering all the bugs and problems of major platforms today, one might debate whether such a strong AI would be clever or stupid. Decisive, by contrast, seems to be the question: how might these developments benefit the public good – how to make ‘her’ smart?

When code becomes law, we need to decide: what kind of law, exactly? When technologies offer incredible opportunities to gather knowledge and to coordinate complex issues, we need to say what we want to know and to steer, right now. Examples are given for security and environmental politics below, whereby climate change is anticipated as a major risk for people and machines.

2. Security politics in the context of AI

A sea eagle sits on a branch and looks into the distance
Phenomena worth reconsidering security II (symbolic picture) CC-BY-ND 2.0 Dr. Sebästian

Quite recently, the Congressional Research Service published a report on national security and artificial intelligence. Like comparable documents from European counterparts, for example the German Government’s AI strategy or the European Parliament’s resolution on cyber security, it is striking in its continuation of old-fashioned security paradigms, although everybody knows: AI is a game changer, for good or for bad, and climate change is a major security risk calling for novel solutions. But still, all documents prioritize defense: how AI technologies might advance espionage, mass surveillance, or intelligence; how to optimize military processes and operations; or how to innovate warfare with autonomous vehicles and lethal weapons. Technology developments in security politics remain focused on supporting tools against people – even while transparency and control of AI technologies remain well out of sight. New is the attention paid to combating fake news and to creating individual profiles as a resource for further military operations. Google’s withdrawal from autonomous warfare is highlighted, in opposition to Amazon.

This single-minded extension of traditional security paradigms with new technologies is a common feature of security actors worldwide and their national think tanks, like the Bundesakademie für Sicherheitspolitik, the Deutsche Atlantische Gesellschaft, or the Clausewitz Gesellschaft in Germany. This is very unfortunate in terms of AI developments, because it wastes the opportunities of these illuminating technologies: unprecedented options to understand and coordinate complex matters. And it risks human security by ignoring, rather than tackling, major threats caused by bad global governance.

2.1 Threat-Analyses, first: Is it people, or climate change?

What is a security risk, why, and with what kind of implications? These questions are the subject of political science, especially of security studies. The dominant paradigm since 9/11 is well summarized by Tim Scheerer, concluding:

“Why is terrorism regarded by the US as an existential threat while climate change and other related issues are not? It would answer: terrorism was a security issue because terrorist actors possessed a high measure of aggregate power and had clear offensive intentions, climate change and other related issues were not because they are not security issues.”

Though other scientists question this hegemonic security assessment, the security forces, politicians, and governments in charge put people or groups of people at the center of attention. Thereby, they have often legitimized diverse forms of repression, for example mass surveillance, espionage and reconnaissance, or (more or less) targeted killing – without being able to prevent terrorist attacks, which regularly succeed because of missing cooperation between security forces or governmental vulnerabilities, at least within the European Union.

Data analytics in security politics could potentially offer novel methods to assess security and its risks – maybe climate change, strong social stratification, or something we never considered before? Prioritizing climate change as a major threat, which would still be a human decision, would imply new security paradigms for internal and external affairs, for example the development and coordination of a sustainable economy that minimizes climate risks step by step. What would be needed, then, are human and machine resources: IT professionals like developers and data scientists, data and technology, lawyers able to build the proper legal framework, and so forth. But most of these resources are bound to the media and cyberwar industries: the state’s, described above, or the private ones, visible in targeted advertisements, political manipulation, or copyright regimes. The re-coordination of these resources, especially human know-how, would demand answering one essential ethical-normative question: are all humans intended to survive – or none? As soon as anybody is excluded, politics needs to spend scarce resources on the exclusion: defense, surveillance, propaganda, and so on. A smart security policy would be global, integrative, and aimed at the survival of all.

2.2 Anti-Terrorism-Measures, second: arbitrary censorship, or democratic platform control?

An all-inclusive security paradigm does include our enemies as well, but terrorists pose a special challenge: in scholarly literature, terrorism is defined as “ineluctably political in aims and motives; violent – or, equally important, threatens violence”, whereby “attention, acknowledgment, and recognition” are of special importance for those people and organizations willing to use violence as a means in politics. Since 2001, Islamist terrorism has dominated Western security politics. Justified by the presumed hostility of Western societies toward Islam on one side of the ideological battlefield, it has justified a great number of repressive, so-called security measures on the other side of the global war.

As terrorism poses an essential threat to democratic societies – namely the threat of arbitrary violence for ideological supremacy – security forces need to consider the online methods with which terrorists disseminate training information and propaganda, communicate with each other, or organize events. But all methods of online content regulation known today entail critical violations of human rights, and they can hardly be audited when AI is involved in filter mechanisms. Even worse, focusing the development of AI technologies on detecting terrorist content – or child pornography – is a waste of scarce human resources and expertise, and implies training AI technologies on the worst ideas, expressions, and actions of people instead of constructive ideas, cooperative behavior, or innovation.

Striking, in addition, is the ignorance of security forces vis-à-vis the impact of social media platforms. Although it is widely known that the manipulation of people for political or other reasons has produced elaborate methods of data-based targeting; although everybody knows that the opaque and politicizing platform design of rankings and feeds increases conflicts within societies; although opaque data business has provoked public criticism and formal inquiries in Western parliaments since 2015 – no one in politics or the security sciences draws the logical conclusion: fighting terrorism online demands public insight into the operating principles and bugs of major social media platforms, and democratic participation in the development of constructive information and communication designs.

2.3 Security strategy, third: a choice for open, defensive, minimal IT to support people, plus sovereign human-machine interaction

Next to people, machines – or rather networked technologies – can pose a very special security threat themselves. The issue is usually covered by the term ‘IT security and safety’, but it extends beyond its common understanding. With the networking of people and things comes, hand in hand, a general increase of risks to society: in their far-reaching analysis of the ‘vulnerability of the information society’, Roßnagel and colleagues shed light on the diversity of potential risks as early as 1990, highlighting

  1. potential technical bugs in hard- and software,
  2. varying sources of bugs in applications and systems, and – related – their potential for damage: in themselves, or amplified by the accumulation, multiplication, or pairing of risky effects,
  3. the variety of potential attacks and motives: internal, external, individual, collective, et cetera.

Nowadays, these risks are further increased by the investment financing of many start-ups that compile old system components without further risk assessment to create new digital products and tools, and by the use of proprietary software in private homes and organizations, which usually integrates many backdoors and vulnerabilities.

Both complex systems with insufficient security checks, on the one hand, and incalculable security threats among dispensable add-ons, on the other, make systems prone to attacks like the one on the Saudi Arabian oil industry in September 2019, a result of bugs in several decisive systems. Furthermore, this suggests that in case of security incidents, causal analysis needs to consider machinery bugs in addition to human behavior, whether intended or unintended. In the area of autonomous, or rather lethal, weapons, machinery bugs might have deadly consequences, for example when machines make mistakes in weather assessment. Therefore, an IT-aware security policy does not only focus on the reporting of vulnerabilities or an advanced coordination of the actors involved after security incidents, but on the creation of a defensive, functional IT security architecture without trojan horses or vulnerability management. Ideally, the infrastructure is open to review, based upon open data and software. But security incidents might happen anytime, because most systems are too complex to be reviewed completely. Therefore, machinery bugs should be checked in the causal analysis of security attacks – ideally before mutual allegations between states arise. Plus, the less AI technologies are used to automate human tasks and the more they support them instead, for example in government and administration, the fewer attacks might occur.

3. Environmental politics is security politics is smart digital politics: the Eco-Score

If humanity were to perceive climate change as an essential risk, we would be in urgent need of a sustainable economy worldwide – one that accounts for the material resources needed to run it (like water, energy, metals, and so on) as well as cash, or rather the prevention of future burdens of debt. A simple method to redesign the global economy toward providing essential goods and services sustainably and affordably to all would be to make the right use of platforms like Amazon. In that they efficiently coordinate multi-sided markets via data analytics, they accomplish a deed no state has ever managed so far, potentially at reasonable cost. At the moment, of course, they produce severe external effects for people and environments, because they seem to be optimized for business benefits only.

One way to realize the potential of platforms like Amazon would be to develop a novel tool, the eco-score, that would

  1. include ecological risks like the ecological footprint, on a data basis, in rankings and market trends,
  2. optimize the platform and connected markets for successive risk reduction, and
  3. transform the demand for and supply of sustainable essential goods and services step by step – without further state control, penalties, or dubious subsidies (given their consequences in data politics).
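As a hedged sketch of the idea – product names, popularity values, footprint figures, and weights are all invented; no real platform ranking is reproduced here – a ranking function could blend the usual relevance signal with an ecological penalty whose weight is raised step by step:

```python
# Hypothetical eco-score sketch: a ranking that penalizes the
# ecological footprint. All numbers are invented for illustration.

products = [
    {"name": "shirt_fast", "popularity": 0.9, "co2_kg": 15.0},
    {"name": "shirt_fair", "popularity": 0.6, "co2_kg": 4.0},
]

def eco_rank(items, eco_weight):
    """Order products by popularity minus a weighted footprint penalty."""
    def score(p):
        return p["popularity"] - eco_weight * (p["co2_kg"] / 20.0)
    return sorted(items, key=score, reverse=True)

# With eco_weight = 0 the popular high-footprint product ranks first;
# raising the weight shifts visibility - and thus demand - toward the
# sustainable option, step by step.
top_plain = eco_rank(products, 0.0)[0]["name"]  # 'shirt_fast'
top_eco   = eco_rank(products, 1.0)[0]["name"]  # 'shirt_fair'
```

Raising the eco-weight gradually is what successive risk reduction would mean in ranking terms: the market signal itself is re-weighted, without penalties or subsidies outside the platform.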

This solution would link competition to social benefit and solve a long-standing problem of communities focused on cooperation, for the first time in human history: centrally planned economies are doomed. Any society, especially communities in transformation, needs steady communication about demands and supplies in an ever-growing production chain to prevent market failures. So far, this has been provided only by open market economies, with their inability to link competition to beneficial goals. But with the potential combination of competitive (market) and cooperative (ranking for the human good) elements to structure societies, the necessary information and communication might be provided for cooperative communities as well – an innovation of immense importance: cooperation might be disadvantageous for an individual, but decisive for the survival of a group. Hence, technologies that support cooperation without interfering with individual cost-benefit analyses would share the burden of essential cooperation quietly among all and render cooperation attractive.

Conclusions for the smart society

The examples from security and environmental policies show clearly: it is not AI itself that is smart or stupid, but the specific design and use of platform technologies and data analytics. Whether these technologies will end or save human life depends on political decisions right now, especially in the realm of security. If politics decides to manage global risks so as to allow for the survival of all people, some problems remain:

  1. a sustainable tech development needs increased access to high-quality data for a diversity of actors, adequate data-management technologies, and proper data rights, including the permission, termination, and rejection of data processing, de-personalisation methods, and secure – potentially encrypted – data-processing technologies. Though the German government recently called for investigating this need, human fate will depend on its rapid realization, which demands cooperation with big business. Though the focus must be on non-personal data relevant to society, one has to bear in mind the dissolution of the former distinction between personal and non-personal data: if most data is collected via personal devices, most data can be linked to people; all data demands increased security.
  2. a sustainable tech development needs a new copyright regime that a) provides access to data in databases and pools to a variety of actors, b) provides broad and general access to scientific and legal texts, including collections in databases, to enable developments like information platforms worthy of the name, e.g. to inform stakeholders and decision-makers comprehensively and in real time, and c) allows society to flood social media with high-quality content – unfortunately often copyright-protected by now – to re-balance the public sphere in the face of hate speech and fake news, a kind of propaganda rarely copyright-protected by default; thereby rendering expensive and risky filter technologies irrelevant and returning scarce human and tech resources to the service of society.
  3. a sustainable tech development depends upon a basic income: the examples herein already show clearly that jobs and professional activities need to change radically if data analytics and platform technologies are to be designed and used in a reasonable and beneficial manner. People fear that they and their children will not be able to secure an adequate living. They need basic security in life to take the necessary transformative steps, to perhaps retrain, or to look for new jobs or professional roles. Employees of unemployment offices, for example, might support people in this instead of administering penalties (as in Germany).
  4. a sustainable tech development needs an equal, transparent, and efficient cooperation of all relevant stakeholders, at least: a) a composition representing politics and government, national and international economy, and civil society, b) an efficient coordination of all communities on a global scale, and c) a transformation of democratic politics, including the knowledge base for political decisions, by digital tools like data analysis and platform technologies, scoring and ranking.


 

