10 August 2022

Intelligence is dead: long live Artificial Intelligence

Yasmin Afina

The press has widely reported claims made by a Google engineer, recently placed on administrative leave, that the company’s AI chatbot LaMDA ‘has become sentient’, with an ability to express and share thoughts and feelings in the same way a human child would. This claim has been met with interest from the public, but also with a great deal of scepticism.

However, the promotion of over-hyped narratives on AI technologies is not only alarmist, it is also misleading. It carries the risk of shifting public attention away from major ethical and legal risks; framing the technology in a way that would lead to dangerous over-confidence in its reliability; and paving the way towards ethical- and legal-compliance-washing.

Inherent risks of AI

From helping to advance research in cancer screening to supporting efforts to tackle climate change, AI holds tremendous potential for enabling progress in all segments of society. This assumes, however, that the technology is reliable and that it works in the intended way.

It also rests on the assumption that both developers and end-users are aware of the technology’s limitations and its inherent legal, ethical, and societal implications and risks; and that they are actively seeking to mitigate risks of harm.

Yet there is an unhelpful tendency, in the media and in a segment of the AI community, to present hyperbolic accounts of these technologies’ nature and capabilities. This trend diverts public attention from the already-pressing ethical and legal risks stemming from these technologies and from the harms they cause.

In fact, there is an established and growing body of literature documenting the risks of harm associated with these widely deployed and pervasive technologies. These include sexism, racism, ableism, the centralization of power, a lack of clear democratic oversight and accountability, and violations of privacy.

For example, large language models like Google’s LaMDA hold the potential to assist with tasks such as supporting customer services through chatbots, enhancing translation services at much higher speed and with greater accuracy, or even helping to collect and synthesize key patterns and findings from large sets of research papers in specific, specialized subjects across the sciences and the humanities. However, they also carry inherent ethical and legal risks.

One of the most widely discussed risks relates to the absorption of hegemonic worldviews and (harmful) biases reflected in these models’ outputs: text collected from the internet in US and UK English was found to overrepresent white-supremacist, misogynistic and ageist views. If such data is used to train large language models, their outputs will inevitably reflect these harmful biases.

Another key risk emanating from large language models relates to their deliberate misuse to generate and disseminate disinformation and misinformation campaigns, leading to serious risks of harm, ranging from harm to individuals to harm to democratic processes (e.g., electoral fraud).

A third, under-researched but serious example of risk from large language models relates to the acceleration of language loss. Most of these models are developed and trained in English and other dominant languages: the more they are deployed, and the more pervasive and depended-upon they become, the more they will lead to the undermining, or even erasure, of indigenous languages, which, arguably, echoes colonization-era and assimilation policies.

The importance of narrative

The serious real-life implications of AI technologies highlight the importance of a sensitive and critical approach to the way they are presented. Sensationalist and hyperbolic framings are not only out of touch with reality; they also have the following dangerous implications:

First, there is a risk of over-confidence in these technologies which, subsequently, could lead to over-reliance and a lack of appropriate critical oversight. The capabilities and reliability of AI technologies may be (unintentionally) exaggerated, leading to an under-appreciation of their shortfalls.

This is particularly problematic in light of limited technical literacy in the public policy sphere and among end-users of widely deployed technologies. For example, over-confidence in autonomous vehicles and navigation systems can result in serious, or even lethal, risks to the safety of drivers and passengers.

Second, the anthropomorphizing of AI can mislead policymakers and the public into asking the wrong questions when the societal stakes are so high. This tendency is, to a certain extent, understandable, especially in the mainstream media and news (e.g., Terminator), as it makes the concept of AI more ‘relatable’ than the intangible, code-riddled programmes that more closely reflect the reality of AI.

This is even more the case as progress in the field proceeds apace and some argue that artificial intelligence is edging closer to artificial general intelligence, in other words programmes with levels of general intelligence similar to those of a human. This framing, however, can mislead the public and policymakers into asking the wrong questions and contribute to sterile debates (e.g., on the legal responsibility of robots) instead of meaningful discussions (e.g., on the criminal responsibility of developers in the case of fatal accidents).

Third, hyperbolic accounts of AI, coupled with its anthropomorphizing, set a high threshold for what constitutes technologies ‘of concern’: this can lead to the actual level of ethical and legal risk inherent in a given technology being overlooked and under-appreciated.

For example, in the military sphere, both the terms ‘killer robots’ and ‘lethal autonomous weapons systems’ (LAWS) have been the product of such stigmatization that technologies falling below their high threshold risk being overlooked. China has expressed the view that ‘LAWS should be understood as fully autonomous lethal weapon systems’, which therefore excludes all technologies that are not ‘fully autonomous’.

Yet it has been rightly argued that ‘the overwhelming majority of military investments in AI will not be about lethal autonomous weapons, and indeed none of them may be’. This does not mean, however, that systems falling below the threshold of ‘sentient’ killer robots or LAWS are any less problematic; examples include programmes developed for mass surveillance and intelligence collection, and those used to inform potentially lethal decision-making.

The need for reframing discussions

Without reframing the discussions surrounding AI, actors, including states and big tech companies, will deliberately steer clear of meaningful discussions of the legal and ethical implications of the technologies they are developing and deploying, including under the pretext that these technologies fall below the high threshold of technologies ‘of concern’ (e.g., ‘sentient’ systems).

Such distraction paves the way towards ethical- and legal-compliance-washing practices, whereby major stakeholders set up processes meant to address key ethical and legal risks but that remain opaque, lack an appropriate level of democratic oversight, and ultimately fail to address the inherent risks of these technologies.

Inaction would ultimately leave space, and time, for these technologies to continue to cause harm, particularly to vulnerable populations. Instead:

Key stakeholders, both from the public and private sectors, should encourage and facilitate meaningful and realistic discussions on the development, deployment and use of AI technologies and on their societal implications and inherent risks. This must be done with diversity, inclusion and intersectionality at the heart of the deliberations, and must involve stakeholders at all levels. Collaborative and multidisciplinary research incorporating perspectives from minorities and marginalized populations, and viewpoints from beyond ‘the West’, is key to enabling such reflection.

There must be greater willingness and openness to re-evaluate the effect of these technologies on the power relationship between governments and corporations on the one hand and the population on the other; to identify the privileges of those who benefit from these technologies and, conversely, the harms imposed on others; and to adopt appropriate and adequate policies and solutions.
The limitations of, and assumptions behind, the technologies developed and deployed must be clear to end-users and to all those potentially impacted by their use. This transparency is critical in light of greater human-machine teaming and our growing dependence on AI. More generally, there must be open and accessible mechanisms for verifying claims made about AI development.

Fostering technical literacy among policymakers will help inform meaningful decision-making and prevent the derailing of focus that results from over-hyped and sensationalist representations of AI. On the other side of the coin, there must also be greater effort to incorporate ethics and law into the development, testing and deployment of AI technologies, something that current machine learning research is in dire need of.
