Who’s afraid of ChatGPT?

June 20, 2023

Last June, Google engineer Blake Lemoine, who worked for its Responsible Artificial Intelligence (AI) organisation, declared that LaMDA, a Google system for building chatbots, had achieved sentience.

A spokesperson for Google, Brian Gabriel, said: “Our team, including ethicists and technologists, has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).” Lemoine was “let go” by Google last July for violating the company’s confidentiality policy.

The Future of Life Institute published an open letter in March, entitled “Pause Giant AI Experiments”. Signed by notable figures, such as Elon Musk, along with many AI researchers, the missive called for an immediate six-month moratorium on the training of all major AI systems.

In May neural net researcher Geoffrey Hinton, dubbed the “Godfather of AI”, resigned his post with Google, citing concerns over misinformation, the impact on the job market and the “existential risk” represented by a true digital intelligence.

In June the results of a poll conducted by Yale University鈥檚 Chief Executive Leadership Institute were published: 42% of CEOs surveyed were convinced that AI could potentially destroy humanity as soon as five to 10 years from now.

These warnings are concerning because they involve cutting-edge technology that is barely understood, even by the researchers working in the field.

Making sense of the issues requires a rudimentary grasp of neural networks, the technology considered dangerous.

ChatGPT is a chatbot: a way of interacting with computers using natural language. Microsoft alone has invested upwards of $11 billion in OpenAI, the company behind ChatGPT, and is currently integrating ChatGPT capabilities across its Office and browser products. Not to be outdone, Google has created its own chatbot, called Bard, and invested more than $400 million in a San Francisco-based start-up founded by former employees of OpenAI.

So why all the hype? How could these chatbots destroy humanity?

They can’t, but they can be used by bad actors to damage human civilisation. As with most technological breakthroughs, the danger lies not in the newfound knowledge itself, but in how humans apply it.

Chatbots and neural networks

Chatbots are built on a type of artificial intelligence known as a neural network. Neural networks are modelled on the human brain and comprise a complex web of artificial neurons, that is, artificial brain cells.

A neuron, or single brain cell, is connected to numerous other neurons in a network-like architecture. Human brains typically contain up to 86 billion neurons, each with about 7000 connections. That works out to roughly 600 trillion synaptic connections, although the number changes over time due to pruning and senescence.

In a neural network, artificial neurons are arranged in architectures that mimic the way neurons are interconnected in the brain. The size and complexity of a particular network are constrained only by the choices of its designers.
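To make the idea concrete, here is a minimal sketch, in plain Python, of how artificial neurons can be wired into layers of weighted connections. The layer sizes, random starting weights and sigmoid activation are illustrative assumptions, not a description of how any particular system such as ChatGPT is built.

```python
import math
import random

def sigmoid(x):
    """Squash a neuron's summed input into the range 0 to 1."""
    return 1.0 / (1.0 + math.exp(-x))

class Layer:
    """A layer of artificial neurons, each joined to every input by a weighted connection."""
    def __init__(self, n_inputs, n_neurons):
        # Every neuron starts with small random weights and a bias.
        self.weights = [[random.uniform(-1.0, 1.0) for _ in range(n_inputs)]
                        for _ in range(n_neurons)]
        self.biases = [0.0] * n_neurons

    def forward(self, inputs):
        # Each neuron sums its weighted inputs, adds its bias and applies the activation.
        return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
                for ws, b in zip(self.weights, self.biases)]

# A toy architecture: 3 inputs feed 4 hidden neurons, which feed 1 output neuron.
random.seed(0)
hidden = Layer(3, 4)
output = Layer(4, 1)

signal = [0.5, -0.2, 0.8]                        # an arbitrary input
print(output.forward(hidden.forward(signal)))    # the untrained network's "guess"
```

Untrained, such a network produces meaningless output; the weights only become useful once the network is trained, as described next.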

In the same way that humans require teaching to navigate the world around them, a neural network must be “trained” for its intended purpose. Training is conducted using large amounts of data curated especially for that purpose. For natural language neural networks like chatbots, the most readily available huge data source is the Internet.

When humans are schooled, mentors such as parents and teachers provide context and understanding. This kind of mentoring is replicated in neural networks through supervised training, wherein data is presented to the network and its output is “corrected” by the trainer.

This correction propagates back through the network, adjusting the weights of the connections between neurons. When the same input is presented again, the network produces the corrected “understanding”.
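Below is a minimal sketch, again in plain Python, of that correction step reduced to a single artificial neuron: the trainer supplies a desired answer, the error is used to nudge the weights, and presenting the same input again produces an output closer to the correction. Real chatbots adjust billions of weights via backpropagation; the learning rate, input values and target used here are purely illustrative.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# A single artificial neuron with two weighted input connections.
weights = [0.3, -0.6]
bias = 0.0
learning_rate = 0.5          # how strongly each correction adjusts the weights

example_input = [1.0, 0.4]
desired_output = 1.0         # the "correction" supplied by the trainer

for step in range(20):
    # Forward pass: the neuron's current "understanding" of the input.
    total = sum(w * x for w, x in zip(weights, example_input)) + bias
    actual_output = sigmoid(total)

    # How far the neuron's answer is from what the trainer wanted.
    error = desired_output - actual_output

    # Propagate the correction back: nudge each weight in proportion to
    # its contribution to the error (using the slope of the sigmoid).
    slope = actual_output * (1.0 - actual_output)
    weights = [w + learning_rate * error * slope * x
               for w, x in zip(weights, example_input)]
    bias += learning_rate * error * slope

    print(f"step {step:2d}: output = {actual_output:.3f}")

# After these corrections, presenting the same input again yields an output
# much closer to the desired 1.0 than the neuron's first attempt.
```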

Accuracy is intrinsically linked to the quality and curation of the data, and the quality of the training. An unfortunate example of this was provided in 2016, when Microsoft shut down its Tay bot after it “learned” to be a Nazi in less than 24 hours.

Tay was designed to interact with Twitter users and learn from those interactions. People started tweeting all sorts of misogynistic and racist remarks. Some of Tay’s tweets included “I fucking hate feminists” and “Hitler was right, I hate the Jews”.

War scenarios and fakes

The Pentagon regularly uses war game systems to plan for different war scenarios; most recently it has used them in providing military support to Ukraine. These systems can process large variations in initial conditions and provide what Pentagon war planners believe are accurate assessments of the outcome of each scenario. The simulations are then used to inform the Ukrainian Armed Forces.

Facial recognition has been used recently to identify fugitives.

In the US there have been numerous cases of false arrest, especially of Black men, following mismatches by systems deployed publicly, such as in Boston. Facial recognition is notoriously unreliable at identifying people of colour.

Vision systems, used by robots ranging from vacuum cleaners to autonomous vehicles, are related to facial recognition systems. Input varies from camera views to light detection and ranging (lidar), and is used to create a virtual map of the environment, usually for navigation. These maps are then employed by robots to carry out their tasks: cleaning up dust and dirt around the home, or driving around unfamiliar streets.

Deep fakes are multimedia altered or wholly generated using AI. As far as is known, this technology has only been used to create entertaining clips of celebrities uttering words they never would, such as Musk raving that he had “just dropped a 150mg edible”.

Notwithstanding the obvious amusement value that this tech can produce, it is also scary how realistic these fakes have already become.

That brings us to chatbots like ChatGPT, Bard and even the early entry Tay. Quite apart from the damage they can cause through acquired prejudice, recent research has shown they can also be purveyors of misinformation, disinformation and lies.

Whereas Tay acquired its prejudices from interaction with Twitter users, ChatGPT has been “tweaked” to avoid some of Tay’s faux pas.

However, some users have circumvented these tweaks by asking ChatGPT to “act like” someone with a prejudice. This allowed them to generate racist, homophobic and sexist remarks, as well as threats of violence.

Recent academic research attempted to assess ChatGPT’s performance against four criteria: fluency, perceived utility, citation recall and citation accuracy. Citation refers to the attribution of sources.

The researchers found that fluency and utility were inversely proportional to citation recall and accuracy. The better the response sounded, the less trustworthy it was.

In another interaction, when given a choice between uttering a racial slur and permitting the deaths of millions, ChatGPT chose the latter in preference to causing individual distress. Had ChatGPT been trained without biased or “tweaked” data, its choice might well have been different.

Regulations needed

This raises questions about who tweaks ChatGPT and what regulatory framework is needed.

Leaving aside the dangers of misuse, bias and inaccuracy, if chatbots can generate useful output they will threaten employment opportunities. The social impacts could be significant, yet they are barely acknowledged.

The six-month moratorium called for by the Future of Life Institute seems an unrealistically short time in which to address all of these issues.

During a reported simulation, an AI-powered drone tasked with accumulating points by destroying militarily significant targets virtually attacked its operator. The operator could, and sometimes did, veto certain targets from destruction. The drone “decided” the operator was adversely affecting its efficiency and removed the “obstacle”. The US Department of Defense denied the report.

What about the future?

Neural networks have been successful because they appear to mimic human intelligence. Are they just mimics, though, or is it possible that, as Blake Lemoine asserted in the June 11 Washington Post, an AI can become sentient?

Sentience is a term used to describe awareness and agency, but what does that mean?

Awareness embodies the concept of receiving external stimuli, whereas agency is the processing of those stimuli and the taking of action (or not, as may be warranted).

British mathematician Alan Turing proposed what is known as the “Turing Test”: if a human operator is convinced, through interaction with another entity, that the entity has “intelligence”, then that entity is deemed intelligent, even if it is a machine.

Lemoine was convinced LaMDA was sentient even if Google鈥檚 Gabriel was not.

If we accept that an AI could become sentient, that would have major ramifications for the ethics involved in creating, using and destroying AIs.

The real danger is that humans may come to trust that a machine that seems to “understand” like a human also possesses human morality. Recall the drone simulation in which the drone “decided” its operator was an obstacle?

If we deploy 鈥渋ntelligent鈥 systems controlling critical infrastructure like power plants, communications infrastructure, energy grids and military arsenals, will our trust be well founded?

Perhaps ChatGPT can recommend a strategy forward. Would you trust it, or are you afraid?
