Since its release in November 2022, ChatGPT has been the talk of the town… digitally speaking. Developed by the American company OpenAI, this “generative pre-trained transformer” (GPT), wrapped in a conversational interface, is considered the most powerful of its kind. Powered by one of the most sophisticated forms of artificial intelligence, it surprises, amazes and worries with its performance in “cognitive” tasks that until now could only be performed by a human brain. While some see it as a revolutionary technology, it is clear that its accessibility to the general public raises major issues, particularly in terms of security and ethics. Here are a few notes to help you decipher it.

Powered by generative AI

ChatGPT is a so-called “generative” artificial intelligence (AI), i.e. a type of AI capable of analyzing and understanding text requests, called “prompts,” and responding to them by generating original content (text, images, videos, music) formatted to our needs (style, number of words, etc.). This AI does more than simply retrieve content from existing data, as current search engines do. To achieve such performance, it had to be trained on a very large amount of data; in the case of the models behind ChatGPT, this training drew on a vast corpus of public Web content with a knowledge cutoff of September 2021. The version of ChatGPT released in November 2022 ran on GPT-3.5; GPT-4, the fourth generation of generative pre-trained transformers developed by OpenAI, followed in March 2023.
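To make the notion of a “prompt” concrete, here is a minimal sketch of sending one to a GPT model through OpenAI’s Python library (v1 interface). The model name and prompt text are illustrative, and the code assumes the openai package is installed and an API key is set in the environment.

```python
# A minimal sketch of sending a prompt to a GPT model through OpenAI's
# Python library (v1 interface). Assumes `pip install openai` and an API
# key in the OPENAI_API_KEY environment variable; the model name and
# prompt text are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # illustrative; any available chat model works
    messages=[
        {
            "role": "user",
            "content": "In 100 words and in a friendly style, explain "
                       "what a prompt is.",
        }
    ],
)

print(response.choices[0].message.content)  # the generated answer
```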

Using its training data and a massive statistical model, generative AI can interpret a complex query and predict, word by word, the most probable response. This is why we speak here of AI based on a probabilistic technique. A generative AI can thus, for example, generate a very elaborate answer to a request to produce “a 1500-word essay on the changing role of the teacher throughout history, focusing on technologies that have entered the school environment, in the style of author X and including five quotations.” And since the conversation is retained as context, the user can add new details to the query to refine the response. It’s worth noting that it was the development of deep learning in the second half of the 2010s that dramatically boosted the performance of generative AI.
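To give a concrete, if drastically simplified, picture of this probabilistic word-by-word prediction, here is a toy sketch in Python. The vocabulary and probabilities are invented for the example; a real model such as GPT-4 derives them from its training corpus through billions of learned parameters.

```python
import random

# Hypothetical, hand-written probabilities standing in for the patterns a
# real model learns from an enormous training corpus.
NEXT_WORD = {
    "<start>": {"the": 1.0},
    "the": {"teacher": 0.5, "student": 0.3, "classroom": 0.2},
    "teacher": {"guides": 0.5, "explains": 0.5},
    "student": {"listens": 0.6, "asks": 0.4},
    "classroom": {"evolves": 1.0},
}

def generate(max_words=6):
    """Produce text one word at a time, sampling each next word
    according to its probability given the current word."""
    word, output = "<start>", []
    for _ in range(max_words):
        choices = NEXT_WORD.get(word)
        if not choices:  # no known continuation: stop generating
            break
        words, probs = zip(*choices.items())
        word = random.choices(words, weights=probs)[0]
        output.append(word)
    return " ".join(output)

print(generate())  # e.g. "the teacher explains"
```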

Large multi-modal language model

As a conversational interface, ChatGPT lets you interact with a technology that has been around for several years, known as “large language models” (LLMs). An LLM is an algorithm that analyzes large quantities of textual (or other) data and learns the linguistic patterns underlying that data. It can then use these patterns to understand and produce new content. In “large language model,” the word “large” refers to the scale of this latest generation of language models: artificial neural networks with an impressive number of parameters, trained on gigantic datasets.
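As a toy analogy for what “learning the patterns underlying the data” can mean, the sketch below simply counts which word follows which in a tiny corpus. A real LLM captures far subtler regularities with deep neural networks, but the statistical spirit is the same.

```python
# Toy analogy of "learning patterns from text": count which word tends to
# follow which in a corpus. Real LLMs learn far richer patterns with deep
# neural networks; this illustrates only the statistical spirit of the idea.
from collections import Counter, defaultdict

corpus = [
    "the teacher guides the student",
    "the student asks the teacher",
    "the teacher explains",
]

follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1

# The "learned pattern": after "the", which words are most likely?
print(follows["the"].most_common())  # [('teacher', 3), ('student', 2)]
```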

The term “multi-modal” has now been added because these models combine data of various types (natural language, mathematical and computer languages, images and audiovisual content), and a single model, as in the case of ChatGPT, can process and produce multiple forms of content. Such a model can thus not only analyze, synthesize, evaluate, and generate text but also solve mathematical and scientific problems, write code and assist with design in most technical and non-technical disciplines, and even produce music and video (OpenAI, 2023; Google Research, 2023).
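As an illustration, a multi-modal request can combine text and an image in a single prompt. The sketch below again uses OpenAI’s Python library; the model name and image URL are placeholders.

```python
# Sketch of a multi-modal prompt mixing text and an image in one request.
# Assumes the `openai` package (v1 interface) and an API key in
# OPENAI_API_KEY; the model name and image URL are placeholders.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative vision-capable model
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe this diagram for a student."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/diagram.png"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```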

Artificial general intelligence?

Has ChatGPT achieved a form of general intelligence, as suggested by the study from Microsoft – one of the largest investors in OpenAI – entitled Sparks of Artificial General Intelligence (Bubeck et al., 2023)? Its lead author, AI researcher Sébastien Bubeck, explains which GPT-4 capabilities he and his colleagues rely on to make such a claim:

“The central demonstration of our research is that GPT-4 achieves a form of general intelligence, even showing hints of artificial general intelligence. This is demonstrated by its core mental abilities (such as reasoning, creativity, and deduction), the range of subjects on which it has acquired expertise (such as literature, medicine, and coding) and the variety of tasks it is able to perform (e.g. playing games, using tools, explaining itself…).”

This publication is, however, controversial, as reported in the New York Times article Microsoft Says New A.I. Shows Signs of Human Reasoning by technology journalist Cade Metz, which we paraphrase below:

“Some AI experts saw Microsoft’s article as opportunistic claims about a technology that no one really understands. They object that general intelligence requires a familiarity with the physical world that GPT-4 theoretically does not possess. ‘The Sparks of Artificial General Intelligence article is an example of what some big companies do: put a PR pitch in the form of a scientific article,’ says Maarten Sap, researcher and professor at Carnegie Mellon University in Pittsburgh. Bubeck and Microsoft research head Peter Lee admit that they weren’t quite sure how to describe the system’s behaviour and titled their paper Sparks of Artificial General Intelligence to capture the imagination of other researchers. Alison Gopnik, professor of psychology and AI researcher at the University of California, Berkeley, says that GPT-4 – like other such systems – is undoubtedly powerful, but it’s not clear that the text it generates is the result of anything like human reasoning or common sense. ‘When we see a complicated system, we anthropomorphize: everyone does it, those who work in the field and those who don’t,’ said Gopnik. ‘But always comparing AI and humans, like in a game show competition, that’s not the right way to approach the question.’”

Biases and fabrications

While ChatGPT impresses with its multiple capabilities, it also has its limits. Although some of its “cognitive” abilities mimic those of human intelligence, it lacks sentience (the capacity to feel) and metacognition (the ability to examine its own thinking), which means it can’t take a critical look at what it produces and, therefore, can’t correct itself. As a result, its answers are likely to be coloured by various biases (racist or sexist prejudices, favouring or disfavouring certain political affiliations, etc.) or to include erroneous results, which have been termed “fabulations” or “hallucinations.” This is because ChatGPT’s modus operandi, which leads it to generate the most probable answer (as other generative AIs do), pushes it first and foremost to provide an answer, even if that answer is erroneous or pure invention.

And since ChatGPT doesn’t cite its sources, it can be difficult to confirm the veracity of some of its answers. In addition to these weaknesses, we must also bear in mind that AI poses risks to the security of personal data and privacy. OpenAI clearly states in the conversational agent’s FAQ that interactions with it may be used to train the company’s future AI models; users are therefore responsible for ensuring they do not divulge any sensitive information to ChatGPT.

Author:
Catherine Meilleur

Communication Strategist and Senior Editor @KnowledgeOne. Questioner of questions. Hyperflexible stubborn. Contemplative yogi.

Catherine Meilleur has over 15 years of experience in research and writing. Having worked as a journalist and educational designer, she is interested in everything related to learning: from educational psychology to neuroscience, and the latest innovations that can serve learners, such as virtual and augmented reality. She is also passionate about issues related to the future of education at a time when a real revolution is taking place, propelled by digital technology and artificial intelligence.