You’re alone in front of your computer, in the middle of an online learning session that involves no human interaction, either with a teacher or with other learners. Are you likely to interact with your computer as if it were a social partner, much as you would with your peers in a traditional classroom? In light of the latest findings in social and affective neuroscience, it seems quite plausible. Not only is this discovery surprising and interesting in terms of what it tells us about human behaviour, but it also offers a new angle from which to explore and improve the digital learning experience.

This is the subject of one of the chapters in “Emotions, Learning, and the Brain: Exploring the Educational Implications of Affective Neuroscience” by Mary Helen Immordino-Yang, Associate Professor of Education, Psychology and Neuroscience. Let’s take a look at the main points Immordino-Yang and her colleague Vanessa Singh address in this fascinating chapter entitled “Perspectives from Social and Affective Neuroscience on the Design of Digital Learning Technologies.”

Affective and social neuroscience on learning

First, a word about neuroscience. It is a multidisciplinary field that studies the nervous system, from neurons to behaviour, and draws on a wide range of disciplines, from biology and chemistry to mathematics and computer science. The field is itself divided into several branches or sub-disciplines, including affective neuroscience and social neuroscience. While the former is concerned with the neural mechanisms of emotion, the latter aims to understand social processes and behaviour through their biological mechanisms.

Among other things, this field of knowledge has led to discoveries that have significantly reshaped our understanding of, and approach to, human cognition and learning. In particular, affective neuroscience has confirmed that emotions and rational thought are inextricably linked, just as body and mind are, contrary to Descartes’ dualistic postulate, which long held sway.

“Far from divorcing emotions from thinking, the new research collectively suggests that emotions, such as anger, fear, happiness, and sadness, are cognitive and psychological processes that involve the body and mind (Barrett, 2009; Damasio, 1994/2005; Damasio et al., 2000). […] Overall, affective neuroscience, together with psychology, is documenting the myriad ways in which the body and mind are interdependent during emotion, and therefore, the myriad ways in which emotions organize (and bias) reasoning, judgment of self and others, and retrieval of memories during learning (Immordino-Yang and Damasio, 2007),” say Mary Helen Immordino-Yang and Vanessa Singh.

For its part, social neuroscience has shed new light on the fundamental biological mechanisms underlying the way we learn as social beings. The psychologist Albert Bandura was the first to highlight that learning by observing our fellow human beings, not only to imitate their model but to surpass it, is a process inherent to human development and behaviour, a process known as modelling or vicarious experience. As fundamentally social beings, humans define and develop themselves through their relationships with others, and this aspect cannot be overlooked when it comes to learning. “Further, educators have long known that thinking and learning, as simultaneously cognitive and emotional processes, are carried out not in a vacuum but in social and cultural contexts (Fischer & Bidell, 2006). A major part of how people make decisions has to do with their past social experiences, reputation, and cultural history,” Immordino-Yang and Singh point out.

More precisely, this branch of neuroscience has focused on the role of social emotions in learning, social emotions being those related to the actions, feelings or thoughts of others. Their development over the course of one’s upbringing enables each individual to adapt to the group, which in turn supports their survival. These emotions include love, friendship, empathy, curiosity, solidarity, guilt, self-criticism, shyness, shame and so on. Social emotions are likely to play a key role in learning.

“Related to this, the physiology of the social emotions that govern our interpersonal relationships and moral sense appears to involve dynamic interactions between neural systems for bodily sensation and awareness – the same systems that are known to be involved in the feeling of basic emotions, such as anger, fear, and disgust – and systems that support other aspects of cognition and emotion regulation, including regions involved in episodic memory retrieval and perspective taking in relation to the self (Harrison, Gray, Gianaros, & Critchley, 2010; Zaki, Ochsner, Hanelin, Wager, & Mackey, 2007). […] The cross talk between these neural systems suggests that social emotions endure, guiding our decisions, ongoing engagement, and learning. Moreover, the data suggest that these emotions may get their motivational power through coordinating neural mechanisms responsible for complex computations and knowledge with mechanisms that facilitate retrieval of our own personal history, all the while colored by reactions played out on homeostatic regulatory systems that, in the most basic sense, keep our bodies alive and our minds attentive,” summarize Immordino-Yang and Singh.

The computer: the learner’s social partner?

The two colleagues are among the researchers using these new data to propose new ways of designing and improving learning. Immordino-Yang and Singh hypothesize that human beings may not behave all that differently in an e-learning context without human interaction than they do in a traditional classroom. After all, as they summarize, “new advances in social and affective neuroscience are making increasingly clear that humans use subjective, emotional processing to think and to learn.” Why would the learner use a completely different kind of processing in front of their computer, when their thinking and learning mechanisms remain the same and, what’s more, they tend to anthropomorphize their digital tools?

“Many people think of digital learning experiences as nonsocial, as long as the student is interacting with the media on his or her own. Here, we turn the tables and suggest that many people may interact with their digital tools as if they were social partners, even when no other humans are involved. Thinking of digital learning as happening through dynamic, supported social interactions between learners and computers changes the way we design and use digital technologies for learning — and could help shed light on why we become so attached to our devices,” Immordino-Yang and Singh argue.

In fact, neuroscience has uncovered internal mechanisms showing that when we interact with our fellow human beings for learning purposes, we use an empathic process to understand them. “Now, social neuroscience is revealing some of the basic biological mechanisms by which social learning takes place (Frith & Frith, 2007; Mitchell, 2008). According to current evidence, social processing and learning generally involve internalizing one’s own subjective interpretations of other people’s feelings and actions (Uddin, Iacoboni, Lange & Keenan, 2007). We perceive and understand other people’s feelings and actions in relation to our own beliefs and goals, and we vicariously experience these feelings and actions as our own (Immordino-Yang, 2008). Just as affective neuroscientific evidence links our bodies and minds in processes of emotion, social neuroscientific evidence links our own selves to the understanding of other people,” explain the two researchers.

But if the learner retains this empathic process in their interactions with the computer to navigate their learning, how useful can this human frame of analysis be when their counterpart, however much they anthropomorphize it, has very little in common with them, even in the strict sense of information processing? “Human cognition, or the faculties for processing information, applying knowledge, and making decisions, differs in important ways from information implementation and computation by computers. Most importantly, human information processing is driven by subjective and culturally founded values,” say Immordino-Yang and Singh. “Put another way, we humans are capable of both top-down and bottom-up strategies of attending and information processing; our cognition involves decomposing or breaking information into its composite parts, as well as piecing together and integrating information into more complex representations (Immordino-Yang & Fischer, 2009). […] As biological beings, a central part of explaining how we do things lies in explaining why we do them,” they add.

In the age of generative artificial intelligence

Let’s take a closer look at artificial intelligence (AI) in its “generative” form, the form capable of generating content itself, which has impressed as much as it has worried since the release of ChatGPT, the best-known tool powered by this technology, at the end of 2022. Using training data and extremely powerful mathematical models, generative AI can interpret a complex query and statistically predict the best possible response, whether that involves solving a mathematical or scientific problem, coding, or producing text, images, audio or video. For now, AI still lacks some of the skills needed to compete with us on every cognitive terrain. While it cannot yet reason and plan, inhibit its biases, understand how the world works, or match a human or animal on the motor level, some experts firmly believe that these gaps will eventually be filled. The idea of comparing artificial intelligence (a misnomer in many people’s eyes) to human intelligence nonetheless remains a matter of considerable debate.

A few exchanges held as part of the “Intelligence Revolutions” series at the 2023 Rencontres de Pétrarque illustrate the extent to which AI could revolutionize our lives and our view of humanity. During this debate, Yann LeCun, researcher and scientific director for AI at Meta, and one of the fathers of deep learning, the branch of AI behind the field’s recent advances, went so far as to assert that this technology could eventually experience emotions. According to him, the day we have “machines [AI] that manage to plan their actions, imagine the results of the sequences of their actions and that have goals to satisfy, they will inevitably have emotions”. Here is how Mr. LeCun perceives this technology, its evolution and, in parallel, the human brain:

“Yes, these systems do billions and billions of multiplications, additions and comparisons. It’s all very simple in algorithmic terms, if you want to talk about algorithms. What makes them complex is what they are capable of learning. They are indeed trained to make statistical forecasts at the present time. But in a way, the same can be said of our brains. They can be reduced to biochemical operations. So, at a low level, these are relatively simple operations. The brain is a machine; machines are machines. There’s no reason to think that in the more or less near future we won’t have machines that have all the intelligent characteristics of humans or animals. They don’t have them at the moment, it’s true, we mustn’t confuse the issue, we’re still a long way from it, but there’s no doubt that it will happen. No doubt at all.”
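As an aside, the “statistical forecast” LeCun refers to, predicting the most probable continuation given what came before, can be illustrated with a toy example. Below is a minimal sketch in Python; the vocabulary and probabilities are invented for illustration and bear no relation to the billions of learned parameters of a real model:

```python
import random

# Toy bigram "language model": for each context word, a probability
# distribution over possible next tokens. A real generative model learns
# billions of such parameters from training data; these numbers are invented.
NEXT_TOKEN_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "idea": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"barked": 0.7, "slept": 0.3},
}

def predict_next(context: str) -> str:
    """Sample the next token according to the model's probabilities."""
    distribution = NEXT_TOKEN_PROBS[context]
    tokens = list(distribution.keys())
    weights = list(distribution.values())
    return random.choices(tokens, weights=weights)[0]

# Generate a short continuation, one statistically predicted token at a time.
word = "the"
sentence = [word]
while word in NEXT_TOKEN_PROBS:
    word = predict_next(word)
    sentence.append(word)

print(" ".join(sentence))  # e.g. "the cat sat"
```

Whether this kind of statistical machinery could ever amount to understanding, let alone emotion, is precisely what the rest of the debate turns on.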

Anne Alombert, a philosopher and lecturer at the University of Paris 8 who specializes in the anthropological challenges of contemporary technological transformations, proposes a completely different vision, one which, it should be noted, implies nothing of a mystical order:

“I don’t believe the brain is a machine at all. The brain is part of an organism, and an organism is not a machine. A living organism is a whole that cannot be reduced to the sum of its parts. This means that you don’t make a living organism by assembling different pre-existing parts, if you like. The organism is formed by differentiation; in other words, it has nothing to do with a machine. What’s more, the matter, the very materiality of the organism, consists of organic structures. So these are not inorganic or inert structures like what we call “machines” – though in fact the term “machine” isn’t really appropriate anymore, because what we have here are networks; digital technology is all about networks. […] It’s a big debate, but for me what’s really important is not so much the debate as the difference between calculation on the one hand and interpretation on the other. And I believe that interpretation is the hallmark of the living. Not the human, but the living. To live is to interpret in a context, in a situation; it’s to transform oneself by interpreting the situation, and that can’t be mechanized, it can’t be programmed, because it’s a constant invention, if you like. So I believe that interpretation cannot be reduced to calculation.”

Mr. LeCun’s response to Ms. Alombert’s comments leaves no doubt as to the irreconcilable nature of these two positions: “Everything, absolutely everything, can be reduced to calculation. From a theoretical point of view, only two kinds of calculation exist: classical calculation and quantum calculation. There’s no reason to think that the brain uses quantum calculation, so it’s more classical calculation, and theory tells us that we can simulate any generic computer on any other. […] So the debate as to whether the mind, as we understand it, can be simulated no longer exists among scientists.”

Beyond the “calculating machine”… fascinating human complexity

Questioning our human uniqueness, even going so far as to suggest that AI could end up experiencing emotions, Yann LeCun’s vision is troubling, to say the least, and difficult for many of us to conceive. Whether or not the future proves him right about the evolution of artificial intelligence, the dazzling advances of generative AI help explain how such ideas, until recently the stuff of science fiction, can now be entertained at all. It is also important to note that while these advances will improve the adaptive functions of digital learning, allowing it to respond ever more effectively to learners’ individual needs, we still need to keep deciphering how those learners function: living beings, unique, sentient, social, imbued with culture, with a lived experience, a Self and an Ego (as described by social psychology), goals, dreams, and so on. And even if everything were reducible to calculation, let’s appreciate the formidable complexity of the “platform” each human being develops to understand others, and to learn, as described by Immordino-Yang and Singh:

“Humans are born with the propensity to impose order, to classify and organize our environment in accordance with our individual ways of theorizing about and acting in the world. The content of these theories and actions is the result of interaction among biological, social, and cultural life experiences. As children develop, they encounter new experiences that shape and reshape existing neural networks and schemas and impact their cognitive, social, and emotional development. Because of this, the hardwired patterns of neural connectivity that underlie innate functional modules, such as those that facilitate social evaluation, are dynamically sculpted by social and cultural experiences as they are subjectively perceived and emotionally “felt”. In short, our personal experiences through development provide a platform on which to understand and relate to the thoughts and actions of other people.”

As the two researchers explain, the neurobiological dynamics underlying this human “platform” cannot be compared to the purely cognitive information processing that AI currently performs, and it is precisely this difference that must be taken into account to better adapt digital learning to multidimensional human cognition.

“To see what we mean, let us return to the neurobiological evidence presented above concerning the relationship between the body and the mind. If the feeling of the body (or simulated body) during emotion can shape the way we think, which ample evidence suggests that it can, this shaping would happen via the sensing of the body, or via perception. However, such sensations are not of equal importance. Rather, sensations are assigned valence, starting with pleasure and pain and growing from there in complexity. Even the simple visual perception of objects or situations in the environment is understood in terms of its propensity to cause harm or good in relation to the current situation and context. In turn, we respond accordingly to maximize good and avoid harm, as we subjectively perceive and understand the consequences. Depending on the context, these responses can relate to our well-being in a basic survival sense, or in a more evolutionarily evolved, sociocultural sense. Taken together, these appraisals, values, and sensations lead to what we traditionally call cognition. Quite literally, and as the term emotions suggests, we are “moved by” the valences we assign to perceptions (or simulated perceptions), and in this way, our perceptions and simulated perceptions “motivate” us to behave in meaningful ways (Immordino-Yang & Sylvan, 2010). Although a purely cognitive account of information processing describes perfectly the computations that govern artificial intelligence and embodiment (in the form of mobile robots’ behavior), this from our perspective represents a fundamental rift between artificial and biological intelligence that must be dealt with in the design of interfaces that facilitate useful interactions between the two.”

Adapting digital learning to human cognition

Although artificial intelligence can outperform the human brain in the purely cognitive processing of information, it does not yet have the full range of human cognitive skills. This in no way prevents learners from treating their computer or digital learning environment as a potential social partner, one they will need to decipher in order to interact with it effectively and productively. More concretely, to complete an exercise, the learner must understand its purpose, relate that purpose to their own skills and experiences, and then translate those skills into commands the computer can understand. When the computer interface is poorly adapted to human cognition, however, the experience is likely to be frustrating and demotivating, and may even lead to disengagement. Immordino-Yang and Singh describe the complexity of this brain-computer relationship as follows:

“In a traditional classroom, each student brings her unique goals, knowledge, and decisions that have been shaped by her social and cognitive experiences and that she must learn to use empathically to understand the teacher’s actions, whether the teacher is a person or a computer. […] Using computers and other technologies to learn and perform tasks presents the student with the challenge of mentally discerning and reconstructing actions with oftentimes invisible goals and procedures. Not only do these processes depend upon knowledge of how computers work, but they also vary with the student’s subjective, emotional, and personal history and with her present interests and goals. […] Here we suggest that perhaps one of the main difficulties that humans (and especially computer novices) have with computer interfaces is that the humans have trouble anticipating and understanding what the computer will do and why – in effect, because we have never lived as a computer, we have trouble “empathizing” with them and sharing their processing state, the way we would naturally strive to do with another person.”

Could the solution lie in designing computer interfaces that mimic human emotional skills? This is not what the two researchers advocate, any more than exposing learners to the technical underpinnings of how computers operate. The approach Immordino-Yang and Singh favour is to improve interfaces so that they meet the criteria of genuine interaction between social partners. For learners to engage with the computer more fluidly, they argue, the digital learning environment should in particular make its objectives and motivations clear enough for the learner to grasp the purpose of the program.

“For the actions and responses of the digital interface to be perceived as useful and productive, and for novice learners to effectively engage the digital learning environment as a collaborative partner, digital media designers might consider ways to make human-computer exchanges more akin to good social encounters: the goals should be transparent, the computer partner’s actions should be predictable and related to the subjective needs of the human learner, and each partner in the exchange should have an appropriate share of the control.”

This last idea, giving both the learner and the computer an appropriate share of control, is also in line with good social practice. The two researchers also draw on findings showing that university students who have some internal control over the content, context and pace of their learning are more likely to believe in their success and to engage in learning. Immordino-Yang and Singh go even further, suggesting that giving learners some control over important features of their digital learning experience could compensate for a notable gap in this relationship: the computer’s lack of emotion.

“Related to this, because computers do not have emotion, why not find ways that the human user can supply the emotion-relevant features to the human-computer interaction by giving the person some control over the critical aspects of how the interface and environment look, feel, and behave?”
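To make this idea concrete, here is a minimal, purely hypothetical sketch of what “giving the person some control over the critical aspects of how the interface and environment look, feel, and behave” might look like in code. None of this comes from the chapter; the setting names and values are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class LearnerPreferences:
    """Aspects of the environment deliberately left under the learner's control."""
    pace: str = "self-paced"       # "self-paced" or "scheduled"
    theme: str = "light"           # look and feel of the interface
    feedback_style: str = "hints"  # "hints" (guided) or "solutions" (direct)
    show_goal_banner: bool = True  # keep the exercise's purpose visible

def describe_session(prefs: LearnerPreferences, exercise_goal: str) -> str:
    """Build a session header that keeps the program's goal transparent
    and reflects the learner's own choices rather than the machine's."""
    lines = []
    if prefs.show_goal_banner:
        lines.append(f"Goal of this exercise: {exercise_goal}")
    lines.append(f"Pace: {prefs.pace} | Theme: {prefs.theme} "
                 f"| Feedback: {prefs.feedback_style}")
    return "\n".join(lines)

# The learner, not the program, decides how the environment looks and behaves.
prefs = LearnerPreferences(pace="self-paced", feedback_style="hints")
print(describe_session(prefs, "practice solving quadratic equations"))
```

The particular settings matter less than the direction of control: the environment keeps its goals transparent, behaves predictably, and defers decisions about its look and feel to the learner, in line with the criteria quoted above.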

It should be added that, according to the two researchers, fostering a balanced two-way relationship between learner and computer would be far from insignificant for the learner’s self-perception (we can no doubt speak here of that crucial feeling of self-efficacy) as well as for the usefulness they perceive in their collaborative partner.

“Drawing from this, it seems crucial for learning technologies to be designed such that they do not give the students using them a sense of reliance or dependence on the machine, but instead foster a sense of agency that empowers the student to master skills that he could not have managed without computerized assistance. By engaging the student in an interaction rather than in a unidirectional manipulation by one conversational partner or the other (where either the person or the machine drives), students may be more likely to productively interact with the digital learning environment and to use it to facilitate performance.”

**

Based on fascinating discoveries made possible by affective and social neuroscience, Mary Helen Immordino-Yang and Vanessa Singh propose a unique way of approaching the human-computer relationship. It is a timely view, as digital learning establishes itself as a path for the education of the future, for young people and adults alike, and as artificial intelligence, now integrated into this mode of learning, evolves at warp speed. Yet whatever skills AI acquires next, if instructional designers are to create programs best suited to human cognition, programs as stimulating and engaging as possible, it remains essential to deepen our knowledge of the learner.

Thanks to neuroscience, we now have a better idea of the biophysiological mechanisms underlying human cognition; we know, among other things, that our rational thinking is inextricably linked to our emotions, and that social emotions, notably empathy, play a dominant role in learning. By inviting us to consider digital learning as the result of continuous, dynamic social interactions between the learner and the computer, and to improve it so that it more closely resembles good social exchanges, these researchers open up a vast field of possibilities that we now have to explore seriously. In seeking to bridge the gap between expertise in neuroscience and expertise in digital educational design, Mary Helen Immordino-Yang and Vanessa Singh remind us never to lose sight, amid the frenzy of technological development, of the most important factor: the human factor.

Sources:
  • Immordino-Yang, Mary Helen, and Vanessa Singh, “Perspectives from Social and Affective Neuroscience on the Design of Digital Learning Technologies,” in Emotions, Learning, and the Brain: Exploring the Educational Implications of Affective Neuroscience, New York: W. W. Norton & Company, 2016, pp. 181-190.
  • “Qu’est-ce que l’intelligence ?”, episode 1/5 of the series “Les révolutions de l’intelligence,” Les Rencontres de Pétrarque, France Culture, July 10, 2023.
Note: Some quotations have been freely translated.

Author:
Catherine Meilleur

Communication Strategist and Senior Editor @KnowledgeOne. Questioner of questions. Hyperflexible stubborn. Contemplative yogi

Catherine Meilleur has over 15 years of experience in research and writing. Having worked as a journalist and educational designer, she is interested in everything related to learning: from educational psychology to neuroscience, and the latest innovations that can serve learners, such as virtual and augmented reality. She is also passionate about issues related to the future of education at a time when a real revolution is taking place, propelled by digital technology and artificial intelligence.