Intelligence Augmentation and Learning and Development

Artificial Intelligence (AI) – cognitive computing, machine learning, chatbots, etc. – is hot. Everyone’s talking about it, whether hype (Watson does Dylan!), diligent consideration (where does AI fit in our strategy?), or ethical concerns (what about the jobs?). But some folks are already talking post-AI with a spin: it’s not about AI, it’s about IA. IA, Intelligence Augmentation, is an important perspective, particularly for Learning and Development.

Intelligence Augmentation, as the name suggests, is about using technology to work with us. The phrase ‘the whole is greater than the sum of the parts’ is relevant here. To understand this, we need to combine an understanding of what we do (and don’t do) well with what technology (including AI) can and can’t do. With that background, we can thoughtfully examine the opportunities.

Cognitive Limitations

Our brains have led us to our current state of existence (with all the good and bad therein). With language and then representation, we have developed the ability to leapfrog evolution and change our environment with our own thoughts and actions instead of genetic adaptation. We can develop physical augmentation to allow us to transcend our limitations and work at scales unimaginable without such aids. We can travel to uninhabitable realms, redefine the landscape, and communicate across distances.

Our brains are powerful pattern-matchers. We can distinguish things in the world, recognize language patterns, and contemplate complex constructions. These artifacts of our architecture come at a cost, however. The same cognitive system that recognizes patterns and abstracts meaning tends to skate over details, and is limited in the number of things it can keep in mind at any one time. In short, we’re bad at rote memory and complex calculations. We don’t do ‘detail’ well.

In fact, it appears that what we actually do and what we think we do aren’t well correlated. An outcome of our architecture is that our knowledge is compiled away, inaccessible to introspection (data shows that 70% of what experts do is unavailable to their conscious inspection). Instead, we create stories about what we do. We use rubrics and models for learning to help us observe and correct errors, but what our brains actually do isn’t obvious. In one famous example, the important task of ‘chicken sexing’ (determining the sex of newly hatched chicks) has never been successfully taught except by trial and error. We simply can’t articulate the rules that guide successful performance!

Overall, there are specific ways in which our cognitive architecture doesn’t live up to the myth of formal logical thinking. Sensory memory holds information only very briefly, so we can’t process it all before it’s gone; we must attend to the areas we want to comprehend. We have trouble controlling our attention, so we’re susceptible to distractions (think: blinking GIFs at the periphery of a web page we’re trying to read). Our working memory is limited (consider failing to remember all the digits of a phone number we’re trying to dial). We make errors in exact repetition (the kind of errors that decide the outcome of sporting events). And more. There’s clearly opportunity to provide support for our thinking.

Technology

There are actually many technologies we use. In practice, paper and pencils, whiteboards, post-it notes, and even conversation are technologies for learning and doing. Their digital equivalents have some tradeoffs (e.g. they may require power or connectivity), but they also add some wonderful new capabilities, such as persistence, distribution, and more.

In general, digital technologies have some very special properties. They’re general purpose, in the sense that any process you can specify can be built: if you can describe it, you can develop it. This requires understanding the behavior you want well enough to document it exactly, but if you can, you’re assured of reliable repetition without fatigue or angst.
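To make that concrete, here’s a trivial sketch: once a rote rule is specified exactly, the machine repeats it without error or fatigue. The per-diem rule and rate below are invented purely for illustration.

```python
# Once a rote rule is specified exactly, a machine repeats it tirelessly.
# The rule and the rate below are invented for illustration only.

def per_diem(days: int, rate: float = 55.0, travel_fraction: float = 0.75) -> float:
    """First and last days pay a reduced fraction; the rest pay the full rate."""
    if days <= 0:
        return 0.0
    if days == 1:
        return rate * travel_fraction
    full_days = days - 2
    return rate * (full_days + 2 * travel_fraction)

# The 100,000th call behaves exactly like the first.
print(per_diem(5))  # -> 247.5
```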

It turns out, then, that digital technologies in particular are the perfect complement to our cognition. Digital technologies can execute rote steps reliably, time after time. They can also remember arbitrary details, at whatever level and for whatever duration is needed. In short, they do well what we don’t, and vice versa.

AI in particular has been touted as the new panacea. However, a careful examination of what’s real and what’s not identifies particular strengths and weaknesses. Artificial intelligence is either a formal representation of reasoning (symbolic AI) or an approximation of our brain’s operations (e.g. neural networks), and the latter can be mysterious about what it’s actually doing. Both have limitations: they’re only as good as your reasoning engine and the quality of the data you train them on or ask them to process.
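A minimal sketch of that last point, with invented data: a naive nearest-neighbour classifier trained only on seasoned experts will confidently call everyone an expert. The model isn’t broken; the data is.

```python
# 'Only as good as your data': a 1-nearest-neighbour classifier trained on a
# skewed sample. All numbers and labels are invented for illustration.

def nearest_label(x, training):
    """Return the label of the training point closest to x."""
    return min(training, key=lambda point: abs(point[0] - x))[1]

# Training data sampled only from people with 10+ years of experience...
skewed = [(10, "expert"), (12, "expert"), (15, "expert")]

# ...so even a first-year novice gets classified as an expert.
print(nearest_label(1, skewed))  # -> 'expert'
```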

In short, as Leo Cherne has been quoted: “Computers are incredibly fast, accurate and stupid; humans are incredibly slow, inaccurate and brilliant.” What’s important is his extension to this statement: “Together they are powerful beyond imagination.” This is what Intelligence Augmentation is all about.

Augmenting Intellect

What does this mean in practice? We’ve evolved many cognitive supports over the years. Writing was a way to document thinking in exact detail, from sales tallies to ritual steps. Audio and video recordings captured exact happenings, with all the details we can’t remember. Checklists (e.g. preflight inspections), lookup tables (e.g. train schedules), and product offerings (e.g. catalogs) all take the burden off exact and voluminous recall. Whiteboards allowed collaborative editing, and phones allowed remote communication. It’s about annotating the world with information that supplements our own memory.

Digitally, we’ve got more powerful versions of the tools we’ve already developed. We can have electronic and adaptive checklists, decision support tools with mixed-initiative engagement, browseable and searchable databases of information, and more. We’re distributing tasks across the two architectures, cognitive and computational. For example, Foldit is a digital game that lets humans find protein foldings that computers can’t (and computers have found ones that humans have trouble finding). Essentially, we’re supplementing our processing as well.
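As a minimal sketch of an adaptive checklist (all names and steps are hypothetical), consider filtering steps by the performer’s context so only the relevant ones appear:

```python
# An adaptive checklist: steps are filtered by context, so the performer
# sees only what applies. Steps and contexts are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Step:
    text: str
    applies_to: set = field(default_factory=set)  # contexts needing this step

CHECKLIST = [
    Step("Confirm power is disconnected", {"field", "shop"}),
    Step("Record serial number", {"shop"}),
    Step("Photograph the installation site", {"field"}),
]

def steps_for(context: str) -> list:
    """Return only the steps relevant to the current context."""
    return [s.text for s in CHECKLIST if context in s.applies_to]

print(steps_for("field"))
# -> ['Confirm power is disconnected', 'Photograph the installation site']
```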

We can also be contextual: we can show you what other people like you have looked at or enjoyed, by tracking your and their behavior and correlating. We can annotate the world based on when and where you are. Augmented Reality allows us to layer information over the world around us. Similarly, we can scale the world in ways we can’t with our own senses, expanding or shrinking time and space to show micro or macro views of processes and systems.
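The ‘people like you also looked at’ idea reduces, in its simplest form, to counting co-occurrence across behavior histories. Here’s a minimal sketch with invented users and items:

```python
# 'People like you also viewed': count how often items co-occur with a given
# item across user histories, and recommend the most frequent companions.
# Users and items are invented for illustration.

from collections import Counter

histories = {
    "ana":  {"course_a", "job_aid_1", "video_3"},
    "ben":  {"course_a", "video_3"},
    "cara": {"course_a", "job_aid_1"},
    "dee":  {"course_a", "video_3"},
}

def also_viewed(item: str, top_n: int = 2) -> list:
    """Items most often seen alongside `item` in users' histories."""
    counts = Counter()
    for viewed in histories.values():
        if item in viewed:
            counts.update(viewed - {item})
    return [i for i, _ in counts.most_common(top_n)]

print(also_viewed("course_a"))  # -> ['video_3', 'job_aid_1']
```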

Artificial intelligence has carried this further. It can look at data in ways humans can’t (and vice versa), so we can use machine learning to find, essentially by trial and error, correlations we can’t see. If we can specify a behavior, we can make a system that does it reliably and repeatably. We can also make artificial agents that understand natural language and respond in limited domains; they can answer questions on travel, technical troubleshooting, and more.
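A limited-domain agent can be surprisingly simple at its core. As a hedged sketch (real systems use proper natural-language processing; the intents, keywords, and answers here are invented), keyword-based intent matching already illustrates the shape:

```python
# A limited-domain agent via keyword intent matching. Production chatbots use
# real NLP/ML; the intents, keywords, and answers here are invented.

INTENTS = {
    "reset_password": ({"password", "reset", "locked"},
                       "To reset your password, use the self-service portal."),
    "book_travel":    ({"flight", "hotel", "travel"},
                       "You can book travel through the internal travel site."),
}

def respond(utterance: str) -> str:
    """Answer from the intent whose keywords best overlap the utterance."""
    words = set(utterance.lower().split())
    name, (keywords, answer) = max(
        INTENTS.items(), key=lambda kv: len(kv[1][0] & words)
    )
    return answer if keywords & words else "Sorry, I can't help with that."

print(respond("I'm locked out and need a password reset"))
# -> 'To reset your password, use the self-service portal.'
```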

So, what does this mean for L&D?

L&D Implications

The key focus is to design for hybrid systems. Don’t assume all the information has to go in the head, or all in the world. Look at the tasks that must be performed, and figure out what can be in the world and what has to be in the head. Then design the world first, and train around that solution. To get concrete: design your tools, and then train on using them. This means going beyond just courses to a full suite of solutions.

Practically, this means different things for our learners, for our mentors, and for our own role. First, we need to think of people as performers, not just learners. Learning is a continual, ongoing process in this brave new world, and we want to think about augmenting both performing and learning. It also means augmenting our mentors, and shifting their roles from information providers to facilitators. And thus our role becomes one of performance consultants and facilitators too.

How can we support their needs?

If putting information in the head isn’t our only solution, we also need to be looking at performance support. In fact, if an organizational need can be met by a tool instead of a course, it should be. Our learners will still need courses to develop completely new skill sets, but those can be augmented with performance tools, and supported by access to resources rather than having everything provided for them. Chatbots? Adaptive guides? Contextual support?

Our mentors will need support in thinking about tools as well, and in acting as facilitators of learning, not providers. One recognition behind this approach is that information in the head is fallible, so you need either rigorous courses or tools. The latter are to be preferred where they fit, being both easier and typically more effective. And whatever their expertise, domain or learning, mentors will need support for the other half!

This holds true for our own practices. How can we use technology to ensure we look at the broader picture, and be rigorous about design whether it’s courses or job aids? And how can we collaborate and innovate systematically to continuously improve our own work?

This new focus will mean we need to partner differently going forward. We’ll need to be involved in tool design and delivery, which crosses operational business silos and the IT group. Yet this is working in ways aligned with how our brains work. And we should not look to replace people, but instead partner them with smart tools, with people providing the necessary oversight. AI is tremendously dependent on the quality of its dataset, and areas of responsibility increasingly cross domain boundaries, which computers handle badly. Look to integrated human/technology systems.

This isn’t new: it’s part of the shift in L&D that’s increasingly being recognized and heralded. The concept of intelligence augmentation is just one more way to look at how people can work best, and be supported to achieve optimal outcomes. There are arguments from culture and business outcomes as well, but it’s time to put it all together for the organization’s success. And yours.