DR. STELLA LEE – CRYSTAL BALLING WITH LEARNNOVATORS

In this exclusive interview with Learnnovators, Dr. Stella Lee shares her insights on the changing nature of workplace learning in relation to technological innovations such as Artificial Intelligence and robotics.


ABOUT DR. STELLA LEE:

Dr. Stella Lee brings over 20 years of international experience in e-learning, blended, and distance learning strategy, design, development, project management, and implementation. Stella holds a PhD in Computer Science with a focus on adaptive e-learning design, and a post-graduate certificate in adult learning and teaching, both from the University of Hertfordshire in England; a Master’s degree in New Media Communications; and a Bachelor of Fine Arts in Studio Art. She has conducted post-doctoral research on learning analytics with the iCore Research Lab at Athabasca University.

Stella has served as a subject matter expert in evaluating e-learning standards for the United Nations’ International Atomic Energy Agency (IAEA), and has advised on instructional design, e-learning standards and governance, educational technology evaluations, and learning analytics with organizations such as the London Probation Office, the Government of Alberta, Athabasca University, the City of Kitchener, the City of Edmonton, the Safety Codes Council of Alberta, WorkSafeBC, TransLink, University for Industry (UfI) in the UK, the Open University UK, and the University of Calgary. Stella is a startup advisor, an international speaker, and a writer. She currently serves as the learning technology columnist for Training Industry Magazine and is on the executive board of the Learning Development Accelerator. Stella runs a consulting company called Paradox Learning, and is based in Calgary, Canada.

ABOUT THIS INTERVIEW SERIES:

Crystal Balling with Learnnovators is a thought-provoking interview series that attempts to gaze into the future of e-learning. It comprises stimulating discussions with industry experts and product evangelists on emerging trends in the learning landscape.

Join us on this exciting journey as we engage with thought leaders and learning innovators to see what the future of our industry looks like.

THE INTERVIEW:

1. LEARNNOVATORS: As we see, technological innovations such as Artificial Intelligence (with its ‘arsenal’ of machine learning and deep learning) and robotics are set to disrupt industries across the world. As a practitioner with vast experience in this domain, what does the modern workplace learning landscape look like in the age of intelligent machines? How is L&D presently faring in leveraging their power to make learning more impactful and measurable, and thereby help address business priorities? What trends do you see, and what (if anything) needs to change?

DR. STELLA LEE: While none of us can predict how AI will impact our workplace or shift the nature of work, we have already experienced how AI is integrated into many aspects of our lives, from face recognition software to voice-activated conversational agents and recommender systems. In workplace learning, we are seeing more of these applications focused on supporting or improving specific tasks rather than replacing an entire job function.

Broadly speaking, the most commonly used AI applications for learning are:

  • Coaching and knowledge sharing – typically in the form of smart chatbots. A chatbot, by definition, is a computer program designed to simulate conversations with humans through websites, mobile apps, or wearable devices, anywhere, anytime. AI technologies are used to enable chatbots to speak naturally (i.e., human-like) and to pick up contextual information. When designed thoughtfully, chatbots can take over the task of providing foundational concepts, “listening” to any concerns from users, and answering commonly asked questions. This can free up time for L&D professionals so we can focus on more in-depth follow-up conversations and identify where further interventions are needed.
  • Content curation and recommendation – a lot of the content we consume is personalized, and many learning platforms now offer the ability to select, organize, and recommend material based on particular attributes of a learner or a group of learners (a minimal sketch of this idea follows this list). While content curation can be done manually, it is a time-consuming and tedious process. Using AI algorithms, relevant content can be sourced, processed, and combined in many different ways for different learners to provide a personalized experience. I foresee that as content curation and recommendation become more commonplace and automated, L&D will move to a quality control and evaluation role, overseeing the recommendations and adjusting content as needed.
  • Adaptive learning – typically in the form of dynamically (in real time) generating content, feedback, and assessment as each learner progresses through his/her learning. Adaptive learning technology could potentially consider many factors such as a learner’s prior knowledge, content preferences, performance (what the learner is currently doing vs. what he/she has done in the past), job roles, and other data points to determine what to adapt and change for each learner (the second sketch after this list illustrates a simple adaptive rule). As you can appreciate, this could get complex fast, and I think this technology really requires L&D to take an active role in ensuring the factors considered are pedagogically grounded and that we can audit and validate them regularly.
  • Learning analytics – last but not least, AI technologies are also used to analyze patterns, create models, and predict learner behaviors and their performance outcomes. The goal of using learning analytics is to help us understand and optimize learning, and to provide early interventions to learners. It can also be used to identify where to increase learning support and resources. Many LMSs now have some form of learning analytics built in, and L&D is increasingly being asked to be the interpreter of learning data and learning trends.
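
To make the curation and recommendation idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the catalogue, its tags, and the learner’s interest profile are invented for illustration, and real platforms use far richer signals and models than simple tag overlap.

```python
# Hypothetical content catalogue: title -> topic tags (invented for illustration).
CATALOGUE = {
    "Intro to Data Privacy":      {"privacy", "compliance"},
    "Advanced Excel Dashboards":  {"excel", "analytics"},
    "Coaching Conversations 101": {"coaching", "communication"},
    "Learning Analytics Basics":  {"analytics", "data"},
}

def jaccard(a: set, b: set) -> float:
    """Overlap between two tag sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(interests: set, top_n: int = 2) -> list:
    """Rank catalogue items by tag overlap with the learner's interest profile."""
    scored = sorted(((jaccard(interests, tags), title)
                     for title, tags in CATALOGUE.items()), reverse=True)
    return [title for score, title in scored[:top_n] if score > 0]

# Example: a learner whose history suggests an interest in data and analytics.
print(recommend({"analytics", "data"}))
# -> ['Learning Analytics Basics', 'Advanced Excel Dashboards']
```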
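
Similarly, a simple adaptive rule can be sketched as follows. This is only a toy illustration under assumed names and thresholds; a production adaptive engine would weigh many more factors, as described in the list above.

```python
# Hypothetical item bank, grouped by difficulty level (names are invented).
ITEM_BANK = {
    "easy":   ["recap quiz", "worked example"],
    "medium": ["scenario exercise", "guided practice"],
    "hard":   ["open-ended case study", "peer teaching task"],
}

def next_activity(recent_scores: list) -> str:
    """Pick the next activity level from the average of the last three scores (0-100)."""
    window = recent_scores[-3:]
    average = sum(window) / len(window)
    if average < 60:
        level = "easy"      # struggling: consolidate fundamentals first
    elif average < 85:
        level = "medium"    # progressing: stretch a little further
    else:
        level = "hard"      # mastering: move to transfer-level tasks
    return ITEM_BANK[level][0]

print(next_activity([70, 80, 92]))  # average ~80.7 -> 'scenario exercise'
```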

With various AI applications and tools flooding the market, I think there is some confusion as to which tool does what – and then there is the difficult task of comparing and evaluating them. Many L&D professionals are becoming more tech and business savvy in engaging with AI vendors, and I would like to see us acting as a bridge between IT and business units in communicating and advocating the use of such tools for learning and performance support purposes. Within many organizations, there appear to be pockets of innovation; however, they tend to lack a unified approach. I think there is an opportunity here for L&D to provide some strategic direction to help manage the change, and to connect technological investment with upskilling and reskilling the workforce.

Finally, while AI technology is advancing rapidly, we are far from (if ever) computers achieving general human intelligence across domains and contexts. In the foreseeable future, I believe the modern workplace will be us working in tandem with intelligent technologies and making decisions about the best uses of AI for workplace learning – and I hope with a lot more diversity than we currently have: more women, more visible minorities, more global perspectives, and more people with different subject expertise and from across different job functions.

2. LEARNNOVATORS: To quote Donald H Taylor from our recent interview with him, “It would be tempting to say that L&D professionals should get skilled up in AI, but I wouldn’t suggest this. It’s a deep, complex field and it makes more sense to recruit specialists with AI skills, or partner with them, or work with the increasing range of AI tools for L&D.” We too believe that we shouldn’t really be worried about the implementation of AI; rather, we should learn how best we can leverage its tremendous power to make learning a much better and more natural experience. What are your thoughts on this topic? What skills do we need to develop for a better understanding of how AI can help us augment learning?

DR. STELLA LEE: While I agree that we don’t all need to be data scientists (unless you are greatly interested in the subject, in which case, by all means!), I think at a baseline level, we ought to have a fundamental understanding of AI and data. As more and more powerful AI technologies are embedded into our daily lives, the benefits of being AI literate are manifold: individuals can be more informed about AI-powered products and ask relevant questions; businesses can take advantage of AI and be more competitive; and governments can make better decisions and policies on AI regulation.

For L&D professionals, getting skilled up in AI could encompass a range of knowledge and skills, including user experience design, policy and regulations, auditing, testing, and evaluation of systems, cross-cultural learning design, language translation, and applying ethical considerations. Many current AI applications that help augment learning are still more focused on the technical aspects of incorporating AI than on the pedagogical models that underpin their use. What is more, once these AI applications are implemented, users often lack a shared standard for how to use them, and companies often lack guidelines, best practices, and policies. There is also very little research on how AI can support and impact learning, particularly in the workplace context. These are but a few of the areas where L&D can take an active role in leading.

3. LEARNNOVATORS: For people to learn in the flow of work, you feel we need to ‘leverage learning technologies that support personalized and adaptive learning, and build on the learners’ existing technical, transferable skills and experiences.’ For this, you propose leveraging existing digital platforms and apps that people already have installed on their phones. One interesting example you give is using instant messaging apps to quickly guide people through a new process or a challenging set of tasks. From your experience, how open is L&D to this idea of using tools not designed specifically for learning? What would you suggest for L&D to make the needed shift in mindset for this switch from traditional tools and methods?

DR. STELLA LEE: Technology will only become more interconnected with the advent of the Internet of Things (IoT), so leveraging existing digital platforms and apps is already happening in our daily lives across different domains. For example, voice assistants such as Amazon’s Alexa and Google Home have made their way into our homes so we can personalize our music, set reminders, and control our environment. These same assistants are also beginning to be integrated into workplaces to manage some of the more routine tasks: scheduling events, taking phone calls, and updating calendars. In some schools, voice assistants are now being used as learning aids, engaging students in dialogue on specific subjects and topics. The truth is, many of us are already using everyday technology to learn, even though these tools are not explicitly designed for education. In the annual workplace learning survey conducted by Jane Hart, the top tools for learning have consistently been tools that are not designed specifically for learning. Yet they are platforms that are familiar to us, easy to use, and usually have a great user experience and a relatively flat learning curve.

To answer your question about how open L&D is to the idea of using these tools, I don’t necessarily think that it is a matter of openness. Rather, I suspect that in workplace learning, we have been so pigeonholed into a one-system-fits-all mindset, partly because the learning management system is an expensive investment (therefore all learning needs to go into the system to justify the cost), and partly because the platform vendors are pitching it as the be-all-and-end-all for digital learning. So, for us to get out of this mindset, we first need to think outside of the LMS, and to think of learning not as a course (or even worse, as a SCORM package), but rather as many moments and nuggets of experience, both informal and formal. We ought to figure out what we want people to learn, and then go about selecting and trying out the tools best suited for the types of learning in a specific context.

4. LEARNNOVATORS: Traditional e-learning has been about command and control, thanks to its tight design. However, we believe that new technologies such as chatbots can help a course break free of its closed, tight navigation design. Our experience in this regard has also shown that chatbots can help us ‘pull’ what we need (‘just enough’) when we need it from a vast repository of information in the most natural way – by chatting. What, according to you, is the potential of conversation-based learning for bringing learning to where employees really are (into the flow of their work)? How do you think chatbots will evolve to help us make learning both more humanized and more contextualized?

DR. STELLA LEE: I remember you shared Project Disha with me a while back and I enjoyed exploring the potential of how a chatbot can be embedded into a course, so thank you for that!

While chatbots are a promising technology for education, conversation-based learning (or dialogic education) is actually one of the oldest pedagogical approaches. Its roots can be traced back to many ancient educational traditions, including the famous Socratic Method, a form of cooperative argumentative dialogue. In the East, before roughly the 3rd century BC, the teaching of Buddhism relied heavily on monks who would memorize the teachings, relate them to others orally, and train learners in verbal thinking, reasoning, and evaluation of the material. So as you can see, the conversation-based learning approach is nothing new, but I am glad to see how it has come full circle with the innovative use of technology nowadays.

By definition, chatbots are digital systems that we can interact with through text or voice commands using human language (as opposed to computer language). These chatbots can be integrated into a company’s intranet, Learning Management Systems (LMS), smartphones, stand-alone devices, and even everyday appliances (it could even be the photocopier in your office!).

In terms of what chatbots can do for workplace learning, let’s think about what employees need in the flow of their work. Depending largely on their roles and responsibilities, employees might need to access information quickly (company policies, government mandates, internal forms, etc.), ask questions, fact-check, get feedback, brainstorm ideas, translate languages, analyze options, get recommendations, and identify people who can help them (mentors, coaches, subject matter experts, etc.). These are some of the functions that chatbots can potentially support without employees having to leave their work areas or interrupt their workflow.
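
As a minimal illustration of the information look-up function described above, here is a sketch of the retrieval idea behind an FAQ chatbot. The knowledge base and the keyword-overlap matching are hypothetical stand-ins; production chatbots use trained language models rather than simple word overlap.

```python
# Hypothetical FAQ knowledge base: known question -> stored answer.
FAQ = {
    "how do i submit a travel expense claim":
        "Use the expense form on the intranet, then send it to your manager.",
    "where can i find the remote work policy":
        "It is in the HR policy library, under Flexible Work.",
    "who approves my professional development budget":
        "Your direct manager approves requests up to the annual limit.",
}

def tokenize(text: str) -> set:
    """Lowercase a sentence, drop question marks, and split it into a set of words."""
    return set(text.lower().replace("?", "").split())

def answer(user_question: str) -> str:
    """Return the stored answer whose question shares the most words with the query."""
    query = tokenize(user_question)
    best = max(FAQ, key=lambda q: len(tokenize(q) & query))
    if not tokenize(best) & query:
        return "Sorry, I don't know that one yet. Let me find a person who does."
    return FAQ[best]

print(answer("How do I claim a travel expense?"))
# -> "Use the expense form on the intranet, then send it to your manager."
```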

According to one research report, pedagogical chatbots have three main roles: learning, assisting, and mentoring. In the learning role, chatbots can support employees by providing continuous feedback and helping them improve their skills; in the assisting role, chatbots can simplify employees’ lives by looking up relevant information; and in the mentoring role, chatbots can support self-regulated learning and meta-cognitive learning.

As to the future of chatbots, the technology is nowhere near the point where it can replace humans for contextual, nuanced, adaptive, and adaptable conversational learning. As the technology matures, I see more opportunities for chatbots to cater to personalized learning: remembering your previous choices and tasks, offering options, and scaffolding knowledge and skills as you become more competent. Another area I hope would humanize the experience is for chatbots to have cross-cultural awareness, in terms of the cultures of other countries as well as the culture within a particular organization.

5. LEARNNOVATORS: According to you, learning analytics is going to completely change the way we do learning. You say, “One in ten organizations don’t make use of the data they collect.” You attribute this to L&D not knowing where to start when it comes to analyzing learning (big) data. What, according to you, will L&D miss out on by not using learning analytics? How do you think Redefining Workplace Learning Analytics, your LinkedIn Learning course, will help L&D really get started? What is the current landscape of learning analytics? What is the future? What would be your message to learning professionals, especially those who are not well-versed in learning analytics, about leveraging data in their learning and performance support solutions?

DR. STELLA LEE: Actually, it is more complex than that. Lacking the know-how is only one of the challenges in applying learning analytics. In fact, there are many others – the perceived role of L&D (often seen as not IT-related, or non-technical in nature), the lack of access to data because other departments act as gatekeepers, siloed work structures, management unwilling to support and allocate resources, and general workload issues. So there are quite a few barriers to getting started!

Despite these challenges, I am a big proponent of learning analytics, since it helps us make better evidence-based decisions and provide relevant interventions in education. It is with this motivation that I share my knowledge and experience with others, including through my course on LinkedIn Learning. I hope it will give people a baseline understanding and a general awareness of the topic, enough that they want to explore further, and that they will be sufficiently inspired to find ways to overcome the barriers.

By definition, learning analytics is “the measurement, collection, analysis, and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs”. It is a broad and evolving field, encompassing a range of qualitative and quantitative measures such as course feedback, log files from various learning systems, content navigation patterns, bounce rates, learner progression rates, and other learner activity data. At work, we already deal with an increasing number of technologies, and collectively, these technologies generate a large and growing amount of data. With this data at our disposal, there is an opportunity for us to collect, interpret, and gain an understanding of how we can better support our learners and provide a better learning experience.

As learning professionals, we should strive to find out whether the learning we designed and delivered has any impact, where the areas of improvement are, and how to provide actionable feedback. These are but a few key reasons why we ought to make use of learning analytics. What makes learning analytics even more compelling now is that lean operations will be the norm for many companies, and L&D departments will need to justify expenditures and optimize resources even more. Learning analytics can help you decide whether you need to redesign training material based on data about how learners interact with it, renew licenses for curated learning content based on usage patterns, or introduce additional L&D support such as coaching and mentoring based on sentiment analysis.

In terms of the current landscape and the future of learning analytics, its importance will continue to rise. Over the past year and a half, due to the pandemic, many organizations have unexpectedly pivoted their training and performance support online, and many new types of learning platforms and applications have emerged in response to market demand. With the increase in e-learning activities come large repositories of learning-related data. Many companies have already seen the benefits and promise of learning analytics in supporting decision-making, detecting patterns, and making predictions. We are also seeing more and more educational technology vendors starting to include elements of learning analytics in their products, such as data visualization tools, learner dashboards, and content popularity indicators.

Moving forward, I would encourage us to think more broadly about learning analytics and its potential applications. Think about what kinds of data and insights we need to better support collaborative learning, self-regulated learning, on-the-job performance, and employee engagement. Another area we need to consider is ethics and privacy within learning analytics. The protection of learners’ personal data and privacy should always come first. We also need to be held accountable for how we interpret the findings (don’t confuse correlation with causation), check our own biases, and check whether there are biases in the data samples.

For those who are just starting with learning analytics, I would suggest that you begin by getting familiar with data. Identify and examine the types of data available at your organization, especially data collected from learning activities and on-the-job performance. Begin with the end in mind – what questions do you want answered? Can the data help answer these questions? Then you can plan how you will go about collecting the data, what tools you will use to analyze it, and how you will interpret and communicate your findings. Keep in mind that having data does not mean having the answers – you need to make the connections and interpretations. Data doesn’t speak for itself. We need to create meaning and tell stories with it.
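
As a minimal illustration of “begin with the end in mind”, here is a sketch that answers one concrete question (what share of learners who start a module also complete it?) from a hypothetical event log. The field names and records are invented; real LMS exports will differ.

```python
from collections import defaultdict

# Hypothetical (learner_id, module_id, event) records, as a very simplified
# stand-in for what a learning system might export.
events = [
    ("u1", "privacy-101", "started"), ("u1", "privacy-101", "completed"),
    ("u2", "privacy-101", "started"),
    ("u3", "privacy-101", "started"), ("u3", "privacy-101", "completed"),
    ("u1", "excel-201", "started"),
]

def completion_rates(log):
    """Completion rate per module: unique completers / unique starters."""
    starts, completions = defaultdict(set), defaultdict(set)
    for learner, module, event in log:
        (completions if event == "completed" else starts)[module].add(learner)
    return {module: len(completions[module]) / len(starts[module])
            for module in starts}

print(completion_rates(events))  # {'privacy-101': ~0.67, 'excel-201': 0.0}
```

A low rate does not by itself say why learners drop off; it points at where to ask follow-up questions, which is the interpretation step described above.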

6. LEARNNOVATORS: Though the pros of AI outweigh the cons, one of the key challenges we face is algorithmic bias. To quote Shalini Kantayya, the director of Coded Bias – a film that explores how bias gets encoded in the algorithms that decide the destinies of millions of people, “…in the realm of artificial intelligence, there has always been a blurring between what’s real and what’s imagined.” According to many reports, algorithmic bias has started getting into our education systems too. In this context, what are some of the key issues we need to consider while adopting AI for learning and development? What are your recommendations for ensuring the ethical development of learning algorithms without biases? In short, what would be your message for imagining a more humane use of AI in learning?

DR. STELLA LEE: Love the film Coded Bias!

Indeed, algorithmic bias has started getting into our education systems, as well as other systems. In fact, algorithmic bias is only one of many biases we face when adopting AI for L&D. First, we need to deal with bias in the data itself. As you know, for AI to work effectively, we need to feed the system a lot of data. For example, for recruitment software to analyze and suggest the “best fit” applicants for a certain job, the system might use attributes of past candidates as key indicators of success. In the case of executive-level professionals, this historical dataset likely consists predominantly of white middle-aged males. If we are unaware of this bias when providing that dataset to the system, the result will skew the system’s preference toward this demographic group.

Once we deal with bias in the input data, we then need to pay attention to algorithmic bias. An algorithm is simply a set of rules developed by people (usually computer programmers) for computers to implement. Like anything created by people, algorithms inadvertently pick up the ideologies, assumptions, cultural beliefs, and unconscious biases of their creators and embed them into the systems. For example, Amazon had an AI hiring tool whose algorithm developed a preference for words that are more commonly used by male applicants. In some extreme cases, the system penalized women simply for having the word “women” in their resumes (as in “attended women’s college”).
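
As a minimal illustration of the data bias issue described above, here is a sketch of one simple check: comparing outcome rates across groups in a hypothetical historical dataset before using it for training. The records are invented, and a skewed ratio is a warning sign to investigate, not a verdict.

```python
# Hypothetical historical (group, was_hired) records, invented for illustration.
records = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", True), ("group_b", False), ("group_b", False),
]

def selection_rates(data):
    """Share of positive outcomes per group."""
    totals, positives = {}, {}
    for group, hired in data:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + int(hired)
    return {group: positives[group] / totals[group] for group in totals}

rates = selection_rates(records)
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}

# The "four-fifths rule" heuristic flags a disparity when one group's rate
# falls below 80% of the highest group's rate.
print(min(rates.values()) / max(rates.values()) < 0.8)  # True -> investigate
```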

As you can see, the consequences are vast when it comes to bias in AI. For me, some of the key questions to ask when implementing AI for education are:

  1. Are we able to look under the hood and audit the systems? You might have heard of the term “algorithm black box”. It refers to the general inability of people to see inside a system and understand how it arrives at a decision (e.g., when the system makes a prediction about how well I will perform on a certain task at work, what assumptions is it making? What inferences about users are drawn from data without the users knowing about or allowing these inferences?)
  2. Are we stifling learner agency? By suggesting content to learners, learning content recommender systems are shaping and “nudging” the learning direction in a particular way. It potentially runs the risk of being too prescriptive and taking some of the agency away from learners and diminishing serendipitous learning opportunities.
  3. Does it violate user privacy? Data can be collected or shared without the consent of the learners; and once data is stored in a system, it can also be leaked or become de-anonymized.
  4. Do we have policies in place to address any ethical concerns? Does the company you work for have any kinds of AI ethics governance in place? What are the guiding principles to ensure the effective and ethical use of AI for workplace learning and for performance support?
  5. How knowledgeable are my team and the stakeholder groups who evaluate, purchase, and implement these AI systems? I think it is important to ensure that the team and stakeholders you work closely with are data and AI literate. They need to be able to critically evaluate and ask informed questions, and to look at the impact of AI systems from a range of perspectives.

As you can appreciate, there are really no easy solutions for ensuring the ethical development of learning algorithms without bias – the technology evolves, and our understanding of it changes too. What I am happy to see is increased awareness and collective effort across disciplines to provide some solutions. No one solution will adequately address this problem, but collectively, I believe the solutions listed below can help safeguard against machine bias and ensure a more humane use of AI in learning (and in all other fields).

  1. Explainable AI – This is an approach to counteract the algorithm black box. Explainable AI or XAI programs aim to enable users to understand and provide input on the decision-making process in order to improve algorithmic accountability (a small illustration of one such technique follows this list). In the case of dynamically generated learning paths, it would be helpful to have a learning platform that is explicit about the decisions it makes to recommend or not recommend certain learning paths or options over others, and for learners, course designers, and instructors to review and update them as they see fit.
  2. Privacy by Design – Essentially, Privacy by Design advocates that organizations consider privacy at the initial design stages and throughout the complete development process of new products or services that involve handling personal data, rather than as a bolt-on fix at the end of the implementation cycle.
  3. Governance and Policy Setting – Some form of oversight on the ethical use of AI is needed to safeguard transparent and auditable use of AI systems. For example, the Algorithmic Impact Assessment (AIA) tool could be used to determine the impact level of an automated decision system.
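
As a small illustration of the explainability idea, here is a sketch of permutation importance: shuffle one input feature and measure how much a model’s accuracy drops. It is just one tool in the XAI toolbox, shown here on an invented toy model, not a method prescribed in the interview.

```python
import random

def accuracy(model, X, y):
    """Fraction of rows where the model's prediction matches the label."""
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature_idx, trials=20):
    """Average accuracy drop when the given feature column is shuffled."""
    base = accuracy(model, X, y)
    drops = []
    for _ in range(trials):
        column = [row[feature_idx] for row in X]
        random.shuffle(column)
        shuffled = [row[:feature_idx] + (value,) + row[feature_idx + 1:]
                    for row, value in zip(X, column)]
        drops.append(base - accuracy(model, shuffled, y))
    return sum(drops) / trials

# Toy model: predicts course completion from (hours_studied, shoe_size).
# Only the first feature should matter, and the scores should reflect that.
X = [(hours, size) for hours in range(10) for size in (6, 9)]
y = [hours >= 5 for hours, _ in X]
model = lambda row: row[0] >= 5

print(permutation_importance(model, X, y, 0))  # large drop -> important feature
print(permutation_importance(model, X, y, 1))  # ~0 -> irrelevant feature
```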

To sum up, I would encourage each of us to take an active role in educating ourselves about the issues concerning bias and privacy, engaging with the decision-making process by interacting with AI vendors, asking questions about how data is collected and used and what assumptions are being made about learners and users, and demanding that our organizations put regulatory frameworks in place.

7. LEARNNOVATORS: It is great to hear you say, “With some thoughtful consideration, we can be leaders in impacting change for a better learning experience”. It is exciting to see you on this inspiring journey to help drive change in the way learning is designed for people at work. Like you, we too are excited to visualize the future of learning; it looks very bright. As an L&D futurist, what are the trends that will shape the future of workplace learning in 2021 and beyond? How, according to you, will L&D evolve to meet this future? And what is your vision for the learning community in the context of AI?

DR. STELLA LEE: There is really no easy answer to that, and there are many trends that could potentially shape the future of workplace learning. The way I see it, L&D has always had evolving roles (at least for the past 20 years or so), and technology has increasingly been one of the key driving forces behind that evolution.

To the best of my ability, here is a combination of trends and hopes that I think will shape L&D and our workplaces:

  • L&D will have more specialized roles – I think it is fair to say that we have been asked to wear more and more hats these days: instructional designer, online facilitator, LMS administrator, learning data interpreter, external vendor manager, video and multimedia designer, etc. Looking ahead, I would expect (and hope!) that there are opportunities for L&D to specialize in a niche area without spreading ourselves too thin. This will be particularly true for larger companies and government organizations, where it is often easier to develop in-house capacity and have more people in L&D. Getting back to my earlier statement that not everyone needs to be a data scientist or AI expert, it certainly would help to have one member of the L&D team able to work with AI learning application vendors and ask informed questions about how AI is being used to support learning.
  • Niche expertise will be outsourced – as workplace learning evolves and emerging technologies come out faster than we can keep up, I anticipate that more organizations will need to outsource niche learning design, particularly in VR/AR/XR, and anything that has a relatively steep learning curve or requires testing out a new technology before investing internally.
  • Learning technology systems will need to be more integrated, with a better user experience – maybe this is wishful thinking, but the way we have been using traditional, siloed learning systems designed for tracking packaged online courses needs to become a thing of the past. I believe that L&D teams will have a closer working relationship with IT, ensuring that any evaluation and implementation of learning technology is learner-centered and educationally driven rather than technologically driven.
  • Learning will be more personalized – as more and more technologies and tools make use of AI to make learning recommendations and adjustments, we will have more data to inform us about how we can best support our employees, and about how to use personalization to empower them rather than as a surveillance tool. L&D will evolve to play a key role in safeguarding learner privacy and advocating the ethical use of technology.

LEARNNOVATORS: Before we sign off, we thank you so much for your time today, Dr. Stella. We’ve had an amazing time reading your insights with many valuable takeaways. We’ll take these learnings to foster our commitment to practice and promote continuous learning and innovation at work. Thank you!
