AI Literacy and Data Ethics with Dr. Stella Lee | Podcast Ep. 5

Laurel Buckley

In this episode of the L&D Explorers Podcast, we had the pleasure of hosting Dr. Stella Lee, a well-known digital learning and ethics expert.

Dr. Lee is the Chief Learning Strategist and Director of Paradox Learning, where she brings 20+ years of experience in AI and learning analytics to the firm's elearning consultancy.

Listen to this week’s captivating discussion on the importance of understanding AI's broader implications for learning and ethics, and the challenges that come with integrating AI. 

Key Takeaways

1. Practical AI literacy 

We need to go beyond the conversation or the effort in trying out the tools. Right? Like learning how to use it is one thing, and there's no going around it. We need to try them. We need to have first-hand experience to understand how to use them.

(Timestamp 2:53 - 3:08)

Actionable Insight: Implement educational programs that cultivate hands-on AI literacy among employees. Go beyond technical functionality to foster a deeper understanding of how AI tools are applied and where their limitations lie.


2. Measure the impact of AI tools 

Something we tend to forget is measuring the impact, measuring the outcomes, understanding what works, what doesn't work, and what more needs to be done.

(Timestamp 10:31 - 10:44)

Actionable Insight: Establish metrics to understand the impact of AI tools on productivity, user satisfaction, and ethical considerations. Then use these insights to adjust strategies and training programs.

3. Phased AI integration 

Gradually introduce AI instead of just one big implementation of big tech. Again, pilot tests, introduce it gradually, iterate, get feedback, and adjust.

(Timestamp 18:46 - 19:02)

Actionable Insight: Integrate AI in phases to allow time for adjustment. This helps manage the change more effectively, reduces resistance, and makes for a smoother transition. Also, remember that feedback cycles and adjustments based on employee input are crucial.

We hope you enjoyed this episode of our L&D Explorers Podcast! Subscribe on YouTube, Podbean, or Spotify so you don’t miss out on the next episode.


Transcript:
 

Dan Gorgone: Welcome back to the L&D Explorers Podcast from GoSkills.

On today's episode, we're talking with Dr. Stella Lee, the founder of Paradox Learning and an expert in digital learning and AI strategy. Our topic today is AI literacy and data ethics.

We'll discuss how L&D teams can better understand AI and recommend steps for adoption and communication. And we'll also talk about data ethics and building a culture of responsible AI use. I'm Dan Gorgone, course producer for GoSkills. I hope you enjoy this discussion.

Welcome back to the L&D Explorers Podcast. We are very happy to have Dr. Stella Lee joining us, the director of Paradox Learning. Stella, thanks for joining us today.

 

Stella Lee: Thanks, Dan.

 

Dan Gorgone: We're going to be talking about AI literacy in the workplace. With AI popping up in seemingly every tool, every platform, and every app, it's becoming something we all need more baseline knowledge about. That includes us as L&D professionals and also all the employees in our organizations, people who may be tempted to use these tools for work purposes, or who may already be using them without our knowledge.

So it's important for us to ensure that they are using these tools responsibly and ethically. So we're going to be talking about some of these things on this episode today. But let's start with AI literacy. Can you explain, in layman's terms, terms like AI literacy and data ethics? What do those really mean? And what are some of the misconceptions that you come across?

 

Stella Lee: Even though the term AI seems to have popped up and taken the spotlight for the past year and a half, two years, it's actually been around for 70 years, at least. So the definition of AI is just constantly evolving.

And with AI literacy, a lot of the focus in the past two years, with people really reacting to it, is, oh, wow, there's ChatGPT, there's Copilot, there's Llama and Claude, and all these generative AI models that we can use. And so a lot of the focus is on the tools themselves.

You hear about that every other day. When people talk about AI, it's usually referring to a specific tool or a set of tools, and rightly so, because it's so new and people are trying to figure out how to use them. But AI literacy, to me, seeing the way things currently are, means we need to go beyond the conversation or the effort in trying out the tools. Right? Like learning how to use it is one thing, and there's no going around it. We need to try them. We need to have first-hand experience to understand how to use them.

But at the same time, recognizing that's not sufficient, and not just for L&D. I think for everybody. We need to know, like, what are these tools good for? There are certain things, what I call affordances, that these tools are naturally good at, certain results they naturally produce. But there are also limitations. And talking about misconceptions, I really hate the term AI, by the way.

Artificial intelligence gives people this misconception that this is a being that's almost God-like, that can do anything, but it's really just a collection of technologies, right? And I think we need to understand, like, what are the limitations of that? What are some challenges? What are some, like you mentioned, ethical issues?

So AI literacy is for us to very intentionally cultivate that knowledge, that skill in interacting with it, in using it, in advocating around its uses and misuses, and looking at it through a critical lens, but also a diverse lens, from different perspectives.

 

Dan Gorgone: AI is a complex issue right now because in this generation of tools, so many more people have access to AI-powered technology than ever before. And like we were saying in the intro, AI tools have seeped into each and every app, and it's in Microsoft Office and it's in Google and it's on your phone and it's all over the place. So the opportunity to use these tools has greatly increased.

But as you were saying, when it comes to work-related, professional use, there needs to be some kind of training, perhaps some kind of regulation, and certainly some kind of communication. Depending on the company and the industry, of course, and certainly depending on who your customers are, perhaps there need to be some policies in place, because the privacy issues, the legal issues, and the data issues start to get into the data ethics part of it.

How can companies properly address these concerns with all of these tools out there? What are some things that they can do and maybe what are some examples of things that companies have done to address it?

 

Stella Lee: Yeah, just to get back to your earlier point about AI literacy, my intention is really to take the focus away from just the technical component of it, because I think so much focus is on the technology. Right?

The technical aspect. But I think we need to look at the bigger picture.

Like you're talking about the ethical component, the policy, the regulations and compliance, and also, from a societal perspective, how is it impacting us as a society? How is it impacting us as individuals? And how is it impacting the way we work, the way we interact with each other? So that's the whole idea behind AI literacy.

So when it comes to how we apply that, or how we actualize that at an organizational level, I think the first thing is, well, there really needs to be a commitment to do it.

Coming from the top, I think that visibility needs to happen before anything else. You can say you're committed to it, but if there's no action to follow through, to show that you are invested in developing AI literacy and closing those AI skill gaps, nothing is going to happen.

So I think the first thing is the top-down commitment, and being able to communicate that commitment all the way through, at every level. So that's the first thing.

And then secondly, it's to raise that awareness. How are you going to communicate that? And do you have a change management plan in place to help people? Because cultural change, I think that's huge. And these are the more intangible pieces that sometimes we forget and sometimes we don't think about ahead of time. The change management or communication plan sometimes comes at the end, and it doesn't work. Right? Like we were chatting about before, people don't want to change unless you give them a good reason. Right? We need to understand it, we need to own it. We need to understand that we have a say. We are part of the process. We're being consulted. So I think that communication, transparency, and awareness need to happen as well. And I've seen that being done in some organizations, and in some, not so much. But I think we're still early. We're still early in this stage.

I think we spent most of last year reacting to AI. And this year, I think there's a little bit more acting; there are more proactive approaches this year. So it's still early. I think, of course, after you communicate and show your commitment, it's really about what the different interventions are, right? Like, are they workshops? Are they in the form of a resource hub? Are they in the form of more formalized training? Are they in the form of more informal and social learning?

You know, how do you foster that?

And also, very importantly, are we going to pilot test any of the tools that we've been talking about? It's good to get people up to a certain base level of knowledge and skills, but ultimately we need to test the tools in the place where people work, for their specific job functions, so people can see, okay, this tool, this is how it supports my work, my workflow, my daily tasks, but also, how do I work with it? How can I collaborate with AI? Or there are certain tasks, perhaps, I don't have to do anymore. Right? They could be automated. So we need to understand that in the place where we work, at the task level. And so I think piloting that is the next step after the training has been implemented.

And of course, something very close to my heart that I think we tend to forget is measuring the impact, measuring the outcomes, understanding what works, what doesn't work, and what more needs to be done. So these are some of the steps that I think need to be in place before we can think about how we're going to use AI better and more effectively.

 

Dan Gorgone: That's great that you included that at the end, because so many of us just want to be like, I'm sure it's fine, we don't need to look at the dashboards. It felt right. It felt good, everyone, it felt good. All right, now this is great, because I was going to follow up with how you can deal with and address the natural resistance some people are going to have to AI, to the adoption of it, to just the general kind of ethics behind it, because AI certainly has brought with it a lot of controversy.

Now, I just want to summarize what you said, because I think a lot of this helps to address it: starting with a top-down commitment, bringing awareness and having some change management in place to address what you're doing with AI, building and making accessible some training and resources for people, doing a pilot program so people can get their hands dirty and try things out and see how it actually works and feel it personally, and then, of course, measuring the results.

A lot of this stuff right here should be able to, in theory, on paper, address any of the misconceptions and resistance that people have.
 

Stella Lee: –In theory

 

Dan Gorgone: But there are still going to be some people out there who don't want to change, but who may also be fearful that by proving these AI technologies work, and work really well, they're going to eliminate their own jobs by adopting these things. Where do we find the balance there, and where can we assuage some of those fears that people are going to have, that AI is going to steal their jobs away?

 

Stella Lee: I mean, to be honest, we don't know. Right? It's too early to say. You know, when we talk about AI and jobs, it's the number one question, right? Is it going to take my job away? How is it going to change my work? How is it going to help me with my work? And I think we understand that change is happening and it's going to continue to happen.

But in terms of how we address that with people, the first thing, I would say, is trying to identify the sentiment behind it, the root cause, right? A lot of it is the fear of the unknown, and we all have that. We don't know what's going to happen. And nobody has an oracle to predict the future, but I think it's about addressing the specific sentiment and cause. Where is the fear coming from? Is it fear that they will be replaced? Or is it fear that they won't get any support as things are changing? Or is it fear because there's also a lot of mistrust of technology? Like, is it going to invade my privacy? Am I going to be undermined or marginalized?

Because the technology has built-in biases and judgments, which it does. So if it's the mistrust of tech, I think companies need to really put some policies in place. We really need to think about what some best practices are to build trust. Is it to ensure there's a responsible AI use policy? Who's going to be accountable? I think accountability is a big topic for companies to address, to say, well, when something goes wrong, when there's bias, who's going to be correcting it, who's going to mitigate the risk? Like, who's going to pick up the slack? Right?

So a lot of organizations haven't addressed that yet, or are still in the middle of building these policies. So I think that would help, as you communicate it to your staff, to say, we are aware this is a risk, we are doing everything we can to implement guardrails, to implement safety nets to mitigate the risk. So I think that would address some of the fear of the people that don't want to change. You have to understand, is it trust? Is it fear of the job being replaced? And if it is, what's your organization going to do about these people?

Do you have a plan? Do you have a reskilling or upskilling plan? Do you have a plan to help people that are being displaced? Will you train them to work with AI? Do you have alternative career pathways for them? I think the fear, the resistance, is valid, and I think that happened with every iteration of technology, from the printing press to the steam engine to everything else in between. I think it's valid. I mean, look at the people that were driving horse carts. They didn't have a job when cars came around, and it's a valid concern. So I don't think it's unreasonable.

But I also think we need to be careful and not generalize everything and put it in one package. We need to unpack that and understand how we can address all the concerns.

 

Dan Gorgone: And I think what you're talking about really gets to trying to build a culture that is accepting of AI tools, but also supportive of the team. With the policies and the testing programs and all the things that the L&D team can try to anticipate as they see different tools and new ways of training becoming available, creating that culture that supports it is going to be really important, 100%.
 

Stella Lee: And I've seen this over and over again in education technology, right?

Somebody or some team decided to implement this tech and consulted no one, or consulted, you know, a select group of people that are not actually impacted by it. So it often doesn't work, because they'll pick a tool that doesn't have anything to do with learning, or they pick a tool that has such a terrible user experience that nobody can use it, or pick a tool that's not addressing the right problem. And so consultation and getting involved with the stakeholders, the right stakeholders, is very important. And that's why people are resistant, because they see that change happening over and over again and they're not consulted, and then suddenly they're forced to use these tools and it's actually making their job more difficult.

So I think giving people control, involving people in the decision-making process, building in that feedback loop throughout, I think that also helps. And it's just good change management practice, right? And also gradually introduce AI instead of just one big implementation of big tech. Again, pilot tests, introduce it gradually, iterate, get feedback, and adjust.
 

Dan Gorgone: So as companies move forward with the consideration of AI tools and training, the ethical use of those tools is still going to be, I think, on a lot of people's minds, and it's something that companies need to take responsibility for.

Establishing some policies for the use of different tools, policies around the things you create that become public facing, and certainly the things that touch sensitive personal data, like taking all your customers' information and dropping it in for some machine learning, and understanding the repercussions of that and whether there are any issues there: understanding the ethics of how to use all these tools is going to be not just important, it's necessary.

So as an L&D team, how can you prepare for this?

What are some of the ways that you can not just bake ethics into the culture, but also prepare yourself and protect yourself as an organization?

 

Stella Lee: Yeah. Can I first address why AI ethics is even more important than just regular tech ethics? Tech ethics in general, of course, we need to pay attention to and really advocate for, because technology is not neutral.

I think that's also a misconception. People think, oh, you know, it's just tech, you can use it whatever way you want. But it's not: some human built it, right? Like, people built it. And people have ideologies; people have to make decisions. The fact that you're including certain features or functions, or excluding certain things, means you're already making a value judgment on what goes into a tool, even a learning management system.

The fact that a learning management system is a tool presumes things need to be in discrete learning pieces, right? That's a decision that somebody made, and that limitation of the technology shapes the way you provide learning.

So same with AI. But I think AI is even more problematic, or we need to pay even more attention to ethics, because we are ingesting large amounts of data that go through these complex algorithms that we often don't have access to, to look inside and see what's happening.

You might have heard the term black box algorithm. It means the data goes through this process where we often don't know what happened, what decisions were made throughout this algorithmic process, and then there's the output. It's not something we can challenge, or be given an explanation of why it arrived at this particular conclusion.

ChatGPT is an example. If you ask ChatGPT to tell you what the key ethical topics to discuss are, it will give you a list, but it also leaves out others, and it doesn't tell you why it gave you a list of eight bullet points and not 20. Or why these particular eight? Why are they more important than the others? It's not clear. And with AI technology, we don't have any of that. Well, we have a bit more control and more insight now, but it's not sufficient. Still, it's getting better. There's a big push on explainable AI, meaning you would demand that a machine explain its decisions to you, to humans, and also the ability to audit the process. But that's not baked into every AI tech right now.

So I think transparency and the ability to explain, explainability, is a big one for AI. Understanding how these models make decisions. Can we question them, can we challenge them, can we change them? But then, of course, what comes with that is biases and fairness. Again, ChatGPT and other large language models ingest it all. Basically, they scrape the whole Internet as data sets. But do we know what's there? No. And as we all know, the Internet is not neutral. There's a lot of bias, there are a lot of opinions, there's a lot of misinformation. So how good is the data set? How clean is it? How diverse is it? And you'd be scraping the Internet.

Are all the languages represented? Are all the viewpoints represented? What about the large populations that don't even have access to the Internet? Are their viewpoints being left behind? So I think these are really concerning topics.
 

Dan Gorgone: Definitely some scary things in there that you mentioned, some frightening things that I think some L&D professionals would listen to and say, yeah, you know what, I'm just going to back away from all the AI stuff now. No, but I do think that there are so many pros and cons, there are so many arguments to be made on either side. But understanding is the only way that you can navigate through the discussions that you're going to have, inevitably, with the people who are on your team, with the leadership.

That is, if you do take on some of these AI tools and you do make a commitment to them. Like you said, you're going to need that top-down commitment and that understanding as well.

 

Stella Lee: And by the way, it's not meant to scare people off about AI. I think, if anything, I want people to be more informed going into it, understanding what some of the concerns are.

How do we push back? Don't just accept things at face value.


Dan Gorgone: These game-changing technologies are going to push us further out, you know, in many different directions along many different axes, and with them come the risks and trade-offs and sometimes sacrifices that, you know, are going to change our industry, change our companies, change the world. So being informed is the only way to really make the best decisions about the future of what you're doing, for sure. So with that being said, I do want to wrap up our discussion, but I do want people to be informed.

So with that being said, are there some resources that you can recommend so that the L&D professionals out there watching this can improve their own AI literacy and help others understand?

 

Stella Lee: Yeah, actually, as we speak, I'm building an online AI literacy course in partnership with Athabasca University, which is an open university in Canada. We aim to launch it in September. And along with that course, I've been putting a lot of resources out there on my blog and on LinkedIn and everywhere else.

But in terms of other people that I admire, I really like following Dr. Stephanie Moore on ethics; she's at, I think, the University of New Mexico. She puts a lot of information out there on data and tech ethics in general. The other person I like to follow on AI ethics is Dr. Maha Bali, B-a-l-i. She's at the American University in Cairo. I love her work because she looks at it from a non-Western perspective, and I very much appreciate that.

I also like to follow the Montreal AI Ethics Institute. All Tech Is Human is another website. The Algorithmic Justice League is another one that has a lot of news on AI ethics. And nothing beats trying out tools, right? So if you've never been, there's a site called There's An AI For That.

It has hundreds of thousands of AI tools. It's overwhelming, but it also gives you a glimpse of what AI can do. It's broken down into the different job tasks it can help you with. It also has a job impact index that breaks down, say, if you're a pharmacist, these are the tasks you perform, and currently these are the AI tools that can help with or replace those tasks. So that gives you a good glimpse of the future. So these are some resources I love.
 

Dan Gorgone: Well, and of course, we can learn more about you at paradoxlearning.com. That's your place. And then people can look for you on LinkedIn if they have questions or want to find you on social?


Stella Lee: Yeah, please.

That's where I usually announce any new pieces of content or talks, or any AI news relating to L&D and education. And connect with me there; obviously, it's the best way to have a conversation.

 

Dan Gorgone: Awesome. Well, thank you so much for joining us once again, Dr. Stella Lee from Paradox Learning. Thanks so much.

 

Stella Lee: Thank you for having me.

 

Dan Gorgone: Hey everyone, thanks for watching this episode of the L&D Explorers podcast. If you enjoyed it, please give it a like and subscribe because more episodes are on the way. And no matter what your learning and development goals are, GoSkills can help. Click the link in the description to find out more.

And thanks again for watching.

Laurel Buckley

Laurel is a writer at GoSkills. She also enjoys writing on travel and culture and is always studying a new language.