I was curious to know who the first person was to say, “Happy New Year.” I asked GPT-4 and learned that nobody really knows (bummer), that the new year was celebrated over 2,000 years ago, though nobody used that exact wordage, and that it became tied to the Gregorian calendar in the 16th century. Considering that 2,000 years ago the Gregorian calendar didn’t exist, for all we know, New Year’s Day could have been March the 5th. I bring all of this up because forecasting isn’t an exact science. It is based on trends (from my perspective) and what you see moving forward in that year. Experience plays a role, as does seeing all the “data” screaming what this is, and asking yourself: why isn’t anyone else seeing the same thing?
Some forecasts are pretty simple to figure out. Others, not so much. In 2023, I never saw generative AI coming, nor did the majority of folks who forecast. Machine learning was well-known (it’s another form of AI), but the idea that generative AI, in this case ChatGPT, would take off (even though gen AI was already out there) was definitely a surprise.
What didn’t surprise me, once it started to catch everyone’s attention, was that vendors in the learning system and learning tech space would grab hold of it. I figured out early on where the low-hanging fruit was for vendors who wanted to add it. Course creation tools (aka built-in authoring tools) were pretty obvious, then assessment, and even skills based on some factor or factors. The first two rolled out quickly, for those who dove in. Skills is still very early, and the other one I thought would zing out was the creation of learning paths/learning journeys.
For learning tech, solutions such as Lucy.ai (which was around prior to 2023) didn’t surprise me. While I see a lot of pluses, it is still missing some items. And the idea that it is private generative AI is a misnomer: once you link out to, say, LinkedIn (which Lucy.ai has a partnership with), the whole private thing goes out the window. Nevertheless, I found Lucy.ai to be a solid solution.
Bongo Learn rumbled out with their AI coaching offering, and then improved it by tying the generative AI to skills validation/assessments, coaching, and other items with data presented. The moment I saw it, I thought, “this is a winner,” which is why it was named the number one learning tech solution for 2023.
Forecast 1
Generative AI explodes in the learning system space (and learning tech for that matter, but I’m focusing here on learning systems). There were vendors in 2023 who are typically known for slogging when it comes to adding NexGen features quickly, let alone adding any features. Yet those vendors rumbled out with generative AI, and I’d argue with mixed results. Some useful; others, been there, seen it already.
I’d estimate that less than 1% of the total learning system space rolled out some level of generative AI in 2023. Thus, my forecast is that nearly 40% of the entire space will have generative AI by the end of the year. I’m not including 100% free open-source systems in the list, simply because a third-party dev shop could add gen AI when the open-source system itself doesn’t offer it.
Why the high number? First, there is demand in the market, due to hype and messaging that this could be a game changer. Second, the usual statement of “our clients aren’t asking for it” or “none of our prospects are asking for it” won’t apply, due to the perception that if you do not have it, you are behind, and thus not forward thinking. No vendor wants to be seen as behind the times, or archaic in their thinking. Who wants to purchase that?
Third, gen AI is being added by companies, organizations, and businesses, the same folks who are, or will be, clients for a vendor. It doesn’t look good when they have it and the vendor doesn’t.
Lastly, if I were a competitor, I would message that I have it and market it to the hilt; and if my competitors don’t, I would compare and note that, in a nice marketing way, tapping into neuromarketing and propaganda techniques (both of which work).
No, there is too much there to ignore. Hence the high percentage. Now, will it be good? Will it be universal in what vendors do with it? Will it make sense to add it here vs. there? The answer across the board is: highly unlikely. Some vendors will get to the next stage, others will be starting out, and still others who already launched will necessarily have to do more; after all, they started in 2023.
There will be vendors who hype and use certain words to enhance what they offer. I see this already with one vendor who notes they have copilots in their system. Sounds great, but is it fully accurate? No.
Another vendor mentioned they were the first AI learning system (actually, a couple of vendors claimed this). Was the one who heavily pushed the messaging truly #1? No. Neither were the others, by the way, but what works, works. And then there was a vendor messaging that they already had skills tied to AI, while never noting that in fact they didn’t yet in 2023; rather, it was rolling out in Q1 2024.
Thus, you can expect plenty of this wild west approach in 2024. Unless you fully understand what a copilot is, what the vendor is referring to regarding AI (it could just be machine learning), how gen AI is being used in the system (show me, and find out if it is actually live), and so forth, you may encounter a surprising number of vendors who say one thing when the reality is something else.
For me, I would have been a vendor who waited until 2024 to start. There are so many LLMs (large language models, the foundation for gen AI) that selecting one without fully exploring the entire market, beyond the big names, could be a mistake. Plus, I would select two LLMs, not just one.
Forecast 2

I do believe that within this forecast, the majority of vendors will go with OpenAI, which is already the #1 choice among vendors who rolled out in 2023. Even those who went Azure AI are really using OpenAI, and specifically here, GPT-3.5 Turbo. GPT-4 has more capabilities than 3.5, but the token fees are higher. GPT-4 Turbo (it’s often mentioned as 4.5) is not yet available (as of 1-2-2024) for commercial use.
The LLM options I see in play beyond ChatGPT (again, by OpenAI) are the following. You can go the fine-tuning route and access them via an API. For example, through Azure AI Studio, you can do this with Llama 2 and many other LLMs. To use Amazon Bedrock, or Q, or Titan (yuck), you must have AWS.
- Amazon Bedrock – It’s model as a service and already has several LLMs available. I will either be using Bedrock for FindAnLMS (yes, generative AI is coming to my platform, and to FindContent.io, which re-launches in Feb) or go direct with Amazon Q. One learning system vendor is already using Amazon Bedrock.
- Amazon Q – Just launched for business. A legit threat and competitor to ChatGPT, as in GPT-4 and even 4.5. Definitely worth checking out. Pricing from a business standpoint (i.e., for your company) is a bit funky.
- Anthropic – Claude and Claude Pro. Already, one vendor in the content space uses Claude, and they love it.
- Stability AI – It has vastly improved since its initial release. CYPHER LEARNING (they prefer their name in caps) uses them along with OpenAI’s GPT-4.
- Google Cloud – I’d recommend going with their model as a service, which enables you to pick from a significant number of LLMs. You have to use Vertex AI (Google’s machine learning platform) as the base. Their model-as-a-service offering is called Model Garden.
- Llama 2 – A well-known learning system vendor is using it, and they like it. Personally, I’m leery of anything Meta rolls out. Yeah, it’s free, but this is Meta. Llama 3, which is in the works, is expected to be a legit competitor to OpenAI’s ChatGPT.
Others to consider – intriguing, but a lot of vendors haven’t heard of all of them:
- AI21 Studio – Jurassic-2, a text-based gen AI model.
- Microsoft Phi-2 – An LLM that can run on a mobile device or even a laptop. Models like this are referred to as small language models (SLMs). It delivers solid performance despite a relatively small parameter count (2.7 billion).
- Mixtral 8x7B (from Mistral AI) – An open-source LLM that reportedly scored higher than Llama 2 and GPT-3.5 in performance benchmarks. Numerous folks note that Mistral AI is a unicorn.
- Google Gemini – I was initially a big fan, and there is lots of potential, but the initial release is underwhelming. It is now the foundation for Bard.
- Ernie Bot – From Baidu, and reportedly outperforms GPT-4. It is also known by its Chinese name, Wenxin Yiyan, but Ernie Bot is the player here. The downside: unless you can read Chinese, you will be mostly out of luck. It does have some English, but it was trained on Chinese text.
- Falcon 180B (from TII) – 180 billion parameters, trained on 3.5 trillion tokens. Open-source model. Falcon is also available in Bedrock, just an FYI.
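Since the advice above is to select two LLMs rather than one, here is a minimal, hypothetical sketch of how a vendor might keep providers swappable. The OpenAI-style `messages` payload shape is a real convention that several of the providers listed follow; the model names chosen, the system prompt, and the wrapper function itself are illustrative assumptions, not any vendor's actual code.

```python
# Hypothetical sketch: a thin wrapper so a learning system could swap between
# two LLM providers. Nothing here is a specific vendor's implementation.

def build_chat_request(provider: str, user_prompt: str) -> dict:
    """Build an OpenAI-style chat-completion payload for the chosen provider."""
    models = {
        "openai": "gpt-3.5-turbo",   # the 2023 default noted in this post
        "mistral": "mixtral-8x7b",   # an open-source alternative from the list
    }
    if provider not in models:
        raise ValueError(f"Unknown provider: {provider}")
    return {
        "model": models[provider],
        "messages": [
            {"role": "system", "content": "You are a course-creation assistant."},
            {"role": "user", "content": user_prompt},
        ],
        "temperature": 0.2,  # keep output predictable for course content
    }

req = build_chat_request("openai", "Draft a 5-module outline on data privacy.")
print(req["model"])  # gpt-3.5-turbo
```

The point of the indirection: if a second LLM benchmarks better or prices lower later, the vendor changes one table entry instead of re-plumbing the system.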
I note all of the above because Forecast 2 is that OpenAI will continue to be the driver in our industry, with learning system vendors (and even learning tech; Lucy.ai, for example, uses OpenAI) going this route with whichever GPT version fits. One reason I have heard is that OpenAI is easy to work with, offers good pricing, and works well for what the vendors need.
Azure AI will be right in there, but again, it’s really the OpenAI GPT versions. I do hope that folks take a deep dive into Bedrock, but I believe the foothold of GPT-3.5 and 4 is strong enough to hold.
Forecast 3
Yep, still on generative AI (it’s the dominant player here for feature sets). Here is what I see as the key capabilities that vendors will use (or at least roll out, to begin with, in 2024):
- Content creator tool (they may call it a course builder or authoring tool) – Already the most popular of the 2023 rollouts, and from what I hear, a lot of vendors are eyeing this as the feature to pair with gen AI. It will dominate. Fun fact, though: with the exception of one vendor, everyone else is publishing text only. Some offer a WYSIWYG window where you can add video, audio, YouTube links, that kind of stuff. Boring, IMO. The vendor who has done well with it offers layers and synthetics such as video and audio (synthetic means it’s AI created). The content creator will generate chapters or modules (depending on your terminology), content such as text, and so forth. You can drive it via questions (one example, depending on the vendor) or by typing in a specific skill or wording.
- Assessment tool – Number two on the hit list. So far, I have been underwhelmed here.
- Skills – Yep, the gen AI will generate the skills you need, and the strength or weakness. It has a lot of potential, but I think a lot of vendors will miss out on this. If you can tie it to content and ensure that it is accurate, it could go well.
- Creates a learning path or learning journey – I’ve seen a couple of vendors do this. Again, it is text driven, and in one case, you have to add the assets yourself, rather than the gen AI being able to do it.
- With the above skills, I’ve already seen a recruiting module where the system pulls the information from an uploaded resume. But is that really learning? Not in my opinion.
- AI coaching or mentoring – The key here is that you should have a human coach following up on the AI results. An AI coach, though, is possible, and one research study found that an AI coach was just as effective as, if not better than, a human; however, that depends entirely on the topic and the questions around it.
There is one vendor who plans on developing personal agents that learners can select from to help them learn. Will it roll out in 2024? They say it will. Their approach is to have the personal agents (autonomous agents) sit on another platform, where folks pick the one(s) they want. My preference is to have the agents in the system itself. A personal agent is, in essence, a copilot. That’s the simplest way to think about them.
Vendors may start to roll out gen AI on the learner side; I expect to see some do this. The risk? High token fees, depending on which LLM they are using and the number of users. While you may think half a cent isn’t that expensive, wait until you have 25,000 people entering questions and other text into the context window (folks will say prompt, but it actually is context). If you can add a PDF or another file, or generate synthetic materials, hello pricing. Right now, fees are quite low for a vendor because usage sits on the admin side, and course creation and assessment tools are not that expensive.
Pricing for gen AI? There are three approaches. One is to include it for free; there are vendors who offer this, but each of them has said they will need to watch usage, and what it costs them, to decide if they need to charge. Other vendors charge for it as an add-on at a flat price. And I know of two vendors who are going with the token credits approach, where you buy a set number of tokens, and then your end users, well, use them, plus whatever the admin does.
Which will end up the most popular? Hard to say right now, but I think free won’t be the option here, especially as the cost of usage, based on users and what they enter in the context window, goes up. The problem is that folks tend to start off by asking ambiguous questions rather than going very specific. Thus they start broad and then narrow, which increases token fees, because fees are based on the number of tokens processed (a token is roughly four characters of English text).
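To see why “half a cent” adds up, here is a back-of-the-envelope sketch. The four-characters-per-token rule of thumb is a real convention for English text; the per-1,000-token price and the usage figures are assumptions for illustration only, not any vendor’s actual pricing.

```python
# Back-of-the-envelope token cost estimate. All rates below are assumed
# for illustration; real LLM pricing varies by model and provider.

CHARS_PER_TOKEN = 4          # rough rule of thumb for English text
PRICE_PER_1K_TOKENS = 0.002  # assumed dollars per 1,000 tokens

def monthly_llm_cost(users: int, queries_per_user: int, avg_chars_per_query: int) -> float:
    """Estimate monthly LLM spend from raw usage volume."""
    total_tokens = users * queries_per_user * (avg_chars_per_query / CHARS_PER_TOKEN)
    return total_tokens / 1000 * PRICE_PER_1K_TOKENS

# The post's example: 25,000 learners, each sending 10 exchanges of ~2,000 characters
print(round(monthly_llm_cost(25_000, 10, 2_000), 2))
```

Even at these modest assumptions, the bill lands in the hundreds of dollars per month, and it scales linearly with users, queries, and the size of what people paste into the context window, which is exactly why attaching PDFs or generating synthetic media changes the economics.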
The token credit option, I think, could work, as long as the client fully understands how it works. Think of a bucket of tokens. The add-on, though, will likely be the leader here.
Forecast 4
Advanced analytics – I call them learning intelligence – will increase. The downside? The vendors who offer it do so as an add-on (you will likely see it as just a line item in the proposal). And the learning intelligence data that comes out of the box – i.e., included with the system – is overwhelmingly awful; you couldn’t figure out your learning story from it. Some vendors include a built-in BI tool as the metrics. The downer here is that this is a business intelligence angle, not really a learning intelligence angle. Sure, you can generate a lot of metrics, but it can get confusing, and you really need to know what is relevant and what isn’t.
Forecast 5
Refreshed UI/UX. This is definitely on the plate. The downside? Vendors focus on the learner side. Great, and yes, you need to, but who are the folks in the system the most? Administrators. I see way too many systems that haven’t done an update in years. And I’m talking years.
Forecast 6
LXP hype continues. Ignore the hype. Nearly everyone pitches an LXP, but there are only a handful of legit LXPs. And any vendor who says that an LXP is the content angle, whereas the LMS is about course management, clearly has no clue about the LMS, why it was built, and how it is structured. It is frustrating to see how many vendors never did their homework or background understanding.
Forecast 7
Skills continue to be hot, especially skills tied to content, and upskilling. The problem here? Reskilling has to become number one, because gen AI is going to eliminate jobs. Sure, there will be new jobs, hence the reskilling aspect. The latest field to get hit, or projected to get hit, by job loss? Accounting.
Equally, the data points to who will be impacted the most by job loss around gen AI: mid-level managers.
Thus, reskilling again has to be the priority. However, I believe the vast number of clients – i.e., the folks who oversee L&D, HR, internal training, or whatever the department – are not thinking that way, which is going to be an issue. Think of it this way: Fred is eyeing a marketing role, or the system eyes it that way for him. However, that marketing position will be eliminated due to gen AI. What will you do now? This is why the people overseeing L&D, HR, training, etc. must stay current on which roles will be eliminated. It isn’t as simple as you think.
Let’s not forget that gen AI, and AI in general, is still at a tiny, minuscule level. And if you are relying on it solely for productivity without a human, that is a big no-no. Yet companies are seeing it that way, forgetting that all LLMs produce hallucinations (fake or false information). How many folks in L&D, training, HR, or other departments are aware of this? How many of your employees are aware of this?
My guess: not a lot. A cut and paste seems simple, but what if that task impacts the company, say, financial data? Not realizing the info could be false, and then pushing it up the food chain to folks who are unaware, is a legit risk.
Bottom Line
Generative AI will be the number one capability for 2024. Nothing is even close. It will go across many areas of the system, and yet one of the biggest pieces of low-hanging fruit with it isn’t even on vendors’ radar (it is on mine).
This is why I write a lot about generative AI. Your employees will have no real idea of its impact and what they need to realize and do. This is directly due to no one in L&D, HR, internal training, or other departments providing online content/courses that address it specifically. If you are not pushing out the content, heck, in this case making it required (yes, I said it), and getting the metrics behind it, you are going to regret it. The company will regret it. Generative AI in any enterprise isn’t a freebie once token costs are involved. And as someone in a department overseeing a learning system, and the training and learning too, it isn’t as easy to prepare for as you might think.
The learning system vendors, for the most part, will ignore the RED ALERTS that are needed in the system, such as clearly letting learners and admins know that gen AI may produce fake or false information and that you must always verify it is correct. So far, I’ve seen only two systems do this, and one has the text so small you need a microscope to read it.
No one has added context windows on the admin side where the person can identify whether the information is correct or not. Gen AI is based upon patterns and its training. Thus, unless the person (even on the learner side) can verify the information is correct, the system – i.e., the LLM – will assume it is so and continue to build upon that. Think how that will play out with your learning, for skills or content or assessments or coaching or cohorts and so on.
This is simple 101 stuff here. Which begs the question – why hasn’t anyone done it in their system?
I suspect it is like the person who started “Happy New Year.”
They had no idea they would be the first one to do so, nor did they expect it to take off more than 400 years later. If they had known there might be issues with it, or that everyone would come to think it the right and appropriate terminology to use, I wonder if they would have said it?
Perhaps, perhaps not.
I mean, we will never know who that person was.
And maybe that is the secret to saying Happy New Year,
Even if they thought the new year was
March 1st.
E-Learning 24/7