
How FocusAssist Works

— by David Kaminsky

In a live training environment, a trainer typically works face to face with one or more trainees to teach the training material. One benefit of this is that the trainer gets a good sense of each trainee's level of engagement. For instance, if a trainee falls asleep or plays with their phone instead of paying attention, it is immediately obvious in a live session. Conversely, trainees can show positive engagement by asking or answering questions. In online training, getting this feedback is much trickier.

FocusAssist is a feature we've developed at Mindflash to better measure and ensure trainee engagement. It's a way to help trainers get some of that engagement information while still delivering their course material online, asynchronously. It uses the camera in a trainee's iPad or computer to determine whether the trainee is present and how engaged they are. While FocusAssist does use the trainee's camera, no video or photographs are saved or streamed anywhere. It simply uses each frame from the camera to estimate the user's level of engagement and then throws the image away.

In this post I'll talk about how we implemented FocusAssist and where we can go from here. For the computer vision piece of the feature, which finds and tracks the user's face and eyes, we use a third-party framework. The work we've done to implement FocusAssist ties into this framework so that Mindflash's trainee application responds intelligently to the computer vision data.

A Note on Privacy and Security

We take privacy and security seriously, especially with this feature given its use of the trainee's camera. The camera input used for FocusAssist is not stored or streamed anywhere; it's only used for processing on the trainee's client to determine the trainee's presence and level of engagement. Even this engagement information is not identifiable: we never show an individual trainee's engagement score, only aggregates once we have engagement scores for five or more trainees.

FocusAssist currently operates in two modes:

  • Measure and require engagement
  • Measure engagement

Requiring Engagement

Requiring engagement is the proverbial stick FocusAssist enables, but it definitely has its uses. There are many industries, such as healthcare and government, in which the organization training an individual absolutely needs to know that the trainee has seen all of the training materials. Previously this meant that training in these areas had to be done in a live class so that the trainer could mark attendance and ensure the trainees were paying proper attention. With FocusAssist's require-engagement functionality, this is no longer the case.

When a Mindflash course has been set up with FocusAssist to require engagement, the course requires that the trainee have an iPad or computer with a camera, and the camera must be focused on the trainee at all times. If the camera cannot find the trainee, a modal pop-up appears over the training material alerting the trainee that the camera cannot see his face. If it finds the trainee but determines that his engagement falls below the threshold at which it considers him engaged, it pauses any media in the training material until it can find the trainee and determine he is engaged.

Once FocusAssist can find the trainee and determine he is engaged again, the media automatically resumes. When this setting is on, FocusAssist ensures that the trainee sees the training material in full, and trainees are not able to skip ahead in the course content. While this is certainly heavy-handed, it provides a great substitute for a live training environment while allowing trainees to take the course at their convenience.
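To make this concrete, the client's per-frame decision boils down to something like the simplified Swift sketch below. The type names, action names, and the 0.5 threshold are illustrative only, not our actual implementation or tuning.

    // Hypothetical result from the computer vision framework for one camera frame.
    struct VisionSample {
        let faceFound: Bool
        let engagementScore: Double   // 0.0 (not engaged) ... 1.0 (fully engaged)
    }

    // Possible responses from the trainee application.
    enum PlayerAction {
        case showNoFaceModal   // camera cannot see the trainee's face
        case pauseMedia        // face found, but engagement is below the threshold
        case resumeMedia       // trainee is present and engaged
    }

    // Assumed threshold at which the client treats the trainee as engaged.
    let engagementThreshold = 0.5

    func requiredEngagementAction(for sample: VisionSample) -> PlayerAction {
        guard sample.faceFound else { return .showNoFaceModal }
        return sample.engagementScore >= engagementThreshold ? .resumeMedia : .pauseMedia
    }

    // Example: the trainee is visible but looking away from the screen.
    print(requiredEngagementAction(for: VisionSample(faceFound: true, engagementScore: 0.2)))
    // pauseMedia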

Measuring Engagement

While a trainee is taking a FocusAssist-enabled course, the application checks the trainee's engagement level every quarter-second. When it does this check, it is looking for two primary pieces of information:

  • Do we see a face?
  • What is the engagement score?

If we can't find a face, we ignore the data for the purpose of measurement. The reason is that a missing face could just as easily be meaningless as meaningful: it could mean the trainee walked away out of boredom, but it could also mean the trainee left the room to use the bathroom or answer the door.
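Roughly speaking, the per-slide bookkeeping looks like the simplified Swift sketch below. The names are illustrative, and averaging the kept samples is just one reasonable way to combine them; the real client may weight them differently.

    // Illustrative per-slide accumulator; camera results arrive roughly
    // every quarter-second while a slide is on screen.
    struct SlideEngagement {
        private(set) var keptScores: [Double] = []

        // Record one camera-frame result. Samples with no face are ignored
        // because a missing face is ambiguous (bathroom break vs. boredom).
        mutating func record(faceFound: Bool, engagementScore: Double) {
            guard faceFound else { return }
            keptScores.append(engagementScore)
        }

        // The trainee's score for this slide; the average is an assumption
        // about how the kept samples get combined.
        var slideScore: Double? {
            keptScores.isEmpty ? nil : keptScores.reduce(0, +) / Double(keptScores.count)
        }
    }

    var slide = SlideEngagement()
    slide.record(faceFound: false, engagementScore: 0.0)   // ignored: no face found
    slide.record(faceFound: true, engagementScore: 0.75)
    slide.record(faceFound: true, engagementScore: 0.25)
    print(slide.slideScore as Any)   // Optional(0.5)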

If we can find a face, we use the data and factor it into the engagement score. We calculate a separate engagement score for each trainee on each slide in a course, then aggregate the scores for each slide across all trainees. In our trainer reports, if a slide has engagement scores for five or more trainees, the report shows the aggregate to the trainer. If it has fewer than five, the report informs the trainer that not enough data has been collected. This ensures that trainees cannot be identified by their engagement scores.
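The reporting-side gate is equally simple; again, this is an illustrative sketch rather than our actual reporting code.

    // Aggregate per-trainee slide scores for a trainer report. Returns nil
    // when fewer than five trainees have a score for the slide, so that no
    // individual trainee can be identified from the report.
    func aggregateSlideScore(perTraineeScores: [Double], minimumTrainees: Int = 5) -> Double? {
        guard perTraineeScores.count >= minimumTrainees else { return nil }
        return perTraineeScores.reduce(0, +) / Double(perTraineeScores.count)
    }

    print(aggregateSlideScore(perTraineeScores: [0.8, 0.6, 0.9, 0.7]) as Any)        // nil: only four trainees
    print(aggregateSlideScore(perTraineeScores: [0.75, 0.5, 0.75, 0.5, 1.0]) as Any) // Optional(0.7)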

Block diagram showing how the application responds to input from the computer vision framework.

Looking Ahead

At present our algorithms for calculating and aggregating engagement scores are simple. In the future we hope to refine them to make the scores even more meaningful. For example, for a slide in which we know the duration of the content (e.g. a 3-minute video), we could drop the trainee's score if we know that the trainee only watched 30 seconds of the slide. For all slide types we could do a better job of throwing out outliers. For example, if 10 users were on a slide for 30+ seconds, perhaps we could throw out the score of a user who stayed on the slide for 1 second. The goal of these improvements would be to make the score more valuable to the trainer.
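As one illustration of the outlier idea, a filter along these lines could drop trainees whose time on a slide is a tiny fraction of the typical dwell time. Using the median and a 10% cutoff here are assumptions for the sketch, not decisions we've made.

    // Drop dwell times that are a small fraction of the typical (median) dwell
    // time for a slide; the 10% cutoff is an assumed, tunable parameter.
    func filterDwellTimeOutliers(secondsOnSlide: [Double], cutoffFraction: Double = 0.1) -> [Double] {
        let sorted = secondsOnSlide.sorted()
        guard !sorted.isEmpty else { return [] }
        let typical = sorted[sorted.count / 2]
        return secondsOnSlide.filter { $0 >= typical * cutoffFraction }
    }

    // Nine trainees spent 30+ seconds on a slide and one spent 1 second.
    let times: [Double] = [31, 35, 40, 30, 33, 38, 45, 32, 36, 1]
    print(filterDwellTimeOutliers(secondsOnSlide: times).count)   // 9: the 1-second visit is dropped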

The end goal of all of this feedback is to help the trainer improve their course material, which in turn helps future trainees learn from and enjoy the training material more. At Mindflash we love thinking of new ways to improve the online training experience. If you have any thoughts on how to improve upon these ideas or others, leave a comment or contact us.
