What AI Really Does Not Do During An Online Test

Summary: AI used in live proctoring functions like a smoke detector, alerting humans to a possible problem; humans, not the technology, make the decisions.

AI Is Supplemental To Humans In Live Proctoring

Artificial Intelligence—AI. It may sound scary. And it’s anything but perfect.

So, when people talk about using AI as part of test and exam proctoring, as part of the system that deters and detects cheating, they get spooked. That’s fair. But it’s also limiting, and it misses the depth of what AI tools do, and better yet, what they do not do, during the monitoring of remote tests.

Let’s take a quick look.

At the outset, it’s helpful to frame questions about AI in test proctoring with an understanding that the AI tools and techniques used during a test can vary greatly. The range runs from no AI tools whatsoever to a robust buffet of indicators that can monitor and analyze everything from background noise to keystroke accuracy and speed. Just because a remote test has a proctoring solution does not mean it uses AI.
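To make that range concrete, here is a minimal, hypothetical Python sketch of how a school’s choices might be represented as a configuration. The ProctoringConfig class, the signal names, and the defaults are assumptions made for illustration; they are not drawn from any particular proctoring product.

```python
# Hypothetical illustration only: the class, signal names, and defaults are
# invented to show the range of monitoring a school might request, from no AI
# signals at all to several. No specific proctoring product is described.
from dataclasses import dataclass


@dataclass
class ProctoringConfig:
    live_proctor: bool = True               # a human watching the live session
    monitor_background_noise: bool = False  # audio-level analysis
    monitor_keystroke_timing: bool = False  # typing speed and rhythm
    monitor_gaze_off_screen: bool = False   # webcam-based gaze estimation

    def enabled_ai_signals(self) -> list[str]:
        """Return only the AI-driven signals the school has turned on."""
        flags = {
            "background_noise": self.monitor_background_noise,
            "keystroke_timing": self.monitor_keystroke_timing,
            "gaze_off_screen": self.monitor_gaze_off_screen,
        }
        return [name for name, on in flags.items() if on]


# A proctored remote test that uses no AI signals at all:
basic = ProctoringConfig()
print(basic.enabled_ai_signals())   # []

# A test where the school asked for a fuller set of indicators:
robust = ProctoringConfig(monitor_background_noise=True,
                          monitor_keystroke_timing=True)
print(robust.enabled_ai_signals())  # ['background_noise', 'keystroke_timing']
```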

It’s also important to keep in mind that where a specific test falls in that range of review and monitoring methods depends in large measure on what the test provider, the school, wants. Remote test proctors follow the rules and procedures the schools or professors set; they don’t make up their own or use anti-cheating tools that the schools don’t want. In other words, if a particular test is using AI, it’s certainly because the school asked for it, and likely for good reason.

Understanding AI: What It Is And Does

With the understanding that AI and test proctoring are not the same thing, and that schools, not proctoring companies, make those calls, let’s look quickly at what AI is and what it does.

AI is an information-gathering and assessment engine. In that way, it’s just like an exam itself, collecting information and scoring it along a set scale. What makes AI different is that it can “learn” from what it gets wrong and what it gets right. AI systems get more and more accurate with use.

That gets us to what AI does not do during remote testing. It does not—and I cannot underscore this enough—decide who is cheating. It does not “flag” some behaviors as cheating and fail a student. It does not have a set score of cheating versus not cheating—look off your screen twice in a minute and you’re fine, do it three times and you fail. It simply does not do that.

Here’s why: the AI systems used in our test proctoring simply alert humans. Humans make the call. Maybe not every proctoring provider does it that way, but they should.

Let me give an example. If a student is answering complex engineering questions very quickly, faster than 99.5% of all other students, that may be suspicious. It may indicate that they had the questions and answers in advance. It may also indicate that the test-taker is just a well-prepared engineering genius. In this example, an AI system might alert a test proctor to the unusual event, but the proctor, and ultimately the test-taker’s professor, will determine whether they are a savant or a scallywag.
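As a rough, hypothetical Python sketch of that example: a completion time is compared against past test-takers, and an unusually fast one is packaged as an alert for a human proctor to review. The function, its cutoff, and the sample numbers are assumptions for illustration, not how any particular system scores anything.

```python
# Hypothetical sketch: an anomaly check that only *alerts* a human reviewer.
# The 0.5% cutoff mirrors the "faster than 99.5% of students" example above;
# nothing here decides that anyone cheated.
def completion_time_alert(minutes_taken: float, past_times: list[float],
                          fraction: float = 0.005) -> dict | None:
    """Return an alert for human review if the time is unusually fast, else None."""
    cutoff_index = max(int(len(past_times) * fraction) - 1, 0)
    cutoff = sorted(past_times)[cutoff_index]   # faster than ~99.5% of past takers
    if minutes_taken < cutoff:
        return {
            "signal": "unusually_fast_completion",
            "minutes_taken": minutes_taken,
            "cutoff_minutes": cutoff,
            "action": "route to a human proctor for review",  # never an automatic fail
        }
    return None


# Twelve minutes on an exam where past students took roughly an hour:
alert = completion_time_alert(12.0, [55, 60, 48, 70, 62, 58, 66, 75, 80, 52])
print(alert)   # an alert dict a proctor can look at, not a verdict
```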

That last bit is very important. Even if this example student trips the AI alarm on how quickly they answer, and even if a reviewing proctor then alerts the professor, the professor may decide it’s fine. Maybe they know the student is, in fact, an academic star. Or they may decide, considering that the student has never attended a single class, that further inquiry is needed. The point is that professors and school staff decide what is or is not misconduct and what to do about it, not AI.

In this way, AI is like a smoke detector in your home. Smarter, yes, but it serves a very similar function. A smoke detector spends all day and all night looking for one thing; when it finds it, an alarm sounds. But a human then has to decide whether someone left the meatloaf in the oven too long or whether it’s time to gather the kids and dogs and get out. Like a smoke detector, AI can alert people to things they may miss. But people do the deciding.

Further, because AI “learns” from what it gets “right,” the tools make fewer errors and sound false alarms less frequently as they do the job. Where an older AI system may have registered an unusual event when someone sneezed, after a few rounds of being “corrected” by humans, the system will correct itself and no longer highlight sniffling. That’s a good thing. We want AI systems, all systems in fact, to be accurate and get better.
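To illustrate that feedback loop, here is a hypothetical Python sketch in which human reviewers label alerts after the fact, and the system raises its alerting threshold for signals that keep turning out to be false alarms, like the sneeze example. The AlertTuner class and its numbers are invented for illustration; they are not any vendor’s learning algorithm.

```python
# Hypothetical sketch of a human-feedback loop: reviewers mark alerts as real
# concerns or false alarms, and signals that are mostly noise must clear a
# higher bar before a human is pinged again. Illustration only.
from collections import defaultdict


class AlertTuner:
    def __init__(self, initial_threshold: float = 0.5):
        self.thresholds = defaultdict(lambda: initial_threshold)
        self.history = defaultdict(list)   # signal name -> list of human verdicts

    def record_review(self, signal: str, was_real_concern: bool) -> None:
        """A human proctor or professor labels an alert after reviewing it."""
        self.history[signal].append(was_real_concern)
        verdicts = self.history[signal]
        false_alarm_rate = 1 - sum(verdicts) / len(verdicts)
        # Mostly false alarms -> require a stronger signal before alerting again;
        # mostly real concerns -> keep the threshold sensitive.
        self.thresholds[signal] = 0.5 + 0.4 * false_alarm_rate

    def should_alert(self, signal: str, confidence: float) -> bool:
        """Ping a human only when the signal clears the learned threshold."""
        return confidence >= self.thresholds[signal]


tuner = AlertTuner()
# Early on, a sneeze-level noise spike (confidence 0.6) would ping a proctor:
print(tuner.should_alert("background_noise", 0.6))   # True
# After humans repeatedly mark such alerts as false alarms...
for _ in range(5):
    tuner.record_review("background_noise", was_real_concern=False)
# ...the same spike no longer interrupts anyone:
print(tuner.should_alert("background_noise", 0.6))   # False
```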

And when you pair real people with AI, the synergy can be quite robust. Humans can correct and improve the AI, and AI can alert humans to things they may miss. Over the course of the collaboration, humans get better and so does the AI. The dirty secret is that, especially in online proctoring, AI is used just as often to help our proctors get better as it is to “catch” cheating.

In proctoring and assessment, there is no “score” by which an AI system will determine cheating. And there is absolutely no system in which any such “AI score” will determine a grade or academic outcome. If schools or professors are trying to use proctoring or AI tools in that way, without human decision-making, they should not. The systems were not designed to do that and likely can’t.

Conclusion

The bottom line is that the AI systems used during online test proctoring are not ubiquitous; they are often narrowly focused and always supplemental to humans, not a replacement for them. While they can detect things that humans may miss, RoboCop isn’t watching anyone take a test. Big Brother cannot fail you. AI simply does not do that.