See no evil

In this research study, strategy consultants who used GPT-4 to assist them with “inside the frontier” tasks (those within the capability of AI) significantly outperformed their counterparts on both quality and productivity.

Hence, for such tasks, the authors called AI a quality and productivity booster.

However, those who used GPT-4 for “outside the frontier” tasks (those beyond the capability of AI without extensive human guidance) performed significantly worse than their counterparts in terms of correctness.

Hence for such tasks, the authors called AI a quality disruptor. (Though I’d be more inclined to call it an accuracy disruptor, as the “quality” of the users’ work was superior regardless of correctness.)

In the words of the authors, “Professionals who had a negative performance when using AI tended to blindly adopt its output and interrogate it less”. My inference is that the inside-the-frontier users may have behaved similarly, but because the AI was up to the task, they got away with it; faced with something more complicated, they came unstuck.

[Image: Stylised illustration of a blindfolded businessman working at his computer.]

OK, but that was just an experiment. Could it happen in real life? You bet.

“I now realise that AI can generate authoritative-sounding output that can be incorrect, incomplete or biased”, wrote a professor at Macquarie University on behalf of a group of academics who had made a false submission to a parliamentary inquiry!

Given that the frontier of AI capability is ever shifting, we can never be certain at any point in time whether a given task lies inside or outside it. So, as users of the technology, we need to maintain a critical mindset.

In other words, use artificial intelligence to augment your human intelligence, rather than replace it.
