How AI is changing job interviews

I was recently involved in interviewing candidates for an actuarial position. The interview was conducted online via video conferencing as we were all in different locations, which has been the norm for a few years now. It was for a position that required a small amount of experience, not an entry-level role. One particular interview was a disaster.

Many parts of the interview were unsatisfactory for a variety of reasons, but overall it was disappointing because it became apparent that the candidate was using ChatGPT to generate answers to at least some (and probably all) of the questions during the interview. I suspected this because their mannerisms were robotic and the answers they gave throughout were the sort of extensive but bland lists that ChatGPT is so good at producing.

Their technique was to repeat each question back to us. This is a well-known way of giving yourself time to understand a question and formulate a more considered response, but in this case I suspected it was being used to give them time to type the question in, or so that someone else could hear it and type it in on their behalf.

The style of the candidate’s responses also varied suspiciously over the course of each answer. They would start slowly and vaguely (almost distractedly), no doubt because they had to offer something of their own, or waffle for a while, until ChatGPT finished generating its results. Once the results were ready they would reel off the full list with no pauses and no time to reflect on what they were saying or how it related to the question we had asked.

There is little doubt that they were cheating: a quick check on ChatGPT after the interview returned, practically word for word and in the same order, the spiel this candidate had treated us to. They apparently hadn’t even tried prompting for more human-sounding answers before the interview, clearly unaware of the fine art and importance of prompt engineering.

One question required them to think about the effect of a scenario on part of the insurance industry. ChatGPT helpfully listed all of the types of cover affected under this scenario but, curiously, omitted one of the largest: motor insurance. And guess what? The candidate didn’t mention motor insurance either.

Instead, when we prompted them to consider some of the other, more obvious lines, they reeled off the remaining lines of business ChatGPT had mentioned in the dregs of its answer, including trade credit insurance, an obscure line that someone with limited insurance experience would probably not think of unaided.

If they had just thought about it, they might have found the better answer themselves. As well as being a comprehensive waste of everyone’s time, it was disingenuous to the interviewers and unfair to the other candidates. Is the only option to return to fully in-person interviews?

Large language models like ChatGPT are useful tools for generating ideas, particularly when used to help candidates prepare thoroughly for interviews, anticipate questions and rehearse answers. But when they are used to generate content directly, without editing or contextualising it, that is a problem. In this case, their use during the interview gave us an hour and a half of context-free, frustratingly insipid corporate verbiage that was excessive and unconvincing.

In their defence, they may have been nervous and used it as a crutch, or felt that their own experience and knowledge would not have been enough to get the job. But even so, by failing honestly they would at least have learned something about how to prepare for interviews and what skills and knowledge they need to develop before applying for a job like this one. Instead they learned nothing, and most likely came away thinking the same method is worth repeating.

Are there any questions we could ask candidates that an AI could not answer? Or where it fails so obviously that nobody sensible would try to pass its output off as their own words? Insurance is an industry where much of the knowledge is hidden and proprietary: it isn’t easily searchable on the internet (and hasn’t been scraped by AI companies for training data), so to those with experience it can be clear whether someone knows what they’re talking about.

There are some questions that AI platforms are currently poor at answering: maths and word puzzles come to mind. Maybe we need questions that can’t simply be typed into ChatGPT. For example, show candidates a table or chart and ask them to analyse it and draw conclusions. That is very hard to feed into an AI during an interview without some preparation, though screenshots of data will probably become easy to input on better AI platforms as the technology and its ease of access improve.

There are, of course, physical measures we could put in place to reduce cheating, like asking candidates to show us their room on camera so we know they have no assistance. This is increasingly common for online exams, but it won’t eliminate the AI problem entirely.

Alternatively, the questions need to be much more specific and personal to the candidate, so that there is a strong incentive simply to answer from personal experience; when an answer is too generic, we can push for details of how it relates to them and their situation.

This approach is also likely to result in better interviews. The proliferation of AI may mean we will all be strongly incentivised to improve our interviewing techniques, to the benefit of candidates as well as employers.

Let me know in the comments if you’ve encountered this. How can we have better interviews in a world where AI can do the thinking for someone? Thanks for reading.