AI tools can help you prepare for CASPer — but the feedback they give you may not reflect how your responses are actually scored.
It makes sense. AI tools are free, available at any time, and will respond to whatever scenario you throw at them. For applicants trying to fit CASPer preparation around a busy schedule, the appeal is obvious.
The problem isn't using AI at all — it's treating the feedback it gives you as an accurate reflection of how a real evaluator would score your response. Those are two very different things, and confusing them is one of the most common preparation mistakes.
The distinction matters because what AI is genuinely good at and where it falls short are not always obvious.
This is worth addressing directly, because it comes up a lot. Many applicants ask an AI tool to score their response and receive a quartile in return — something like "this response would likely place you in the 3rd quartile."
That number is not meaningful. CASPer quartiles are determined by comparing your scores to everyone else sitting the test in the same cycle. No AI tool has access to that data. The quartile it gives you is a guess dressed up as a result, and treating it as reliable feedback can give you a false sense of where you actually stand.
AI tools are trained to recognise good writing. CASPer evaluators are trained to assess something different — the quality of your reasoning, the depth of your empathy, and whether you've genuinely engaged with every perspective in the scenario.
A response can be well-written and still score poorly if it considers only one perspective, describes actions without explaining the reasoning behind them, or jumps to a solution before acknowledging the situation. AI is unlikely to flag any of these issues, because they don't affect the surface quality of the writing.
The reverse is also true. A response with minor typos or informal phrasing can score highly if the thinking behind it is strong. Spelling and formatting do not affect CASPer scores — content does. AI feedback that fixates on surface polish trains you to optimise for the wrong criteria.
None of this means avoiding AI entirely. It means using it for what it's actually good at — and being clear-eyed about where it falls short.
Use AI to generate scenarios. Give yourself a time limit, write your response, and then review it yourself against the question types and framework you've learned. Ask yourself whether you acknowledged every perspective, whether you explained your reasoning, and whether your response went beyond describing what you'd do to explore why it matters. That self-review process is more valuable than any AI-generated score.
For feedback that reflects how responses are actually evaluated, look for resources built on real evaluator experience — not general writing quality. The criteria CASPer uses are specific, and preparing against the right criteria is what makes the difference.
Try a free timed scenario and get feedback on your reasoning, empathy, and perspective-taking — developed from assessing thousands of real CASPer responses.
AI can be useful for generating practice scenarios and getting familiar with the format. Where it falls short is feedback — AI tools don't know the actual scoring criteria and tend to reward well-written responses regardless of whether they demonstrate what evaluators are looking for.
An AI tool cannot predict your quartile. CASPer quartiles are determined by comparing your scores to everyone else sitting the test in the same cycle, and no AI tool has access to that data — so any quartile it gives you is not meaningful.
AI is most useful for generating practice scenarios so you can build familiarity with the format and practice thinking under time pressure. Use it to increase the volume of your practice — but look elsewhere for feedback on the quality of your responses.