Rethinking Assessment in the Age of AI
I was recently in Toronto catching up with my friend Andrew, a new PhD graduate and TA at the University of Toronto. As we talked, he mentioned he was in the middle of grading a massive stack of undergraduate papers. What he said next has stuck with me.
A year ago, Andrew would occasionally encounter an essay that was clearly AI-generated. Those students would often fail the assignment and be confronted about academic integrity. Today, he operates under the assumption that every paper submitted is AI-generated. The department has stopped calling out the behavior altogether; they simply grade the work as it is.
This shift indicates a fundamental, systemic breakdown. When only a few students cheat, they are the ones losing out on the value of their education. But when everyone is doing it, the very purpose of the assignment has been lost. The educational contract—that hard work equals learning—is broken.
Why students are reaching for the AI crutch
From the student’s perspective, this isn’t necessarily a moral failing; it’s a rational response to what they see as a chore. They don’t view the essay as a chance to apply their knowledge or develop critical thinking, but as just another hurdle in their way. The proposition of having that chore done quickly and for free is too good to pass up.
This sentiment is often amplified by a short-term outlook and cynicism about the future. If the job market is hopeless, and AI is going to replace the roles that rely on these “traditional” thinking and writing skills anyway, why bother?
The underlying problem is clear: if the method of assessment is so easily bypassed, and students don’t value the thinking involved, nobody is learning anything.
The essay is dead 💀
We have to face reality: the traditional, long-form, take-home essay is no longer a viable way to judge a student’s understanding in the age of generative AI. Educational institutions and teachers everywhere are rightly grappling with this fact.
We aren’t arguing that the underlying skills are irrelevant. A sincerely written essay is still one of the best vehicles for deep reflection, for synthesizing complex ideas, and for receiving substantive feedback from a teacher. Those are the capabilities worth preserving.
The challenge is to make “faking it” impossible or impractical. Instead of take-home assignments that an AI can ghostwrite overnight, we should prioritize live application. By focusing on how students use knowledge in real time, we ensure the grade actually reflects the person, not the software.
The new assessments must be:
- Interactive and time-sensitive
- Focused on a tight, fast feedback loop
- Impractical to cheat on by copy-pasting answers from a chatbot
Reclaiming learning through interactive design
Knoword is a good example of these principles in action. It checks all those boxes by moving assessment into a fast-paced environment where the focus is on what a student knows right now, rather than what they can produce later.
Prompting a chatbot and copy-pasting answers is a clunky, losing strategy in a timed game. By the time the AI generates a response, the student has already lost their momentum and fallen behind the pace of the round.
But the real value is how this format replaces the “one-and-done” nature of an essay with a continuous learning loop. Because the feedback is instant, students can play multiple rounds and internalize the right answers through repetition. It turns the assessment into a tool for growth, where the goal isn’t only to prove what you know, but to learn from your mistakes in real time.
AI as the educator’s assistant
The shift toward interactive learning doesn’t have to mean more work for the teacher. While much of the conversation around AI focuses on how students might use it to bypass an assignment, its real power lies in helping educators build better ones.
Putting these tools in the hands of teachers shifts the focus from grading a final product to building the experience itself. With Knoword’s AI tools, teachers can transform a simple idea or an existing document into a playable lesson in seconds:
- Start with a prompt: A simple prompt like “Ancient Egyptian Pharaohs” can instantly generate a custom game. This allows you to move from idea to implementation without spending hours manually drafting clues.
- Import your existing materials: Provide a list of vocabulary words or key terms and let the AI write the clues. You can even include specific context or guidance to ensure the language and difficulty are tailored to your students’ level.
- Test aural knowledge: Use AI to turn written clues into lifelike speech for testing listening comprehension or pronunciation. Shifting the focus from reading to listening creates an assessment format that is inherently more difficult for a standard chatbot to navigate.
A new contract for the classroom
The days of assuming an essay represents a student’s own thoughts are likely over. As Andrew’s experience at the University of Toronto shows, the “broken contract” isn’t a problem we can solve with better detection or stricter rules. We have to change the game entirely.
By moving away from static assignments and toward interactive, real-time environments, we can make the path of least resistance lead back to actual learning. The goal isn’t to win an arms race against AI, but to make engaging with the material more rewarding than bypassing it.
AI doesn’t have to be the thing that breaks education. When we use it to handle the manual labor of content creation, we free up educators to focus on what they do best: designing experiences that challenge, engage, and teach. The essay may be dead, but the opportunity to inspire active thinking and genuine learning has never been greater.
Ready to evolve your assessments? Start building your first AI-assisted learning pack today.