Beyond the Label: Learning Design in an AI Age

Silhouette of Patterson writing at a desk under a lamp, with chalkboard-style AI and grammar formulas above

No teacher can guarantee that learning will happen. What matters is the learning environment a tool creates: does it outsource thinking, or does it keep students actively engaged? A reflective piece on learning design, ownership, and evaluation, beyond hype and fear.




We are living in a loud moment. One camp speaks about generative AI as if it will save education by force of brilliance alone; the other warns it will hollow learning out from the inside. The debate keeps landing in the same exhausted place: euphoric certainty or anxious dismissal.

But learning has never belonged to certainty.

After eighteen years of teaching, I don’t claim to “cause” learning in another human being. No educator can honestly guarantee that learning will happen. The best we can do is shape, facilitate, and lightly moderate a conducive environment for it to happen, in the hope that it does. That has always been a professional ethic of good teaching: you create conditions in which thinking can take root, where practice becomes possible, where feedback arrives in a form the learner can use, and where effort doesn’t dissolve into shame.

So, when people ask whether AI is “good” or “bad” for education, I start with a different question: what kind of learning environment does this tool create? Does it mainly make it easier to outsource the thinking, or does it structure the thinking so that the student still reflects, creates, and applies, so that the student still essentially does the work?

That is where Sybille lives. Not as a legacy label. Not as a guarantee. But as a teacher-built, exam-aligned AI-assisted tutoring system designed to make learning more likely, not to pretend that learning is automatic. And in the English Abitur, that distinction matters. In practice, this exam is not about regurgitating facts or solving grammar puzzles. It is about articulating competences in a way that can still feel organic, yet remains methodical, controlled, and defensible on paper.

Tutoring in Berlin for the English Abitur at home: parents and student working quietly on a text at the kitchen table.

Performance is not the same thing as learning

The OECD puts its finger on a core danger with generative AI: it can help students to complete tasks, but task completion is not the same thing as learning. When students rely on GenAI to do the cognitive work for them, the surface can improve while the underlying competence fails to develop in the way education intends.

What matters here is not morality, it is mechanics. Learning needs effort of a particular kind: the effort of selecting, structuring, revising, and transferring a method to a new situation. If a tool (any tool) removes that effort entirely, it may increase output while reducing growth. The OECD’s 2026 outlook argues for treating GenAI as a “learning partner, not a learning shortcut,” and stresses the need to move beyond generic chatbots toward purpose-built tools with explicit pedagogical intent, then evaluate them accordingly.

There is also a cultural maturation happening around these tools. Early on, many people used GenAI to prove to themselves, and to others, that it can produce. The more grown-up use is when the tool steps back. Not eager to impress, not hungry to perform, but steady enough to scaffold from the side: shaping, prompting, directing attention, and allowing learning to unfold.

For parents, the practical meaning is simple and quietly sobering. A student can look more competent on paper while becoming less able to produce that competence independently, especially under exam pressure. The text can improve inside strictly structured environments while the writer stagnates. And if we don’t acknowledge that risk, we end up arguing about the wrong things: banning the tool, celebrating the tool, moralizing the student, while the real issue may be design. And to be fair, traditional methods have also been known to produce unbalanced outcomes. The difference now is not that education suddenly became fragile, but that the shortcut has become unusually tempting, and unusually fluent.

What older tutoring systems got right, and what we can keep without pretending it’s 1999

Long before large language models, the field of intelligent tutoring systems built its reputation on an ethic that is still worth keeping: don’t just provide answers. Diagnose what went wrong. Offer scaffolded hints. Encourage transfer, so the learner can apply a strategy to a new but similar task. Track progress over time, not as surveillance, but as a way of deciding what the learner needs next. In other words, tutoring is not only the delivery of correctness. Tutoring is the careful shaping of conditions in which understanding can form.

That approach is not just theory. A major meta-analytic review of intelligent tutoring systems reports meaningful learning effects overall, while also stressing that results depend on implementation quality and on how learning is assessed. A widely cited review by VanLehn similarly argues that “tutoring” is not one thing; effectiveness depends on the tutoring interaction that the system actually produces.

But here is the part that matters now: you don’t have to worship the legacy label to respect the legacy lesson. An AI tool doesn’t become educational simply because it can speak fluently. And a system doesn’t become trustworthy simply because it carries an academic title. What matters is whether the tool behaves in ways that support learning rather than replacing it, and whether its claims can be evaluated without hype. This is why I don’t rush to call Sybille an “ITS.” Not because the label is forbidden, but because labels can become camouflage. Sybille embraces her youth and runs with her LLM peers, but with a different ambition: to be judged by her learning design.

In my years of teaching and learning, I’ve also met brilliant educators who are deeply qualified in their domain, yet still struggle to reach students, because expertise can harden more toward performance than transfer. The question is rarely whether they know the subject. The question is whether their practice meets learners where they are, whether it models the transfer it wants to see, and whether the lesson leaves the student bigger rather than smaller.

The learning contract: hands, feet, and the built environment

Cognition does not live in the brain alone. We have hands and feet. We have habits. We have practices that become memory through repetition inside built environments: desks, kitchen tables, notebooks, screens, quiet routines at ordinary hours.

Tools have always lived inside learning. A saw cuts wood or rock or steel, something no amount of mind power can do. A calculator stabilizes arithmetic. A spell-check catches slips. A well-designed feedback loop shapes a writer. None of these tools are “cheating” by default. They become harmful only when they replace the growth they were meant to support. That is the learning contract I care about: the student must remain the author of the outcome, and gradually the owner of the method too.

So what does a defensible AI-assisted tutoring system do, without pretending it can guarantee learning? It tries to keep the student in the work. It creates space for practice rather than a shortcut out of practice. It turns requirements into steps, and steps into repeatable habits. It produces feedback that leaves traceable learning artifacts, not just a finished answer, but a reason, a pattern, a next move the learner can attempt again. It can be configured to prompt reflection and transfer: can you explain what you did, why you did it, and how you would do it again under pressure? It should not rule by humiliation, punishments, or status games. It should preserve dignity while staying precise.

This is not a promise that learning will happen. It is a promise that the environment will not be designed to let learning be silently bypassed. And that stance is not only ethically safer, it is strategically aligned with the OECD’s warning about outsourcing thinking while mistaking performance for competence. The point is not “never use the tools.” The point is that pedagogy matters, design matters, and evaluation matters.

So I don’t sell certainty, and neither does Sybille. I don’t sell a guarantee. I don’t even sell the romance of a label. I build and moderate a specialized learning environment. I encourage reflection and ownership of outputs. I invite, simplify, and standardize evaluation. And I choose a tool philosophy that refuses to confuse polished output with real capability, because in the end the student is the one who must write the exam, defend their reasoning, and carry the skill forward into their life.


Acknowledgement (SPIEGEL framing)
A recent SPIEGEL interview (sent to me by a friend as a photograph) captured the current mood perfectly: we talk about AI in education either “Viel zu euphorisch oder viel zu kritisch” (“far too euphoric or far too critical”; SPIEGEL Magazin, Feb. 2026). I’m not interested in either extreme. I’m interested in learning design, student ownership, and evaluation.

Footnote / descriptor
Sybille is a teacher-built, exam-aligned AI-assisted tutoring system, purpose-built for structured feedback, safe boundaries, and student ownership.

References
OECD (2026). OECD Digital Education Outlook 2026: The State of Digital Education. OECD Publishing.
Kulik, J. A., & Fletcher, J. D. (2016). Effectiveness of intelligent tutoring systems: A meta-analytic review. Review of Educational Research, 86(1), 42–78.
VanLehn, K. (2011). The relative effectiveness of human tutoring, intelligent tutoring systems, and other tutoring systems. Educational Psychologist, 46(4), 197–221.
TrainingTree article (German)
SPIEGEL Magazin (June 2026). “Viel zu euphorisch oder viel zu kritisch.” Interview with Prof. Dr. Ute Schmid. (Source available as a photograph provided to the author, no public link included.)

Ian Antonio Patterson - www.iAntonio.com
