Should students use AI in class?

A pedagogical decision guide on when, how, and why — for teachers and school leaders who want to choose tools with judgement, not hype.

Download the PDF →

Introduction

The question isn't technical — it's pedagogical

The debate on AI in schools has quickly hardened into two camps. One sees AI as a breakthrough that must enter the classroom today — otherwise students will fall behind. The other sees AI as a threat to thinking and wants to keep it out of lessons for as long as possible. Both positions miss the same point: the question of whether students should use AI is neither a technical question nor a principled one. It is a pedagogical question, and it's asked per lesson — not per school district.

Schools exist so that students can learn. That is the only legitimate reason for a tool to take up class time. If a tool helps students learn more or better, it should be used. If it makes them learn less or worse, it shouldn't. The same applies to pen and paper as to an AI chatbot. What's new isn't the principle, but the fact that we now have to apply it to a tool category that in some cases elevates learning and in others short-circuits it.

Johan's note

After 28 years working in and with schools — 10 as a teacher, 7 as a school leader and 11 in educational leadership roles — I've watched several technology waves come and go. The tools that eventually delivered a measurable lift were the ones that let teachers do more of what already worked. The ones that disappeared were those that required teachers to step aside. AI is the most powerful technology wave in a long time, but the pattern is the same: the pedagogical judgement decides whether the tool elevates learning or prevents it from happening.

— JL

What this guide does — and doesn't do

This guide is a decision aid, not a product review. I don't cover which chatbot is best, what the latest model can do, or which prompts work best in subject X. Other material handles that — including the practical prompt sets and tool-specific guides on choosewise.education. This guide answers a more fundamental question: before you let AI into a lesson, what do you need to have thought through?

The guide is divided into three parts plus a checklist. Part 1 covers the pedagogical rationale — why schools exist, and how the WISE Framework for Education helps you make tool decisions. Part 2 covers the prerequisites that must be in place before AI meets students: teacher AI literacy, the ability to evaluate AI output, and knowledge of the regulatory landscape. Part 3 covers what happens on the classroom floor. The checklist in Part 4 is the tool you can carry into actual lesson planning.

Part 1

Learning is the point

A school's primary objective is to deliver what its curriculum demands. Digitalisation and AI have no intrinsic value, but without accounting for technological change in society, we end up preparing students for a society that no longer exists. The objective has grown larger and more complex, yet the core is unchanged: students are meant to learn what the school is preparing them for. Every tool choice — AI included — should be judged against that yardstick. Learning is the focus, and the more tools a teacher can draw on, the more often the right tool can be used at the right moment — but only if the teacher actually masters the tools they have access to.

The principle

If the tool elevates learning, use it. Otherwise don't.

This sounds self-evident until you look at how these decisions actually get made. In many schools, decisions about AI tools are made in the IT department, in the leadership team, or by a single enthusiast on the staff — and the question being asked is "should we roll out AI?" That's the wrong question. The right one is "for this class to learn this, does AI elevate learning or not?" The answer may well be no for one class and yes for another, or no for one topic and yes for another — and it needs to be the teacher's decision, because the teacher is the only one with all the relevant facts on the table.

In April 2025 the OECD published Unlocking High-Quality Teaching. It is not an AI report — it is a report on what teaching actually is, grounded in 150 classrooms across 40 countries. The key finding is worth quoting: "In an era defined by rapid innovation and constant change, it is tempting to focus on the latest trends or technologies that promise transformative change. However, refining existing teaching practices by closely examining the current realities of classrooms can be a powerful — and even potentially safer — approach to addressing stagnating student achievement." No other factor inside a school, the report concludes, has a greater impact on student outcomes than the quality of teaching. No technology is going to change that.

Watch out

If the tool decision is made in a different room from the one where the learning is planned, you have a systems problem. Procurement, the school's licence portfolio and the AI policy should create possibilities; the decision to use AI in a specific lesson belongs to the teacher planning it.

The decision framework

The WISE Framework for Education — choose tools for learning

WISE is the decision framework I recommend for schools, districts, and independent education providers. It has four steps that should be worked through before the tool is chosen — not the other way round. Applied to the question "should students use AI in this lesson?" it is especially useful, because it enforces the order: pedagogy first, tool last.

W — Weigh the learning goal
What should students actually learn in this lesson? What prior knowledge do they bring? What's the next step in their learning? Always start with pedagogy, never with the tool. If you can't articulate the learning goal without mentioning AI, you've started from the wrong end.
I — Inspect what the subject requires
This subject, this specific topic — what kind of thinking does it demand? An essay in language arts where the student is meant to develop their own voice demands something different from a problem-solving task in mathematics where the student needs fast feedback on their hypotheses. The same AI tool can be a blessing in one case and a shortcut past the actual learning in the other.
S — Select the right tool
Choose the tool that best serves learning for the students you have: pen and paper, a physical book, a digital book, a simpler digital tool, an AI chatbot, no tool at all, or something else entirely. The answer is rarely binary. Sometimes it's "AI for 20 minutes, then pen and paper for the rest of the work." Sometimes it's "no AI for this task — but students should know why."
E — Evaluate the outcome
Did it work? Did students learn better with this tool than they would have with another? It's not enough that the lesson seemed to go well, or that students enjoyed it. It has to show up in what students can actually do afterwards. Adjust and run the cycle again.

A full walkthrough of the WISE framework — with separate versions for teachers and school leaders — is available at choosewise.education/wise/. It's free, shareable, and works as a reference for individual lesson planning as well as for a school's or district's AI strategy.

The central distinction

Challenging thinking, or replacing it

When students use AI, they can do it in two fundamentally different ways that look superficially alike but have entirely different effects on learning. The first is to let AI replace thinking — "write my essay on World War II", "solve these maths problems", "give me three arguments for and against nuclear power". The result is a finished product, delivered without the student doing the cognitive work that learning consists of. The second way is to let AI challenge thinking — "here's my answer, explain why you think the reasoning is off", "I've tried this problem but I'm stuck, which step is most likely wrong?", "I've drafted an op-ed, which counter-arguments are strongest and how would I address them?" The result is that the student works harder, not less, but with a sparring partner that raises the ceiling on the work.

The research is currently mixed, and it's worth being honest about that. In one corner stands a substantial 2025 meta-analysis (Wang & Fan, Computers & Education, 69 studies) showing that ChatGPT in controlled experiments tends to improve students' performance, motivation and higher-order thinking in the short term. In the other corner stands a set of more recent studies pointing to long-term effects in the opposite direction. An EEG study from MIT Media Lab 2025 (Kosmyna et al.) measured brain connectivity while participants wrote essays with or without LLM help — the LLM group had up to 55 percent lower brain connectivity. A parallel study from Microsoft Research (Lee et al., CHI 2025) of 319 knowledge workers showed that high confidence in AI goes hand in hand with less critical thinking. A Swiss study (Gerlich 2025) found a correlation of -0.75 between AI use and critical thinking among younger users.

What does that add up to? Probably that AI makes it easier to perform on today's task while making it harder to build the cognitive architecture required for future tasks. That is a reasonable assessment, not a proven truth — but it is reason enough to apply a precautionary principle in a school context. A student who got good marks in year 9 but can no longer think their way through a hard problem on their own isn't a success story.

Johan's note

Good use of AI in a school context usually means the student starts the work, AI and student co-create, and the student ends by analysing and reflecting. The flow is student — student & AI — student. Lessons where AI does all the work are not what we built the school system for.

— JL

Part 2

Prerequisites before AI becomes part of the lesson

If the teacher doesn't have enough AI literacy to judge whether AI should be used at all, things that shouldn't happen will happen. This can be prevented.

AI literacy

Teacher knowledge isn't a bonus — it's a prerequisite

The most ignored fact in the AI-in-schools debate is that many teachers who are meant to let students use AI in their lessons have themselves never had solid AI training. That's not a complaint about teachers — it's an observation of a systemic gap. In many cases, education authorities have purchased AI tools and made them available to the staff without allocating time for the upskilling those tools require to be used responsibly. The outcome is predictable: students get to use tools whose output the teacher cannot fully evaluate.

In 2024 UNESCO published its AI Competency Framework for Teachers. It defines 15 competencies across five dimensions: a human-centred mindset, AI ethics, foundational AI understanding, AI pedagogy, and AI to support the teacher's own professional learning. The framework doesn't say every teacher must be an AI expert. It says there is a baseline that is a prerequisite for responsibly letting students use AI — and that baseline is not reached by clicking through a pre-recorded 30-minute introduction.

Watch out

If your school doesn't have a stated investment in AI professional development for the whole staff — not just the early adopters — something is out of order. The school leadership needs to engage with the regulatory framework (in the EU, the AI Act; elsewhere, the equivalent in your jurisdiction) and ensure the practices the framework requires are actually in place.

Three foundational skills

Source criticism, bias and hallucinations

Three concepts recur in every conversation about AI in schools, and all three need to be second nature for teachers and students before AI is used in instruction.

Source criticism in the AI era

Classical source criticism involves, among other things, examining the author, the purpose, when the text was written, and the context. An AI-generated text has no author in that sense — it has a model that produces statistically plausible text in response to your query, trained on a corpus whose composition you don't know. That doesn't mean the text is useless. It means it has to be checked against external sources before it's used as the basis for anything. Classical source criticism isn't obsolete; it's extended by a new category of text where you can't ask "who said it?" but must ask "what do independent sources say?"

Bias

AI models are trained on data that reflects the context in which it was produced. That means a chatbot trained mostly on English-language internet material will have a more Anglo-American frame of reference than a local one, that underrepresented perspectives remain underrepresented, and that the model sometimes reinforces stereotypes without the person asking being aware of it. For teachers, this means AI output must be scrutinised with the same critical eye you'd bring to a textbook produced in another country — not necessarily wrong, but not neutral either.

Hallucinations

When a chatbot doesn't know, it rarely says "I don't know." Instead it produces text that sounds plausible — including references, figures, quotes and names that don't exist. This is called hallucination and is not a technical bug that will be fixed next week; it's an intrinsic consequence of how today's large language models work. Detecting a hallucination isn't a specialist skill — it's checking claims. But it requires the reader to have subject-matter knowledge to evaluate answers against. A student who doesn't have that is defenceless.

The myth of technical superiority

Teachers don't need to know the tech better than students

A common excuse for not bringing AI into lessons is "the students know this stuff better than I do." That's often true in a superficial sense — a 14-year-old can navigate an interface faster than many adults — but it's wrong in the sense that actually matters. The student doesn't always know what an LLM is. The student rarely knows how to craft a prompt that elicits deeper answers. The student can almost never judge whether the answer is true. And above all: the student doesn't have the teacher's understanding of what the lesson is meant to teach.

That last point is the teacher's expertise, and it's irreplaceable. So the more constructive framing of the same situation is this: teachers don't need to know the tech better than students, but teachers need to know the pedagogy better — and then you can learn the tech together. This works, provided the teacher holds on to the role of setting learning goals and judging results, and uses the students' tech familiarity as a resource in the classroom rather than a threat.

Johan's note

I've met schools where a few enthusiastic students became classroom assistants to the teacher specifically on chatbot handling and prompt writing — not in the pedagogical decision-making. Both students and teachers benefited. The students got a responsibility role and grew from the trust placed in them; the teacher gained access to tech familiarity without having to become a technician. The condition is that responsibility for learning is never handed over to the students themselves — it stays with the teacher.

— JL

Regulation

GDPR and the EU AI Act — what teachers need to know

Disclaimer: I am not a lawyer. This section is a pedagogical orientation in the regulatory landscape, not legal advice. For binding guidance, consult your organisation's data protection officer, a lawyer, or the relevant regulator.

If you are in the EU, two frameworks apply: the EU AI Act, which classifies AI systems by risk level and places stricter requirements on "high-risk systems" than on lower-risk ones; and the GDPR, which regulates all processing of personal data — including what happens when a student types something into an AI chatbot. Both apply. Both are the teacher's and the school authority's responsibility, and neither can be postponed.

If you are outside the EU, different rules apply: in the US there is no single federal AI regulation but state-level rules (California, Colorado and others) exist; FERPA governs student data. In the UK there is UK GDPR. In other jurisdictions, check with your country's data protection authority and any AI-specific regulation. The principle this section describes — that AI used for consequential decisions about a student needs stricter handling than AI used for lesson planning — is widely applicable even when the exact legal wording differs.

The EU AI Act — what the classification means (EU)

For routine teaching work — lesson planning, helping a student phrase something, explaining concepts — AI use falls in the minimal risk category and the requirements are relatively light. Where things get serious is when AI is used to make or support decisions that affect a student's access to education. Then the Act speaks of high risk. Examples: AI that grades or assesses student performance, AI used in admissions or transitions between school forms, AI that evaluates student conduct or determines special educational needs. Here there are documentation requirements, transparency requirements towards the student, and requirements for human review before a decision takes effect.

For the individual teacher, the most important conclusion is this: AI can support your judgement, but it must be you who judges. If an AI service is used in a way that replaces a teacher's judgement of a student — not just suggests, but decides — it's likely an area where the AI Act imposes stricter requirements. Involve leadership and the school authority before such use is introduced. If you are in any doubt about where the line is or what you're permitted to do — check with your school leadership.

GDPR in practice (EU) / equivalent data protection rules elsewhere

GDPR applies every time a personal data point ends up in the AI system. That includes a student's name, a student's written work, guardians' contact information, information about health or special needs — anything that can identify a young person. Two practical consequences:

  • Don't upload student data to AI services without first verifying that an approved data processing agreement exists for that service. Many consumer chatbots (e.g. ChatGPT without a specific school contract) do not meet this bar.
  • Anonymise when in doubt. Replace student names with roles ("Student A"), generalise specific dates and locations, strip anything that can identify a person — see the sketch below for what this can look like in practice.
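
For the technically minded reader — or the IT coordinator preparing material on teachers' behalf — here is a minimal Python sketch of that second point. It is illustrative only, not production-grade de-identification: the names are invented, the matching is deliberately naive, and a real workflow would also handle dates, locations and free-text identifiers — and be signed off by your data protection officer.

import re

def pseudonymise(text: str, names: list[str]) -> str:
    """Replace each known student name with a neutral role label."""
    for i, name in enumerate(names):
        label = f"Student {chr(ord('A') + i)}"  # Student A, Student B, ...
        # Naive substring match; a real tool would use word boundaries
        # and also catch nicknames, dates and locations.
        text = re.sub(re.escape(name), label, text, flags=re.IGNORECASE)
    return text

note = "Maria Svensson struggled with fractions; compare with Ali's draft."
print(pseudonymise(note, ["Maria Svensson", "Ali"]))
# -> "Student A struggled with fractions; compare with Student B's draft."
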
Rule of thumb

If you're wondering whether something is okay under your data protection regime — treat it as not okay until someone has confirmed otherwise. Better to pause a lesson and ask than to discover later that a student's personal data has ended up in an AI training pipeline abroad.

Part 3

On the classroom floor

You cannot say "students may use AI" and then leave the room. We've seen that pattern before, and the research from the 1:1 laptop era tells us what it leads to.

We've seen this before

What the 1:1 laptop era taught us

When schools rolled out "one laptop per student" (1:1) in the 2010s, researchers tracked what happened. The research record is clear and internationally consistent. OECD's PISA report Students, Computers and Learning (2015) found what became known as the "digital paradox": students who used computers frequently at school performed worse on PISA, even after controlling for socioeconomic background. The US economics study Carter, Greenberg & Walker (2017) ran a randomised controlled trial at the US Military Academy and found that classes where laptops were permitted had 0.18 standard deviations lower scores than classes where they weren't. A meta-analysis of 1:1 programmes in Review of Educational Research (Zheng et al. 2016) concluded that positive effects existed — but only where teachers integrated the devices pedagogically, not simply because the technology was present.

The most cited Nordic longitudinal study — Grönlund et al., Örebro University, 2011–2014 (the Unos Uno project, covering roughly 20 schools in 11 municipalities) — reached a similar conclusion with striking directness: the schools that improved their results with 1:1 had more teacher-led time, not less. Where the teacher stepped back and let the device "take over," results fell. Where the teacher reinforced their own pedagogical work with the device as an aid, and stayed closer to students' work, results improved. The study also showed that solitary work increased in 1:1 classrooms, that social media distracted a substantial share of students, and that weaker students were hit hardest — precisely the students schools exist to help.

Grönlund's summary line is worth remembering: "A school IT project isn't a tech project, it's a change project." Technology doesn't elevate learning on its own. It amplifies the pedagogical choices the teacher has already made. If those choices are good, they get better; if no choices have been made at all — if students are simply turned loose in a digital environment — you get what the 1:1 research consistently documented: more solitary work, more distraction, worse learning, especially for those already struggling.

Note

The conclusion is not that AI should be kept out of classrooms. The conclusion is that AI in the classroom without teacher oversight and pedagogical framing will repeat the 1:1 era's mistakes — with greater force this time, because AI is a far more capable tool for outsourcing thought than Google and Wikipedia ever were.

The technical gap

The visibility gap — today's AI tools don't show teachers what students are asking

The biggest practical gap between "what schools need" and "what the AI industry delivers" right now concerns teacher visibility. The largest chatbots in K-12 classrooms today are Gemini (Google) and ChatGPT (OpenAI). Microsoft's Copilot runs on the same OpenAI models that power ChatGPT. Apple Intelligence hands off to ChatGPT for requests that go beyond the device. K-12-specific products include Khanmigo, MagicSchool, and ChatGPT for Teachers. What the majority of these tools share, according to what's publicly advertised, is that they do not give teachers a clear aggregated view of which questions the students in a class have asked the AI during a lesson.

Why is that a problem? Because teachers need it to teach. If a teacher can see that twelve of twenty-five students asked the AI about the same thing in similar ways — e.g. "what's the difference between an allele and a gene?" — that's a strong signal that something in the explanation didn't land, and that a short whole-class discussion would lift understanding across the room. If the teacher has no visibility at all, that signal never leaves the students' screens: twelve students quietly believe they've understood something they haven't. That is precisely the situation the 1:1 research described, and one that AI makes sharper rather than softer, because AI answers sound authoritative even when they're wrong.
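
To make the signal concrete, here is a minimal Python sketch of what an aggregated class view could compute from a lesson's question log. It illustrates the idea, not any vendor's product: the log, the keyword matching and the 40 percent threshold are all invented, and a real dashboard would need proper language handling rather than naive substring matching.

# Hypothetical lesson log: the questions each student asked the AI.
lesson_log = {
    "Student A": ["what's the difference between an allele and a gene?"],
    "Student B": ["allele vs gene?", "what is a genotype?"],
    "Student C": ["explain alleles and genes please"],
    "Student D": ["how do I cite a website?"],
}

def flag_shared_confusion(log: dict[str, list[str]], keyword: str, threshold: float = 0.4) -> None:
    """Flag a topic if at least `threshold` of the class asked about it."""
    hits = sum(
        any(keyword in q.lower() for q in questions)
        for questions in log.values()
    )
    if hits / len(log) >= threshold:
        print(f"{hits}/{len(log)} students asked about '{keyword}' — consider a whole-class recap.")

flag_shared_confusion(lesson_log, "allele")
# -> 3/4 students asked about 'allele' — consider a whole-class recap.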

There is one notable exception in the K-12 space. SchoolAI offers an aggregated class-level dashboard that surfaces common misconceptions and recurring topics across the class. It shows the feature is technically possible. The question schools and buyers should ask every other vendor is: why isn't this standard? When enough schools start requiring aggregated class-level visibility as a purchase criterion — not surveillance of individual students, but the pedagogical feedback teachers need to do their job — the feature will get built by more vendors.

What education authorities say

Guidance from major education bodies (UNESCO, OECD, and national curriculum authorities) converges on a practical point: start together, visibly. Using AI jointly with students on a large screen — where factual errors and biased responses can be discussed in real time — is the single best first step. That advice carries regardless of jurisdiction because it doesn't depend on the tool; it depends on the teacher.

Digital classroom management

A classroom management tool isn't a luxury — it's a prerequisite

Whether or not your school uses AI, it is by now reasonable to expect schools to have a classroom management tool that governs what students can do on their devices during class. This isn't about distrusting students; it's about the teacher's job to create a learning environment where students succeed — and they don't, if they're one click away from TikTok during the lesson. The 1:1 research was explicit on this point: the problem wasn't the students, the problem was that teachers lacked the tools to manage the digital environment in the classroom, which meant responsibility was abdicated to the student.

When AI becomes part of the lesson environment, the need grows. A management tool that can open and close access to specific AI services for specific lessons — so that a student has access to one approved AI tool for a task but not to the whole internet during that time — is a baseline requirement for responsible in-class AI use. Combined with aggregated class visibility (if the vendor offers it), teachers suddenly have the instruments the 1:1 researchers were calling for back in 2014.
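
As a sketch of the underlying logic — not any real device-management or filtering product — the rule such a tool applies can be as simple as a per-lesson allowlist. The times, the domain and the default-deny choice below are all invented for illustration.

from datetime import time

# Hypothetical schedule: (start, end, allowlisted domains) per lesson block.
LESSON_BLOCKS = [
    (time(9, 0), time(9, 20), {"approved-ai.example.edu"}),  # AI block: one approved tool
    (time(9, 20), time(10, 0), set()),                       # analogue block: no web access
]

def is_allowed(domain: str, now: time) -> bool:
    """Allow a request only if the current lesson block allowlists its domain."""
    for start, end, allowed in LESSON_BLOCKS:
        if start <= now < end:
            return domain in allowed
    return False  # outside any scheduled block: default deny

print(is_allowed("approved-ai.example.edu", time(9, 5)))  # True
print(is_allowed("tiktok.com", time(9, 5)))               # False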

Johan's note

The districts I've worked with that have made the most progress on AI didn't get there by buying the AI tool first. They all started by making sure teachers got professional development, that schools had management tools for the digital classroom environment, and that a clear AI policy with defined responsibilities existed. Then the tools arrived. The order isn't a detail — it's the work itself.

— JL

Part 4

Checklist: Should students use AI in this lesson?

Twelve questions in four categories. Inspired by Discovery Education's "Should AI Be Used in Schools?" but reframed for lesson-level decisions, not policy decisions. Works as a quick check before a lesson and as the basis for collegial conversations.

Pedagogy

  1. What learning should this lesson produce, and does AI elevate that learning or replace the work students need to do themselves? If the second half of the answer is "replace," choose a different tool.
  2. Does AI challenge the student's thinking, or replace it? "Explain why my answer is wrong" challenges; "write my essay" replaces.
  3. Would the same learning outcome be reached equally well or better without AI? If yes, and AI adds nothing beyond convenience, don't use AI.
  4. Does AI use fit the subject and topic? (I in WISE — Inspect what the subject requires.)

Prerequisites

  5. Do I as the teacher have enough AI literacy to evaluate students' AI responses and catch bias and hallucinations? If not — wait on using AI for this task until you do, or plan the lesson so you evaluate AI responses together as a class.
  6. Have students practised source criticism of AI answers? If not — do a short joint exercise on the large screen before the actual AI lesson.
  7. Is the tool compliant with your jurisdiction's data protection rules and approved by your school authority? In the EU this means GDPR compliance with a signed data processing agreement; outside the EU check the equivalent rules (e.g. FERPA in the US). If in doubt — ask your data protection officer or IT coordinator before the lesson, not after.

Oversight

  8. Do I have visibility into what students are asking the AI during the lesson — at least at the aggregated class level? If not, the lesson depends on you moving around the classroom, looking over students' shoulders and listening. Plan the time for that.
  9. Does the school's management tool or clear rules prevent students from leaving the assigned task? If the school lacks a management tool — use explicit AI time in short blocks and combine with analogue segments.
  10. Can I interrupt and hold a short whole-class discussion if I notice the class is stuck on the same point? Plan for the possible interruption in the lesson design from the start.

Follow-up

  11. How will I assess whether students actually learned more or less than they would have without AI? (E in WISE — Evaluate the outcome.) Plan a concrete follow-up — a short quiz, an oral presentation, a written reflection — where the student, without AI, has to demonstrate what they've learned.
  12. What signs of cognitive outsourcing should I be looking for? E.g.: answers that are too complete and too abstract for the student's level, absence of the typical mistakes the student usually makes, loss of the student's own voice in text.

Rule of thumb

If you can give a confident answer to all twelve, this is likely a good AI lesson. If several of them give you pause — replan, or choose a different tool. The checklist is an aid, not bureaucracy: it runs fastest once you've used it a few times, and the point is to ask the questions in the same order every time.

FAQ

Frequently asked questions

I don't know AI myself — should I wait until I've learned it?

No, but you should hold off on letting students use AI freely on their own. There's an intermediate mode that international guidance (UNESCO, OECD) consistently recommends: start together on a shared screen. You and the class ask questions together, read the answer together, discuss what's right, wrong, biased or hallucinated — together. That's a powerful pedagogy in its own right; it lifts your own AI literacy while students learn source criticism in the AI era, and it requires no teacher expertise beyond the judgement you already have.

What should I require from a vendor before buying an AI tool for students?

Three minimums: (1) data protection compliance appropriate to your jurisdiction, with a signed processing agreement approved by the school authority; (2) teacher visibility — at least at the aggregated class level, so teachers can see recurring questions and patterns during a lesson; (3) classroom control — the ability to open and close access during specific lesson blocks. If the vendor lacks points 2 or 3, require a timeline for when they will. That kind of demand is what eventually moves the industry.

What do I do if the school doesn't have a device management tool?

Raise it with the principal and the IT coordinator. This is a school authority issue, not an individual teacher issue. In the meantime — choose AI lessons where students use AI in short, structured blocks (10–15 minutes) rather than the whole lesson, and alternate with analogue segments where the device is closed. The constraint isn't optimal, but it's manageable.

How do I tell if a student has outsourced the work to AI?

It's rarely a single signal that gives it away. Common patterns: the text is more competent than what you've previously seen from the student without a visible explanation, the student's own voice is gone, the structure is suspiciously polished, the kinds of mistakes the student usually makes are absent. AI detectors are unreliable and should not be used as evidence. The easier path is often assessment design: if you regularly include oral components, if writing happens in the classroom, or if there's process documentation, it becomes harder to hide behind AI — and the problem solves itself from a different angle.

What does the EU AI Act mean for my classroom — the short answer?

For most everyday work — lesson planning, differentiation, support for individual students — very little. You're in the minimal-risk category and normal data discipline is enough. The AI Act matters primarily when AI is used to assess or decide about the student — admissions, grading, behavioural judgement or special needs determinations. If your school is considering such use, involve the data protection officer and leadership before anything is introduced. See Part 2 · Regulation above for the full walkthrough. If you are outside the EU, the same principle applies under different rules: consequential AI about students needs stricter handling than AI for lesson planning.

Do we need a Data Protection Impact Assessment (DPIA) before introducing AI?

It depends on the use. For routine teacher assistance — where no student personal data is entered into the system — a DPIA is rarely required. For uses where student data may be processed, where AI supports decisions about the student, or where the use involves systematic monitoring, a DPIA is often required (under GDPR Article 35 in the EU; consult equivalent rules elsewhere). Consult your data protection officer. In the EU, national DPAs publish guidance; outside the EU, check your country's privacy regulator.

Is AI good or bad for student learning?

Wrong question. AI is a tool, and tools are good or bad only in relation to the learning they're used for. The same AI chatbot can be a gift for a student who uses it to get their reasoning challenged, and a disaster for a student who uses it to avoid thinking. The interesting question is: "For this student, in this lesson, for this learning — is AI the best tool choice?" That's what the WISE framework and this checklist are for.

Where do I get help with AI policy for my school?

Start with the resources you have at your school and with your education authority. If you need external help, check with the reseller or platform partner you already have an agreement with. If those resources aren't sufficient, you're welcome to reach out to me. I work consultatively with schools and education authorities on these questions — AI policy, staff development, rollout aligned with the WISE framework, and strategy in the light of the EU AI Act. Get in touch via LinkedIn (link in the footer).

Take the guide with you

Download the portable PDF — same content, premium layout, perfect for offline reading or for sharing with colleagues.

Updated April 2026.

License

This work is licensed under CC BY-NC-SA 4.0. You may copy, share and adapt the material for non-commercial purposes, as long as you credit Johan Lindström as the source and keep the same license on any derivative works.