Google DeepMind's Gemma 3 runs on a standard laptop or phone: 140+ languages, 128K context window, competitive with large closed models. Removes Big Tech subscription dependency and keeps student data on-device. Changes the institutional AI access model entirely — relevant everywhere, not just where hyperscaler infrastructure is affordable.
Google DeepMind · arXiv:2503.19786

The GUIDE framework (Chu et al.) addresses the core objection to AI grading at scale: what happens when responses look similar but score differently? Using contrastive "boundary pairs", it outperforms standard retrieval across physics, chemistry, and pedagogical content knowledge, and generates its own discriminative rationales, reducing the need for manual expert curation.
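A minimal toy sketch of the boundary-pair idea, under loose assumptions: retrieve the stored pair of near-identical responses with different scores that is closest to a new student answer, so the grader sees the discriminative contrast rather than a single similar exemplar. The pair data, lexical similarity measure, and rationale strings here are illustrative inventions, not GUIDE's actual embeddings or pipeline.

```python
# Toy boundary-pair retrieval for AI grading (illustrative only; not GUIDE's
# implementation -- it uses learned embeddings and generated rationales).
from difflib import SequenceMatcher

# A "boundary pair": two near-identical responses that earn different scores,
# plus a short note on what separates them.
BOUNDARY_PAIRS = [
    ("The ball accelerates because gravity acts on it.",       # full credit
     "The ball accelerates because a force keeps it moving.",  # no credit
     "Credit requires naming gravity as the force."),
    ("Entropy increases in an isolated system.",
     "Entropy increases in every system.",
     "Credit requires the 'isolated system' qualifier."),
]

def similarity(a: str, b: str) -> float:
    """Cheap lexical similarity standing in for an embedding model."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def retrieve_boundary_pair(response: str):
    """Return the boundary pair closest to the student response, so a
    grader (human or LLM) is shown where the scoring boundary lies."""
    def closeness(pair):
        high, low, _ = pair
        return max(similarity(response, high), similarity(response, low))
    return max(BOUNDARY_PAIRS, key=closeness)

high, low, rationale = retrieve_boundary_pair(
    "The ball speeds up since gravity pulls it down.")
print(rationale)  # prints the note for the physics pair
```

The design point is the contrast itself: retrieving only the most similar scored exemplar would tell a grader what a good answer looks like, while the pair shows what distinguishes a good answer from a plausible wrong one.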
Chu et al. · arXiv:2603.00465

The OECD Digital Education Outlook 2026 — drawing on data from 247 countries — finds that AI co-designed with teachers for specific pedagogical goals shows durable learning gains. General-purpose chatbots improve output quality, but the gains disappear when AI is removed. The finding is clear: the design of the tool determines the outcome. General access is not the same as educational benefit.
OECD Digital Education Outlook 2026

Across 247 countries, the OECD finds that GenAI improves task performance but gains disappear in unassisted conditions — exams, job performance, independent problem-solving. Students learn to use the tool, not to think. The gap between "performing with AI" and "capable without it" is now an institutional design challenge with a name. Assessment, curriculum, and pedagogy all need to respond to it.
OECD Digital Education Outlook 2026

A controlled study of 52 professional programmers found that those who learned tasks with AI assistance performed significantly worse on subsequent unassisted tests. The proposed mechanism: cognitive offloading during skill acquisition prevents the durable encoding needed for independent performance. The sample is small, so treat the result as directional rather than definitive; taken with the OECD finding, though, the experimental and institutional evidence are converging.
Shen & Tamkin · arXiv:2601.20245 · 2026

The generation most immersed in AI is the most anxious about its career implications. Careers guidance must do more than point to upskilling pathways — it must address the psychological dimensions of automation anxiety and equip students with genuine frameworks for evaluating occupational exposure, not just react to headlines. Students are already self-selecting away from AI-exposed roles.
Jobber Survey · 2026

RAND finds Chinese LLMs gaining market share in developing countries at one-sixth to one-quarter the price of US rivals. Stanford's AI Index shows capability gaps closing to single digits on key benchmarks. Open-weights models (Qwen 3.5, GLM-5) support 140+ languages at no cost. If these become the default in Global South education systems, the values, limitations, and content moderation choices embedded in those models shape what billions of learners experience as "AI." This is a geopolitical and pedagogical question simultaneously.
RAND Corporation · Stanford AI Index 2026

When Anthropic drew hard limits on military use of Claude, the Pentagon designated it a "supply chain risk" — a label previously reserved for Chinese firms. OpenAI struck a deal on near-identical safety terms within hours. The lesson: competitive pressure means the safety floor falls to the least cautious player. Voluntary commitments cannot hold when the alternative is losing a major contract. This is the governance landscape for every AI tool deployed in education.
Axios · CNN · The Hill · February 2026

The EU AI Act is now in force. The UK is bringing AI chatbots within the scope of the Online Safety Act. India is pushing an AI commons for the Global South at its 100-country summit. Different jurisdictions are making fundamentally different choices about what AI tools can do, who is liable, and what protections exist for children. What is permissible in schools is being decided jurisdiction by jurisdiction, right now — and procurement decisions made today will have compliance implications tomorrow.
EU AI Act · UK Online Safety Act · India AI Summit · 2026