Weekly Briefing #3 · Week of 10 March 2026

AI Landscape Through Three Lenses

40%
The Number That Matters
40% of employees globally are worried about losing their job to AI — up from 28% in 2024. A 12-point jump in a single year. 62% say their leaders underestimate the emotional impact. 97% of investors will negatively evaluate firms that fail to upskill workers on AI.

Mercer Global Talent Trends 2026 · 12,000 respondents across 17 industries

AI as Tool

What educators can use

Frontier AI on a £100 device — no cloud, no subscription, no data leaving the building

Google DeepMind's Gemma 3 runs on a standard laptop or phone: 140+ languages, 128K context window, competitive with large closed models. Removes Big Tech subscription dependency and keeps student data on-device. Changes the institutional AI access model entirely — relevant everywhere, not just where hyperscaler infrastructure is affordable.

Google DeepMind · arXiv:2503.19786

AI grading at the borderlines — real progress on the hard problem

The GUIDE framework (Chu et al.) addresses the core objection to AI grading at scale: what happens when responses look similar but score differently? Using contrastive "boundary pairs", it outperforms standard retrieval across physics, chemistry, and pedagogical content knowledge, and generates its own discriminative rationales, reducing the need for manual expert curation.

Chu et al. · arXiv:2603.00465

OECD: purpose-built educational AI outperforms general-purpose chatbots

The OECD Digital Education Outlook 2026 — drawing on data from 247 countries — finds that AI co-designed with teachers for specific pedagogical goals shows durable learning gains. General-purpose chatbots improve output quality but gains disappear when AI is removed. The finding is clear: the design of the tool determines the outcome. General access is not the same as educational benefit.

OECD Digital Education Outlook 2026

AI as Catalyst

Why human intelligence matters more

The OECD has a name for it: the mirage of false mastery

Across 247 countries, the OECD finds that GenAI improves task performance but gains disappear in unassisted conditions — exams, job performance, independent problem-solving. Students learn to use the tool, not to think. The gap between "performing with AI" and "capable without it" is now an institutional design challenge with a name. Assessment, curriculum, and pedagogy all need to respond to it.

OECD Digital Education Outlook 2026

Learning with AI and performing with AI are not the same thing

A controlled study of 52 professional programmers found that those who learned tasks with AI assistance performed significantly worse on subsequent unassisted tests. The proposed mechanism: cognitive offloading during skill acquisition prevents the durable encoding needed for independent performance. The sample is small, so treat the result as suggestive rather than definitive — but combined with the OECD finding, the experimental and institutional evidence are converging.

Shen & Tamkin · arXiv:2601.20245 · 2026

77% of Gen Z say it matters that their future job is hard to automate

The generation most immersed in AI is the most anxious about its career implications. Careers guidance must do more than point to upskilling pathways — it must address the psychological dimensions of automation anxiety and equip students with genuine frameworks for evaluating occupational exposure, rather than reacting to headlines. Students are already self-selecting away from AI-exposed roles.

Jobber Survey · 2026

AI as Subject

What power users need to know

China's open-weights models are closing the capability gap globally — at one-sixth the price

RAND finds Chinese LLMs gaining market share in developing countries at one-sixth to one-quarter the price of US rivals. Stanford's AI Index shows capability gaps closing to single digits on key benchmarks. Open-weights models (Qwen 3.5, GLM-5) support 140+ languages at no cost. If these become the default in Global South education systems, the values, limitations, and content moderation choices embedded in those models shape what billions of learners experience as "AI." This is a geopolitical and pedagogical question simultaneously.

RAND Corporation · Stanford AI Index 2026

Voluntary safety frameworks are structurally unstable under competitive pressure

When Anthropic drew hard limits on military use of Claude, the Pentagon designated it a "supply chain risk" — a label previously reserved for Chinese firms. OpenAI struck a deal on near-identical safety terms within hours. The lesson: competitive pressure means the safety floor falls to the least cautious player. Voluntary commitments cannot hold when the alternative is losing a major contract. This is the governance landscape for every AI tool deployed in education.

Axios · CNN · The Hill · February 2026

AI regulation is diverging globally — and the patchwork will reach every classroom

The EU AI Act is now in force. The UK is bringing AI chatbots within the scope of the Online Safety Act. India is pushing an AI commons for the Global South at its 100-country summit. Different jurisdictions are making fundamentally different choices about what AI tools can do, who is liable, and what protections exist for children. What is permissible in schools is being decided jurisdiction-by-jurisdiction, right now — and procurement decisions made today will have compliance implications tomorrow.

EU AI Act · UK Online Safety Act · India AI Summit · 2026