Roadmap
What we're building next
Our roadmap is guided by a simple question: what does the field need that no one else is building? The answer, overwhelmingly, is open infrastructure — open assessments, open data, open tools, open research.
Open Access
All research published openly. No paywalls, no vendor gatekeeping. Evidence belongs to the field.
Open Source
All tools we build are open source. Anyone can use, modify, and contribute. No lock-in.
Open Data
Assessment data, psychometric parameters, research datasets — published openly for anyone to use and build on.
Open Assessments at Scale
The field's biggest blind spot
There is remarkably little openly available assessment data in education. Most items are proprietary, most psychometric data is locked behind vendor agreements, and researchers have to build from scratch every time. We're changing that.
- Expand our open item bank from 34K to 100K+ CC-licensed assessment items across K-12 math and reading
- Publish item-level psychometric data — difficulty, discrimination, response distributions — for every item
- Build open adaptive testing infrastructure any researcher or district can deploy
- Create open datasets of student response patterns for AI training and evaluation
- Develop synthetic student simulation for instant calibration of new items without live data collection
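The psychometric parameters named above (difficulty, discrimination) are the standard two-parameter logistic (2PL) item response theory model. As an illustrative sketch only — the function names and the simulation setup are assumptions, not the project's actual pipeline — here is how a synthetic student population could be used to estimate an item's response distribution before any live data collection:

```python
import math
import random

def p_correct(theta, a, b):
    """2PL IRT: probability that a student with ability theta answers correctly.

    a = discrimination, b = difficulty, both on the logit scale.
    """
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def simulate_responses(n_students, a, b, seed=0):
    """Simulate synthetic students (hypothetical calibration sketch) and
    return the fraction answering the item correctly."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_students):
        theta = rng.gauss(0.0, 1.0)  # abilities drawn from N(0, 1)
        if rng.random() < p_correct(theta, a, b):
            correct += 1
    return correct / n_students

# An average-difficulty item (b = 0) given to a population centered at
# theta = 0 should land near a 50% correct rate.
rate = simulate_responses(10_000, a=1.2, b=0.0)
```

In a real calibration workflow the direction is reversed — observed response patterns are fit to recover a and b per item — but the forward simulation above is the building block a synthetic-student approach would rely on.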
Why this matters
Open assessments with open data are foundational infrastructure. Without them, every researcher starts from zero. With them, the entire field can build on shared evidence.
AI Qualitative Research Tools
Understanding the human side at scale
Quantitative data tells you what happened. Qualitative data tells you why. AI-powered interviews — by text and voice — make it possible to conduct deep qualitative research at a scale that was previously impossible.
- Expand AI interview tools to support multi-language qualitative research
- Develop real-time analysis dashboards for ongoing interview studies
- Publish open protocols for AI-assisted qualitative research in education
- Create training materials for researchers using AI qualitative methods
Why this matters
Educators' and students' experiences matter. Qualitative research has always been limited by the time it takes to conduct and analyze interviews. AI changes that calculus entirely.
Public Research Database & Hub
Making the evidence findable and actionable
The research on AI in education is scattered across dozens of venues, paywalled, and hard for practitioners to find. We're building a public, searchable database of relevant papers — and an active research hub where we conduct new studies and publish openly.
- Curate and maintain a public database of AI-in-education research papers
- Provide practitioner-friendly summaries and evidence ratings
- Host active research projects — like our AIED 2026 LLM evaluation work — with open data and open code
- Create a living evidence map: what we know, what we don't, and where the gaps are
Why this matters
Practitioners can't use evidence they can't find. Researchers can't build on work they don't know about. A curated, open research database is foundational infrastructure for the field.
New Learning Priorities for the AI Era
Are we teaching the most important things?
The Common Core assigns no priority across its objectives: there is no way to say what matters most, and revising standards takes years. But AI fundamentally changes what students need to know. What does literacy mean when AI can write? What math matters when AI can compute? How do diverse students learn to evaluate and use AI tools? These are urgent questions with no good answers yet.
- Research and define what students actually need to learn in an AI age — across disciplines, not just 'AI literacy' as a standalone
- Develop priority frameworks — which objectives matter most, given that AI changes the landscape
- Build assessments for emerging competencies before standards bodies catch up
- Study how diverse student populations interact with AI tools differently — equity in new learning objectives, not just access
- Create rapid feedback loops between research findings and objective-setting, bypassing the multi-year standards revision cycle
Why this matters
This is the foundational question. If we're not teaching the most important things, it doesn't matter how well we measure or how good our tools are. Education needs a mechanism for rapidly rethinking priorities — and right now it doesn't have one.
Measurement Frameworks
Defining what 'works' actually means
When a vendor says their AI tool 'improves learning,' what does that mean? We're developing rigorous frameworks for measuring AI's impact on learning — going beyond engagement metrics to actual learning outcomes, equity effects, and long-term retention.
- Publish open measurement frameworks for evaluating AI tutoring tools
- Develop equity-focused evaluation criteria that surface differential impact across student populations
- Create practical rubrics districts can use to evaluate AI tool claims
- Establish evidence standards — what counts as rigorous evidence of learning impact
Why this matters
Without shared measurement standards, the field can't distinguish tools that work from tools that merely engage. Frameworks create accountability.
Practitioner Training & AI Literacy
Evidence-based decisions, not fear-based ones
Educators are making decisions about AI tools right now — often without training, evidence, or institutional support. We're building training programs that help teachers and districts make evidence-based decisions about AI.
- Launch AI Literacy Lab — hands-on training for educators
- Develop district-level AI integration playbooks grounded in evidence
- Create train-the-trainer programs that scale through educator networks
- Publish freely available guides, rubrics, and evaluation checklists
Why this matters
The gap between AI adoption and AI understanding is dangerous. Teachers deserve support that's based on evidence, not vendor demos.
Global Research Network
Connecting the people who need to be in the same room
The people studying AI in education, the people building AI education tools, the people using those tools in classrooms, and the people making policy about them — they're often not talking to each other. We're building the connective tissue.
- Annual convening of researchers and practitioners working on AI in education
- Working groups on assessment, equity, safety, and efficacy
- Policy briefs that translate research findings into actionable guidance
- International partnerships — including work with AI for Education (Paul Aetherton) and Rising Academy Network (Shabnam Aggarwal)
Why this matters
No single organization can solve this. The field needs infrastructure for collaboration — shared tools, shared data, shared research agendas.
Want to fund or collaborate on this work?
We're looking for foundations, researchers, and districts who want to build open infrastructure for the field.