Research
Research questions for AI-accelerated education
These are the specific questions driving our work, organized into four areas. Each is concrete, answerable, and designed to produce evidence the field can act on.
New Learning Priorities
What should students actually learn now?
AI changes what skills matter. We need faster methods to identify shifting priorities and translate them into curriculum — not in five-year cycles, but continuously.
What should students learn in an AI age?
Core academic content remains, but the balance between knowledge, skills, and dispositions is shifting. Which competencies gain importance when AI handles routine cognitive tasks?
How do we rapidly identify new learning priorities?
Traditional curriculum revision takes years. Can AI-accelerated methods — labor market analysis, expert elicitation, student outcome tracking — compress that cycle?
What does AI literacy look like across development?
A kindergartener and a high schooler need different things. What does a developmentally appropriate AI literacy progression look like from K through 12?
Measurement & Assessment
Better instruments, faster feedback
Assessment is the bottleneck. If we can measure learning faster and more accurately — especially for new competencies — everything downstream improves: instruction, intervention, and research itself.
Can LLMs accurately estimate item difficulty?
Calibrating assessment items traditionally requires hundreds of student responses. If language models can predict difficulty and discrimination parameters, we can build better tests at a fraction of the cost.
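For a concrete sense of the question: in item response theory (IRT), difficulty and discrimination are the b and a parameters of models like the two-parameter logistic (2PL). Here is a minimal Python sketch of checking LLM-predicted difficulties against response-data calibration; the 2PL model is standard, but the data and the agreement check are illustrative assumptions, not our protocol:

```python
import math

def p_correct(theta: float, a: float, b: float) -> float:
    """2PL IRT model: probability that a student with ability theta
    answers an item with discrimination a and difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def pearson_r(xs, ys):
    """Plain Pearson correlation, so the sketch has no dependencies."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: item difficulties predicted by an LLM vs. values
# calibrated from hundreds of real student responses.
llm_predicted_b = [-1.2, -0.4, 0.1, 0.8, 1.5]
calibrated_b = [-0.9, -0.5, 0.3, 0.6, 1.7]

print(f"difficulty agreement r = {pearson_r(llm_predicted_b, calibrated_b):.2f}")
```

If agreement like this holds on held-out items, model-predicted parameters could substitute for some of the student responses calibration normally requires.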
Can paper-based adaptive assessment close the feedback loop without screens?
Billions of students learn without devices. SmartPaper demonstrates that computer vision can score handwritten work — but can we push further toward adaptive sequencing on paper?
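One way to read "adaptive sequencing on paper" is batch adaptivity: scan and score today's worksheet, update an ability estimate, and print tomorrow's worksheet with the most informative items. A sketch of that selection step, assuming a calibrated 2PL item bank (all names and parameters hypothetical, not SmartPaper's actual pipeline):

```python
import math

def fisher_information(theta: float, a: float, b: float) -> float:
    """Fisher information of a 2PL item at ability theta; higher means
    the item tells us more about a student near that ability level."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def next_worksheet(theta_hat: float, item_bank: list, k: int = 5) -> list:
    """Pick the k most informative items for the next printed worksheet,
    given the ability estimate from the last scanned one."""
    ranked = sorted(
        item_bank,
        key=lambda item: fisher_information(theta_hat, item["a"], item["b"]),
        reverse=True,
    )
    return [item["id"] for item in ranked[:k]]

# Hypothetical bank: unit discrimination, difficulties spread across abilities.
bank = [{"id": f"item-{i}", "a": 1.0, "b": b}
        for i, b in enumerate([-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0])]
print(next_worksheet(theta_hat=0.3, item_bank=bank, k=3))
```

The open empirical question is whether a print-scan-print loop, adapting between sessions rather than between items, preserves enough of the benefit of screen-based adaptivity.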
How do open psychometric datasets change assessment quality?
Our open bank of 34,000+ items with calibration data is among the largest such collections. Does open access to item parameters actually improve the assessments teachers and researchers build?
AI Tools & Their Effects
What works, what doesn't, what's worth the cost
The market is flooded with AI education tools making bold claims. Rigorous evidence on what actually improves learning — and at what cost — is scarce and urgently needed.
What is the actual impact of AI tutoring on learning outcomes?
AI tutoring products claim transformative results, but independent evidence is thin. We need well-designed studies measuring real learning gains, not just engagement metrics.
Do lightweight AI supplements improve learning more per dollar than full platforms?
A $25 AI-generated curriculum unit vs. a $50/student platform subscription. When is less more? Cost-effectiveness analysis is almost entirely absent from edtech research.
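The comparison reduces to cost per unit of learning gain. A back-of-envelope sketch; every number below is illustrative, not a measured result:

```python
def cost_per_effect(cost_per_student: float, effect_size_sd: float) -> float:
    """Cost-effectiveness ratio: dollars per standard deviation of
    learning gain per student. Lower is better."""
    return cost_per_student / effect_size_sd

# Illustrative only: a $25 AI-generated unit shared across a class of 25
# ($1/student) with a small effect, vs. a $50/student platform with a
# larger one.
supplement = cost_per_effect(cost_per_student=1.0, effect_size_sd=0.05)   # $20 per SD
platform = cost_per_effect(cost_per_student=50.0, effect_size_sd=0.20)    # $250 per SD

print(f"supplement: ${supplement:.0f}/SD, platform: ${platform:.0f}/SD")
```

Under these made-up numbers the cheap supplement wins by an order of magnitude despite a smaller effect; the research task is to measure both inputs rather than assume them.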
How do AI teacher tools affect instruction quality?
AI lesson planners, worksheet generators, and text leveling tools save teachers time. But do they actually improve the quality of instruction students receive?
Equity & Access
Who benefits, who gets left behind
AI could narrow educational inequality — or widen it. The difference depends on infrastructure, design choices, and whether research itself reaches the communities that need it most.
Does AI widen or narrow the resource gap between schools?
Well-resourced schools adopt AI tools faster. But AI also dramatically lowers the cost of high-quality materials. Which force wins, and under what conditions?
What works in low-infrastructure settings?
Paper-based assessment, offline-first tools, SMS-delivered content. Which approaches deliver measurable learning gains where connectivity and devices are scarce?
Can AI-accelerated research itself be more equitable?
Traditional education research is slow and expensive, concentrating evidence in wealthy contexts. If AI compresses the research cycle, can we generate evidence faster for underserved populations?
How we work
AI-accelerated methods
We use AI to compress the research cycle itself — generating evidence in weeks instead of years, with open data and reproducible methods throughout.
- Synthetic student simulation for rapid item calibration (sketched below)
- AI-powered qualitative interviews (text and voice)
- Paper-based adaptive testing at national scale
- LLM evaluation across 200+ experimental conditions
- Open psychometric datasets for reproducible research
- Cost-effectiveness analysis of AI interventions
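As one example, the synthetic-student item above can be framed as a parameter-recovery check: simulate abilities, generate responses from an assumed item model, and test whether item difficulty can be recovered. A minimal version under a Rasch model; the simulation design is an assumption for illustration, not our production method:

```python
import math
import random

random.seed(0)

def p_correct(theta: float, b: float) -> float:
    """Rasch model: probability of a correct response given ability
    theta and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def simulate_responses(true_b: float, n_students: int = 500) -> list:
    """Synthetic students with abilities ~ N(0, 1) answering one item."""
    thetas = [random.gauss(0, 1) for _ in range(n_students)]
    return [random.random() < p_correct(t, true_b) for t in thetas]

def estimate_b(responses: list) -> float:
    """Crude moment estimate: invert overall proportion correct, treating
    the average student as theta = 0. Because abilities actually vary,
    this shrinks |b| toward zero; real calibration would fit the model."""
    p = sum(responses) / len(responses)
    return -math.log(p / (1 - p))

for true_b in (-1.0, 0.0, 1.0):
    est = estimate_b(simulate_responses(true_b))
    print(f"true b = {true_b:+.1f}, recovered b ~ {est:+.2f}")
```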
Interested in collaborating on research?
We partner with researchers, funders, and school systems.