In just three years, artificial intelligence has revolutionized entertainment, finance, manufacturing and many other industries.
But a new report from the Brookings Institution concludes that when it comes to education, the risks of AI overshadow its benefits.
Researchers interviewed K-12 students, parents and teachers in 50 countries. Their conclusion: AI undermines young people’s foundational development in a way that simply can’t be offset by its productivity advantages.
“The risks we found are things like shortcutting learning so that you have less cognitive development,” said Rebecca Winthrop, who heads Brookings’ Center for Universal Education and is an author of the report.
While AI can reduce inequality by providing access to content for an estimated 250 million young people who don’t reliably have it, it can also amplify inequality, since free AI tools are the least reliable and accurate.

“It is probably the first time in ed tech history where you have to pay more to have more accurate information,” said Winthrop.
The co-author of a recent book about disengaged teens, she said researchers found that young people spending a lot of time with AI companions are “de-skilling” when it comes to basic human interactions.
In the end, researchers admitted that AI’s rapid evolution puts educators in a bind. They’re operating with little rigorous, longitudinal evidence on the effects of AI when it comes to student learning and well-being. As a result, they say, “None of us, not even AI’s creators, can predict its potential dangers or benefits with complete accuracy.”
Here are four key findings from the report:
1. AI poses risks that undermine children’s foundational development and may actually prevent them from reaping its benefits
Using generative AI undermines young people’s foundational development, researchers found.
At its core, the researchers note, AI is a set of powerful productivity tools now being harnessed most effectively by “professional adults with fully matured brains. They have already developed sophisticated metacognitive and critical thinking skills that undergird their approach to their work.” They also have deep expertise in their fields and the cognitive flexibility that comes with that expertise, allowing them to use AI as a “cognitive partner.”
Not so for young people, who aren’t “mini-professionals.” Their brains are still developing and school should ideally help them practice critical thinking and “sustained engagement with challenging material.”
For most young people, AI isn’t a “cognitive partner” but a surrogate. It doesn’t accelerate their development — it diminishes it via cognitive offloading. The result, researchers say: declining skills across the board.
One teacher told researchers, “If students can just replace their actual learning and their ability to communicate what they know with something that’s produced outside of them and get credit for it, what purpose do they have to actually learn?”
A student puts it a bit more bluntly: “It’s easy. You don’t need to (use) your brain.”
2. AI can impede students’ social and emotional development
Kids don’t learn in isolation. Relationships with others — in and out of school — help them develop a sense of well-being. But using AI can undermine their ability to form relationships, recover from setbacks and stay mentally healthy, observers tell researchers.
Young people’s use of AI chatbots — for everything from homework to emotional support, therapy and companionship — has adults worried, researchers report. Nearly one in five teachers worry about AI’s influence on student well-being, even though just 7% of students mentioned chatbots’ emotional harm.
The problem, they say, is that the low number cuts both ways: it’s equally possible that kids aren’t experiencing emotional dependence — or that they simply lack “the self-reflective capacity” to recognize unhealthy emotional dependence and how it impacts their well-being.
3. AI is already eroding the trust relationships between students and teachers — on both sides
Teachers tell researchers they increasingly doubt that students are producing authentic work — while students think the same about their teachers.
Researchers found a fracturing of trust between students and teachers that cuts both ways. Teachers trust students less when they suspect them of using AI to complete homework. In interviews, 16% of teachers said this erosion of trust is “a significant concern.”
And students also trust teachers less when teachers use AI to create lesson plans and assignments, but aren’t open about it.
More broadly, this development could be undermining students’ trust in educational institutions themselves. “One of AI’s greatest casualties may be the trust that ensures young people have what they need in school to meet their needs and prepare them for the future,” they write.
4. It’s not too late to turn things around
Researchers say that while AI is doing damage, the wounds are “fixable” and that adults “should neither capitulate to these harms nor focus solely on limiting their repercussions.”
The report offers 12 recommendations, including:
- Shifting education away from “transactional task completion” that AI can most easily help students with.
- Co-creating AI tools with educators, students, parents and communities. The researchers suggest that schools create “student AI councils” that can help embed student voice into AI tool design “to ensure their relevance, inclusivity, and pedagogical soundness” before adoption.
- Using AI tools that “teach, not tell.” Winthrop suggested, for instance, using AI to interface with a difficult digital text. “I’ve read this paragraph twice,” she said. “I don’t get it. Can you explain it to me in a different way?” Used in such a fashion, with vetted content, she said, “it can be really effective.”
- Offering AI literacy that helps students, educators, and families understand its capabilities, limitations and broader implications. That includes robust professional development that equips teachers with deep knowledge to teach students about AI.
Winthrop highlighted the National Academy for AI Instruction, created last fall by the American Federation of Teachers. AFT President Randi Weingarten has said that over the next five years it will train 400,000 educators, or one in 10 U.S. teachers, in effective AI usage.
“When I talked to Randi Weingarten about why she did it, she said, ‘We have to be at the table this time,’” said Winthrop. “‘We were not at the table during social media.’”
Winthrop said Weingarten “got a lot of flak” for creating the academy, but added, “I think it’s the right decision.”
This story first appeared at The 74, a nonprofit news site covering education.