When Mike Lawrence joined ABC Unified School District as director of information and technology two years ago, he inherited a set of guidelines on the district’s approach to artificial intelligence tools.
The next step was opening up the conversation to the broader education community, said Lawrence, so the district posted the draft document online and invited comments. Many people responded and continue to engage. Each quarter, the district hosts community roundtable discussions on the topic.
“There’s just been a long series of decisions that have to be made for the well-being of students, for the betterment of students, balancing out the danger that they might face while preparing them for the future,” Lawrence said.
It’s a balance that Lawrence and the district look to strike with AI tools. ABC Unified, which has 18,000 students in the greater Los Angeles area, uses Gemini, Google’s artificial intelligence chatbot, for students in seventh through 12th grade. It blocked ChatGPT for students. Other AI programs, like Brisk for teachers and Snorkl for students, are available. The district has created guidelines about AI use in the classroom but leaves it to teachers to implement them as they see fit.
The district also created a “transparency badge” system that labels whether AI was used in any document, such as emails sent to families from the district, teachers’ communications to students and even students’ work. One badge, “AI Collab,” short for collaboration, means the writer used AI for about 60% of the work created. The “HI” badge, short for human intelligence, means that AI was not used at all.
Throughout California, school districts have struggled to determine how, or whether, to implement artificial intelligence tools as those tools have increasingly entered the education space. While some see AI as a way to enhance learning and reduce teachers’ workloads, others warn it could undermine critical thinking and compromise student privacy.
Amid the debate, some district leaders and education researchers urge caution. Rather than rushing to adopt artificial intelligence tools, they recommend that schools test vendors, establish strong contracts that protect student data and gather feedback from the school community.
“Teaching and learning has always been, and will always be in terms of how humans learn, a social process that’s embedded in relationships,” said Rebecca Winthrop, director of the Center for Universal Education at the Brookings Institution, a think tank.
Yet, despite how widespread AI use has become, particularly as it’s increasingly embedded in social media platforms, Winthrop has found that “families have not been brought to the table very much at all in this conversation.”
Winthrop’s team recently found that the potential risks of AI use among children currently outweigh the benefits. Their report makes multiple recommendations, including that teachers and others collaborate on the creation of AI tools for the classroom and have a say in ensuring student safety.
Understanding AI’s limitations
That cautious, deliberate approach resonates with many education researchers.
“I want folks, in a lot of ways, to stop being excited about AI and to just see it as something boring,” said Stephen Aguilar, associate professor at the USC Rossier School of Education. Getting to that point would mean that educators and other school leaders shift toward “understanding AI as a constellation of tools, a series of tools, and then they can decide what problems those tools are going to be used to address.”
This formed part of the thinking behind ABC Unified’s “transparency badges,” which Lawrence described as a “useful transitional tool.”
“We feel it’s the way to move us forward,” Lawrence said. Moving forward, however, has also meant knowing where to avoid the use of AI, he said.
For example, ongoing discussions about AI use have led the district to pass on AI cheating-detection programs because of their tendency to produce false positives and to show bias against students learning a second language.
“If pressed I say, ‘The best AI detector is you. You’re the teacher and you have the ability, even if it’s new in the year, you can gauge whether this is student work or not,’ ” said Lawrence, who is a former English teacher.
Early discussions about the limitations of artificial intelligence tools are critical to mitigating risks, researchers say.
Aguilar suggests that education leaders considering how to implement an AI tool should ask themselves, “How many mistakes and what types of mistakes are we willing to accept in service of this broader goal” of personalizing learning or supporting teacher prep work?
“Oftentimes, unfortunately, that conversation isn’t how we start. It’s how we end when something goes wrong,” said Aguilar, who is also associate director of the USC Center for Generative AI and Society.
‘This is a journey, not a one and done’
In 2024, Los Angeles Unified rolled out Ed, a chatbot designed to act as a “personal assistant” for students. But the rollout quickly unraveled: within three months, the district shut down the tool after AllHere, the company that helped develop it, abruptly furloughed most of its staff and began to fall apart.
“What tends not to [work] is, without throwing them under the bus again, sort of the approach that LAUSD had which was: let’s get out there quickly, let’s make it splashy, let’s make sure that we scale immediately,” Aguilar said. “This gold rush sort of mentality of: ‘we have to be first, and we have to be most.’ That tends to go poorly.”
At ABC Unified, the district continues to discuss evolving AI tools with the community at the quarterly roundtables.
The district has a task force on AI that includes teachers but, as Lawrence noted, “people that sign up for an AI task force are generally going to be AI positive.” Lawrence is seeking feedback from teachers beyond the task force.
“I felt that we were creating our own echo chambers, and I wanted to get outside of the echo chamber, and I wanted to hear from the teachers that hate it,” he said.
Just last week, he heard from several history teachers who are cautious about AI. At least one of those teachers not only avoids using AI themselves but also wants students to turn off all their devices.
“You get a range of perspectives but that feedback, hearing it first-person and restricting myself from speaking, and just listening and taking notes was a really helpful exercise,” said Lawrence.