According to a new Pew Research poll, nearly 60 percent of teens believe that students frequently use artificial intelligence platforms like ChatGPT and Copilot to cheat in school. Educators have struggled to discourage the practice. Some have proposed lecturing students on the ethics of cheating. Others are adopting AI detection tools, which can cost school districts hundreds of thousands of dollars and are not foolproof.

If AI cheating continues to rise, we can expect to see further declines in math and reading skills as students offload thinking to automated tools. Failing to address the issue will also create a collective-action problem: students will face pressure to engage in cheating just to keep up with their peers.

Many schools have policies that spell out heavy sanctions for cheating, such as automatic zeroes, parent conferences, and suspension from extracurricular activities. But for these consequences to matter, schools must first be able to detect cheating accurately. AI-detection tools vary in their ability to identify AI-written content and in their false-positive rates (that is, how often they flag cheating where there isn’t any). Many schools use Turnitin, a tool designed specifically for educational settings and reported to have very low false-positive rates—but even Turnitin cautions against relying on it exclusively to identify cheating, as it isn’t perfect.

Schools are right to impose severe consequences for cheating, but these punishments can backfire if students are wrongly accused. And the Pew Research poll shows that students also use AI to “edit something they wrote,” “solve math problems,” and summarize reading materials. Many of these activities are beyond the scope of AI-detection tools. They also point to a larger problem than cheating itself—students are outsourcing thinking rather than actually learning.

To combat this, schools need to take in-person assessment more seriously. That could mean offering smaller, more frequent tests; fewer but larger exams weighted more heavily in final grades; or weighting in-person assignments more than homework. Students might still use generative AI to complete homework and other take-home assignments, but their grades on in-person tests and classwork would suffer as a result. The key is to ensure that real stakes are attached to work students demonstrably complete themselves.

Critics argue that “teaching to the test” hinders a teacher’s ability to “indulge students’ curiosity” and creativity. Unions and other test skeptics have also argued against high-stakes exams like standardized tests, claiming that they put more stress on students and increase the share of instructional time devoted to testing rather than learning.

But the unions are wrong that stakes in schools are rising. On the contrary, rigor has declined, as mounting grade inflation demonstrates: student GPAs and graduation rates have risen even as scores on objective, standardized tests have fallen. In the era of equitable grading, schools have lowered expectations for students with policies like “no zeroes” on incomplete assignments, unlimited test retakes, and eliminated late penalties. A 2025 Fordham Institute survey of more than 950 teachers found that about half reported their schools had adopted at least one equitable-grading policy. Middle schools and schools where racial or ethnic minorities make up more than half the student population were more likely to adopt them.

Equitable grading policies were promoted specifically to narrow racial disparities in grading. Pew Research has found that black and Hispanic teens are more likely to use AI for schoolwork assistance than white students. A larger share of black and Hispanic students, compared with white students, say that they find AI chatbots useful for completing their schoolwork, and that they do much of their work using AI. The combination of lower academic expectations and the convenience offered by AI creates a situation in which AI cheating becomes the rational short-term choice for students.

Schools are not doing students any favors by lowering expectations and minimizing the rigor and importance of tests. AI will only expose the effects of these flawed practices, as we’re already seeing. Schools must return to rigorous grading. States like South Carolina, for instance, are trying to restore fair grading through legislation that would prohibit districts from forcing teachers to give minimum grades higher than what a student actually earned.

Another option is adopting a through-year assessment model, in which states administer exams two or three times a year rather than only one exam at the end of the school year. A handful of states, including Florida, Montana, and Virginia, have made this shift, and many more are considering it. These assessment models are said to provide schools with real-time feedback to inform teaching and interventions and to help parents know whether their kids are learning in school.

Cheating isn’t a new problem, but the rise of AI has intensified it. Without a broad culture change, schools will effectively be rewarding cheaters and punishing those who genuinely care about academic achievement.
