The Hechinger Report is a national nonprofit newsroom that reports on one topic: education. Sign up for our weekly newsletters to get stories like this delivered directly to your inbox.

From driver-assistance systems in cars to video games and virtual assistants like Alexa and Siri, artificial intelligence (AI) has transformed almost every aspect of our lives, as our machines learn from the massive amounts of data we provide them.

The goal is for our computers to make humanlike judgments and perform tasks to make our lives easier, but if we’re not careful, our machines will replicate our racism, too.

Kids from black and Latino communities — who are often already on the wrong side of the digital divide — will face greater inequalities if we go too far toward digitizing education without considering how to check the inherent biases of the (mostly white) developers who create AI systems. AI is only as good as the information and values of the programmers who design it, and their biases can ultimately lead both to flaws in the technology and to amplified biases in the real world.

This was the topic at the conference “Where Does Artificial Intelligence Fit in the Classroom?” put on by the United Nations General Assembly, the United Nations Educational, Scientific and Cultural Organization (UNESCO), the think tank WISE and the Transformative Learning Technologies Lab at Teachers College, and hosted by Teachers College, Columbia University this month. (The Hechinger Report is an independent unit of Teachers College.)

While many argue that the efficiencies of AI can level the playing field in classrooms, we need more due diligence and intellectual exploration before we deploy the technology to more schools. Systemic racism and discrimination are already embedded in our educational systems. Developers must intentionally build AI systems with a racial equity lens if the technology is going to disrupt the status quo.


Previous attempts at making education more efficient and equitable demonstrate what can go wrong. Standardized testing promised an innovation that was irresistible to an earlier generation of education leaders hoping to democratize the system. As Nicholas Lemann put it in his book “The Big Test,” about the development of the SAT, such assessments promised to evaluate “all American high-school students on a single national standard and then [make] sure that they went on to colleges suited to their abilities and ambitions.” Later, standardized tests allowed schools and teachers to be held accountable when students didn’t measure up to expectations.
But the designers and implementers of these assessment tools didn’t consider how the racism and inequality rife in U.S. society would be baked into the tests if care wasn’t taken to make them more fair. SAT and ACT scores are good proxies for wealth. Overuse of these tests has helped concentrate wealthy people in selected colleges and universities, stifling the inclusion of and investment in talented people who happen to be lower income. The College Board, the nonprofit that administers the SAT, announced a patch for this problem in May: the planned rollout of an “adversity score” assigned to each student who takes the college admissions exam. The score was to comprise 15 factors, including neighborhood and demographic characteristics such as crime rate and poverty, and to be added to each student’s result. However, the College Board retreated from its plan, bending to a wave of criticism.

Current attempts to introduce AI in schools have led to improvements in assessing students’ prior and ongoing learning, placing students in appropriate subject levels, scheduling classes and individualizing instruction. Such advances enable differentiated lesson plans for a diverse set of learners. But that sorting can be fraught with errors if the algorithms don’t account for the nuanced experiences of students, especially the gap between those starting at the bottom and those starting at the top.

The spread of AI technology can also tempt districts to replace human teachers with software, as is already happening in such places as the Mississippi Delta. Faced with a teaching shortage, districts there have turned to online platforms. But students have struggled without trained human teachers who not only know the subject matter but know and care about the students.


Overzealous tech salesmen haven’t helped matters. The educational landscape is now littered with cyber or virtual schools because ed tech companies promised that they would reach hard-to-educate as well as black and Latino students and create efficiencies in low-funded districts. Instead, many of the startups have been hit by scandal, including a pair in Indiana that were forced to close down.

Yet AI could provide real benefits. AI in the classroom could free up teachers from time-consuming chores like grading homework. But it won’t work if it’s intended as a way to avoid the hard work of recruiting enough skilled teachers, especially teachers who look like the kids they’re working with. For the rise of robots to equate to progress, teachers should experience improved working conditions and increased job satisfaction. AI should reduce attrition and increase the desirability of the job. But if technologists don’t work with black teachers, they won’t know what conditions need to change to maximize higher-order thinking and tasks.

We must diversify the pool of technology’s creators, incorporate people of color in all aspects of its development, continue to train teachers on its proper usage and build in regulations to punish discrimination in its application.

The true promise of AI is to give us insight into how students and teachers learn — including the racism that keeps needed resources from schools in which the majority of students are people of color. When we better understand how, when and where people learn to be racist, then we can build a justice app for that.

This story about AI was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education.