When Border Bias Becomes Training Data: Why AI Poses New Risks for Racialized Travelers

I wasn’t expecting to have a conversation about artificial intelligence at the Canadian border, but that is exactly what happened. The officer asked what I did for work. I told him I work in AI consulting, helping organizations understand how systems learn and how human judgment shapes the outcomes those systems produce.

That is when he brought up something most leaders are afraid to say aloud.

He talked about the new hires. How some of them rely more on instinct than training. How inexperience shows up in who gets searched, who gets slowed down, who gets questioned for just a little too long. Then he told me he was afraid of what happens when automation arrives. Not because it will replace the work. Because it will learn from the blind spots they are still struggling to fix.

He said it plainly: if a few officers make biased decisions today, what happens when the system learns from them tomorrow?

This is not a hypothetical concern.

Border enforcement has long carried racialized patterns in secondary screenings and search rates. Reports from the Canadian Council for Refugees and the Canadian Civil Liberties Association outline consistent disparities for Black, Arab, South Asian, and Indigenous travelers. The United States Government Accountability Office reports similar discrepancies on the U.S. side. These patterns reflect more than isolated mistakes. They reflect the architecture of judgment inside border systems.

Now imagine that architecture feeding an algorithm.

This is the part the officer understood better than many executives I meet.

AI is not neutral. It is a reflection of us. If the inputs come from inconsistent training, uneven judgment, and unexamined bias, the outputs will calcify those patterns into the system. And unlike a human officer who can pause, reconsider, or be corrected, an automated model scales its logic across every traveler, every port, every moment.
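
For readers who want to see the mechanism rather than take it on faith, here is a minimal, hypothetical sketch (entirely synthetic data and invented numbers, not any real border system): a model trained on biased flagging decisions reproduces the same disparity, then applies it uniformly to everyone it scores.

```python
# Hypothetical sketch: if historical screening decisions over-flag one group,
# a model trained on those decisions learns and repeats the disparity at scale.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Synthetic travelers: a group label (0 or 1) and a "risk signal" that is
# identical in distribution for both groups, i.e. no real difference in risk.
group = rng.integers(0, 2, size=n)
risk_signal = rng.normal(0, 1, size=n)

# Simulated past decisions: officers flag group 1 far more often than group 0
# at the same risk level. This stands in for biased historical judgment.
flag_prob = 1 / (1 + np.exp(-(risk_signal + 1.5 * group - 1.0)))
flagged = rng.random(n) < flag_prob

# Train on those biased decisions, with group membership available as a
# feature (directly, or in practice through proxies like place of birth).
X = np.column_stack([risk_signal, group])
model = LogisticRegression().fit(X, flagged)

# The model's predicted flag rates mirror the human disparity, even though
# the underlying risk signal is the same for both groups.
preds = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted flag rate = {preds[group == g].mean():.2%}")
```

Nothing in that sketch is exotic. The disparity is not programmed in; it is inherited from the labels, which is exactly the officer's worry.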

Once that logic is embedded, it becomes extremely difficult to unwind.
The people most affected are, predictably, the ones who were already under the most scrutiny.

I told the officer he wasn’t wrong to worry.
He nodded before I even finished.

Humans can grow. Systems cannot unless someone notices the error and intervenes. But BIPOC travelers often sit at the edges of institutional attention. The very groups most likely to be misread by a new officer are the ones most likely to be misread by the model that officer trains.

This is where erasure begins.
Not through violence, but through repetition.

Automation does not fix a biased system.
It simply accelerates its consequences.

What struck me most was how naturally the officer drew the connection. He wasn’t quoting research. He wasn’t working through a technical framework. He was relying on lived experience. He knows how quickly human bias can slip into an interaction. He knows what it looks like when someone in a position of power mishandles a moment. And he knows that if those moments become data, the bias becomes policy.

Borders are more than lines.

They are thresholds where a nation reveals what it fears, who it trusts, and who it misreads. If those signals train an automated system, the stakes rise fast.

The officer’s concern wasn’t abstract. It was grounded in the reality that some travelers already face a harder journey because of how they look, where they are from, or what assumptions others place on them. Data only reinforces what institutions already believe unless someone changes the foundation.

This is the real risk of border automation.
Not that AI will make decisions.
That it will learn from the wrong ones.

And when the consequences affect safety, belonging, asylum, or freedom of movement, the margin for error disappears.

The conversation ended with him thanking me for the work.
But the truth is, he named the problem better than most technologists ever will.

He understood the human layer.
He understood the stakes.
And he understood that when AI cannot see people accurately, it does not just misclassify them. It erases them from fairness altogether.

References

Canadian Council for Refugees. "Racial Profiling and Border Practices in Canada." 2021. https://ccrweb.ca
Canadian Civil Liberties Association. "Bias in Canadian Border Screening." 2020. https://ccla.org
United States Government Accountability Office. "Cross-Border Security: Data on Secondary Screening." 2019. https://gao.gov
Eubanks, Virginia. Automating Inequality. St. Martin’s Press, 2018.
Noble, Safiya Umoja. Algorithms of Oppression. NYU Press, 2018.
Benjamin, Ruha. Race After Technology. Polity Press, 2019.
