In Ukraine, identifying the dead comes at a human rights cost

A year ago this week, five days after Russia launched its full-scale invasion of Ukraine, the US-based facial recognition company Clearview AI offered the Ukrainian government free access to its technology, suggesting it could be used to reunite families, identify Russian agents, and fight disinformation. Shortly afterward, the Ukrainian government revealed it was using the technology to scan the faces of dead Russian soldiers to identify their bodies and notify their families. In December 2022, Mykhailo Fedorov, Ukraine's deputy prime minister and minister of digital transformation, tweeted a photo of himself with Clearview AI's CEO, Hoan Ton-That, thanking the company for its support.

Identifying the dead and informing families of the fate of their relatives is a human rights imperative enshrined in international treaties, protocols, and laws such as the Geneva Conventions and the International Committee of the Red Cross (ICRC) Guiding Principles for Dignified Management of the Dead. It is also tied to much deeper obligations. Caring for the dead is among the oldest human practices, one that makes us human, like language and the capacity for self-reflection. The historian Thomas Laqueur, in his epic meditation The Work of the Dead, writes that "as far back as people have discussed the subject, caring for the dead has been considered fundamental – of religion, of the polity, of the clan, of the tribe, of the capacity to grieve, of an understanding of the finiteness of life, of civilization itself." But identifying the dead with facial recognition technology draws on the moral weight of this kind of care to legitimize a technology that raises grave human rights concerns.

In Ukraine, site of the bloodiest war in Europe since World War II, facial recognition may seem like just another tool brought to the grim task of identifying the fallen, alongside digitizing mortuary files, deploying mobile DNA labs, and excavating mass graves.

But does it work? Ton-That says his company's technology "works effectively regardless of facial damage that may have occurred to a deceased person." There is little research to support this claim, though the authors of one small study found results "promising" even for faces in a state of decomposition. However, forensic anthropologist Luis Fondebrider, former head of the ICRC's forensic services, who has worked in conflict zones around the world, questions these claims. "This technology lacks scientific credibility," he says. "It's definitely not widely accepted by the forensic community." (DNA identification remains the gold standard.) The forensic field "understands technology and the importance of new advancements," but the rush to adopt facial recognition is "a combination of politics and business with very little science," Fondebrider believes. "There are no magic solutions to identification," he says.

Using an unproven technology to identify fallen soldiers can lead to mistakes and traumatize families. But even if the forensic use of facial recognition technology were supported by scientific evidence, it should not be used to name the dead. It’s too dangerous for the living.

Organizations including Amnesty International, the Electronic Frontier Foundation, the Surveillance Technology Oversight Project, and the Immigrant Defense Project have declared facial recognition technology a form of mass surveillance that threatens privacy, amplifies racist policing, threatens the right to protest, and can lead to wrongful arrest. Damini Satija, head of Amnesty International's Algorithmic Accountability Lab and deputy director of Amnesty Tech, says facial recognition technology undermines human rights by "reproducing structural discrimination at scale and automating and entrenching existing societal inequities." In Russia, facial recognition technology is used to quell political dissent. It fails to meet legal and ethical standards when used in law enforcement in the UK and US, and is weaponized against marginalized communities around the world.

Clearview AI, which sells its wares primarily to police, has one of the largest known databases of facial photos, at 20 billion images, with plans to collect a further 100 billion images – equivalent to 14 photos for every person on the planet. The company has promised investors that soon "almost everyone in the world will be identifiable." Regulators in Italy, Australia, the UK, and France have declared Clearview's database illegal and ordered the company to delete their citizens' photos. In the EU, Reclaim Your Face, a coalition of more than 40 civil society organizations, has called for a total ban on facial recognition technology.

AI ethics researcher Stephanie Hare says Ukraine is "using a tool, and promoting a company and CEO, who have behaved not only unethically but also illegally." She suspects it's a case of "the end justifies the means," but asks, "Why is it so important for Ukraine to be able to identify dead Russian soldiers using Clearview AI? How is this essential to defending Ukraine or winning the war?"
