NFI staff member writes international handbook on the application of artificial intelligence in forensic analysis
US police recently arrested a woman who was eight months pregnant, on suspicion of carjacking, as her two children looked on. Her face had been wrongly identified by artificial intelligence. “This is one recent example of what can go wrong when artificial intelligence is used in forensic investigations”, says Zeno Geradts, forensic examiner at the NFI and professor by special appointment at the University of Amsterdam. Geradts is the author of the book ‘Artificial Intelligence (AI) in Forensic Sciences’. The many controversies surrounding the use of AI prompted him to write it. “When things go wrong, it is never the fault of AI itself; it is down to the way it is used.”
Geradts teamed up with his Norwegian counterpart Katrin Franke of the Norwegian University of Science and Technology and the Center for Cyber and Information Security (CCIS) to write the book, a guide to using artificial intelligence responsibly and effectively in forensic investigations. It is aimed primarily at students, forensic examiners and members of the legal profession who want to find out more about AI and its applications, but it also makes fascinating reading for anyone with an interest in the topic.
Responsible use
Many of Geradts’ NFI colleagues contributed to the book, which outlines the rules and guidelines for responsible use. The NFI, for example, uses AI for facial and speaker comparisons, examining bullets and cartridge cases, identifying deepfakes and exposing hidden messages embedded in photos and video files by criminals. Geradts says investigators should always validate an AI system against the data they intend to use it on. “They need to be alert to any deviations in the system and aware of the system’s limitations. And the results should always be subject to a human check.” A facial recognition algorithm is trained by feeding it large numbers of example images. “If a system is mainly fed images of white men, it may be less accurate at identifying men or women of colour. That will soon become apparent if you validate the system using the dataset you plan to use it on.”
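To make that validation step concrete: the check Geradts describes boils down to measuring a system’s accuracy separately for each demographic group in the dataset it will actually be deployed on, and flagging any large gap. The sketch below is a minimal illustration of that idea, not code from the book or from the NFI; the predictions, ground truth and group labels are entirely hypothetical.

```python
from collections import defaultdict

def per_group_accuracy(predictions, labels, groups):
    """Accuracy of a (hypothetical) face-matching system,
    broken down per demographic group in the validation set."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, truth, group in zip(predictions, labels, groups):
        total[group] += 1
        correct[group] += int(pred == truth)
    return {g: correct[g] / total[g] for g in total}

# Hypothetical validation run on the dataset the system will be used on.
preds  = [True, True, False, True, False, True, False, False]
truth  = [True, True, False, False, False, True, True, False]
groups = ["A", "A", "A", "B", "B", "B", "B", "A"]

for group, accuracy in sorted(per_group_accuracy(preds, truth, groups).items()):
    print(f"group {group}: accuracy {accuracy:.2f}")
# A large accuracy gap between groups is exactly the kind of deviation
# Geradts says examiners must detect before a system is used in casework.
```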
Opportunities
Geradts’ message is clear: as long as AI is applied wisely, there is no need for concern. “The opposite in fact. The responsible use of AI opens up opportunities for forensic science. This technology is more objective than humans. AI doesn’t have good and bad days, which cannot always be said of us humans.” However, Geradts is keen to warn against inappropriate applications, like the example of the pregnant mother arrested for carjacking: “Incidents of this kind can cause people to lose confidence in AI.” That would be a great pity, Geradts believes, in a field in which technology can really help humanity move forward.
Black box
One aspect of AI that many people find worrying is the so-called ‘black box’: our inability to look inside a model and see exactly how it arrives at its output. A great deal of research is being done with a view to making AI explainable, Geradts observes. But for models that are extremely large and complex – ChatGPT being the best-known example – complete understanding is simply not feasible. Even so, Geradts does not see that as a reason not to use this type of AI. “The human brain can be seen as a black box too.” We don’t know exactly how a human brain arrives at its conclusions either. “Past experiences, associations and sometimes emotions all play a part in influencing our judgment. In the case of AI, it is not always possible to go back and trace how a system was trained. Sometimes there are commercial reasons for that. In other cases, the information is simply irrecoverable. The essential thing is to validate the system beforehand using reliable data.”
First non-Americans to write an AAFS guide
The book is published by the American Academy of Forensic Sciences (AAFS), an association of thousands of forensic scientists from more than 72 countries. This is the first time in the Academy’s history that a guide for a forensic discipline has been written by scientists from outside the Americas. In Geradts’ view, that is no coincidence: “When it comes to the application of AI in forensic science, the Netherlands is at the forefront.” He has an explanation: “In the US, the legal system involves trial by jury, and a jury is made up of ordinary citizens. In the Netherlands, court cases are heard by a panel of judges with specialist training. They are used to dealing with evidence, and they know how to interpret it. This enables us to educate judges so that they understand what we are doing, why we are doing it and what safeguards we have in place. That has made it easier for us to implement AI.”
Developments
Does this mean that a case like the pregnant mother arrested for carjacking could never happen in the Netherlands? “The biggest mistake you can make is to claim that a system is infallible”, Geradts says. “Human beings make mistakes and so does artificial intelligence. The key is to be aware of the limitations and to have built-in checks and balances that minimise the possibility of errors. At the NFI, I think we have the right checks and balances in place. We validate the systems, and the systems support the human beings.” AI is developing at such a pace that Geradts can imagine having to update his book in the future. “But the fundamental principles will remain the same”, he says.