Artificial Intelligence Adoption in Law Enforcement

The ROXANNE project's principal objective is to enhance LEAs' efforts to uncover criminal networks and identify their members. To this end, it capitalises on certain AI technologies: speech and language technologies (SLTs), visual analysis (VA) and network analysis (NA) will form the basis of the ROXANNE platform, which will enhance criminal network analysis capabilities by providing a framework for extracting evidence and actionable intelligence.


From Homer's Iliad and the ancient Greek philosophers to the inventions of Leonardo da Vinci and Thomas Hobbes' Leviathan[1], the quest for Artificial Intelligence (AI) has been an integral part of humanity's journey. Science fiction films in the first half of the 20th century made the concept of AI familiar to the general public, while Alan Turing, through his Turing test in 1950[2], and later John McCarthy in 1955[3] laid its scientific foundations. McCarthy coined the term AI, which was then presented at the Dartmouth conference on artificial intelligence in the summer of 1956[4]. AI is the ability of a machine or a computer program to think and learn. It exhibits certain qualities similar to the human mind, allowing certain activities to be performed the way a human being would perform them[5]. In the 21st century, AI has become a crucial area of research in most scientific fields, such as engineering, education, medicine, business, finance and accounting, marketing, economics, law and security[6]. The main applications of AI include expert systems, natural language processing (NLP), speech recognition, machine learning and machine vision, as well as perception building[7].

Law Enforcement Agencies (LEAs) rely on technology for many parts of their job: identification of criminals, prediction of deviant behaviours, crime scene investigation, tracking of money flows, and tagging and defending against fake news are some aspects of daily police work[8] involving AI. According to Dilek et al. (2015)[9], AI-based technologies and applications (e.g., artificial neural networks, intelligent agents, artificial immune systems, genetic algorithms and fuzzy sets) are currently applied to preventing, detecting and combatting crimes both online and offline. Given that crime is not random but follows specific patterns[10], routine activities[11] and rational decision-making[12], AI technologies enhance critical aspects of police work in detecting and combatting criminal behaviours. To begin with, the ability to effectively recognise the face of a criminal in a crowd is a critical capability for LEAs. Surveillance systems (cameras, closed-circuit television (CCTV), drone cameras, etc.) have been widely adopted by police officers not only for identifying criminals, but also for deterring violent behaviour, investigating specific crime scenes and detecting suspicious objects, vehicles, weapons, etc. AI techniques adopted by LEAs around the world have improved the accuracy of existing facial recognition technologies[13]. Using deep-learning methods and facial analysis based on specific points and angles, AI offers the opportunity to unmask perpetrators who try to conceal their identity behind scarves, masks, helmets and other man-made objects. By feeding a vast number of images (even of low quality), videos, and speech and text data from lawbreakers and past suspects into machine learning software that gradually learns more and more characteristics, LEAs gain sophisticated technology for better surveillance of critical areas with mass gatherings, minimising human error and increasing the accuracy of the final output[14].
The developed AI algorithms can also assist in correlating crime scene clues and objects (taken from police documentation and photographs during the crime scene investigation) with other suspicious objects and weapons under investigation, thus opening possibilities of linking previous offences with current ones[15]. AI advancements are also used to effectively track down suspicious containers and other illegal transportation, often linked with human and/or drug trafficking along with other unlawful activities[16].
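The embedding-comparison step at the heart of such facial recognition pipelines can be illustrated with a toy sketch. A deep network (not shown here) is assumed to have already mapped each face image to a numeric embedding vector; identification then reduces to finding the gallery identity with the highest cosine similarity to a probe image. All identities, vectors and the threshold below are hypothetical illustrations, not any deployed system.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def best_match(probe, gallery, threshold=0.8):
    """Return the gallery identity most similar to the probe embedding,
    or None if no similarity clears the decision threshold."""
    best_id, best_score = None, -1.0
    for identity, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_id, best_score = identity, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)

# Hypothetical gallery of pre-computed embeddings (in practice produced
# by a deep convolutional network from enrolment photographs).
gallery = {
    "suspect_A": [0.9, 0.1, 0.3],
    "suspect_B": [0.1, 0.8, 0.5],
}
probe = [0.85, 0.15, 0.35]  # embedding extracted from a CCTV frame
print(best_match(probe, gallery))
```

The threshold encodes the trade-off the text alludes to: set too low, the system produces false identifications; set too high, it misses genuine matches even on good-quality footage.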

Moving to crime prevention and prediction, AI technologies, especially through machine learning, have capitalised on the advanced processing and analysis of big data[17], providing LEAs with more accurate outputs and gradually enhancing existing crime prediction techniques[18] (statistical analysis, geographical information system methods, etc.). Additionally, they enable an effective response to certain threats. AI technologies are also on the verge of being used by LEAs for crime forecasting and for predicting crime hot spots, high-risk victims, offenders' modus operandi and the likelihood of criminal recidivism[14]. Of course, this promising field of crime prediction through AI technologies depends heavily on the quality and truthfulness of the data fed into the system and the respective algorithms, so as not to produce biased or false results.
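One of the simplest ideas behind hot-spot prediction is to bin past incident locations into a spatial grid and flag the densest cells, as sketched below. The coordinates and cell size are invented for illustration; operational systems rely on far richer statistical and machine-learning models, and inherit the data-quality caveats noted above.

```python
from collections import Counter

def hotspot_cells(incidents, cell_size=0.01, top_n=3):
    """Bin past incident coordinates (lat, lon) into a square grid and
    return the cells with the most incidents -- a naive stand-in for
    the statistical hot-spot methods mentioned in the text."""
    counts = Counter()
    for lat, lon in incidents:
        cell = (int(lat // cell_size), int(lon // cell_size))
        counts[cell] += 1
    return counts.most_common(top_n)

# Hypothetical historical incidents: three clustered, one outlier.
incidents = [
    (51.5010, -0.1410),
    (51.5020, -0.1420),
    (51.5015, -0.1405),
    (51.5200, -0.1000),
]
print(hotspot_cells(incidents))
```

The top-ranked cell is simply the one with the most recorded incidents; a biased or incomplete incident log therefore directly skews which areas get flagged, which is exactly the data-quality risk raised above.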

Having mentioned just some of the AI technologies adopted by LEAs, it is important to underline the legal and ethical considerations that should always be at the forefront of such technological advancements. AI technologies and the algorithms developed should always strive for transparency, robustness against manipulation, and continuous evaluation and validation against societal values and respect for human rights[19]. Since AI systems are created by humans, they run the risk of being biased, resulting in “unintended consequences such as privacy violations, criminal liabilities, and reputation risk”[20]. The need to establish and further develop a regulatory framework for this type of technology is therefore imperative. The European Commission has already issued specific “Ethics Guidelines for Trustworthy Artificial Intelligence”[21], according to which trustworthy AI should be lawful, ethical and robust.

Within this context, the ROXANNE platform will comprise five interactive components: (i) Speaker Identification (SID) to establish relations between different audio sources; (ii) multilingual Automatic Speech Recognition (ASR) for rapid and accurate speech-to-text processing of raw audio materials; (iii) Natural Language Processing (NLP) to identify entities from multilingual textual input; (iv) video and geographical meta-information processing to make use of other visual and spatial information that may accompany the auditory and textual data; and (v) Network (relation) Analysis (NA) to establish connections between these results, enrich them with data from other available sources and a priori knowledge, and analyse the final network to make sense of the cases. Working together, these components will provide complementary strengths for case investigation and suspect identification, monitoring and reasoning over the dynamic networks of crime in an automatic and semi-automatic manner. With a view to the potential legal and ethical issues raised by such technologies, ROXANNE will ensure that the proposed solutions are developed, tested and validated in full compliance with relevant international and EU legal and ethical frameworks, including innovative approaches to data protection management such as privacy by design.
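The network analysis step can be illustrated with a toy sketch: once speaker identification has established who talks to whom, the contacts form a graph whose structural measures point to likely hubs. Here simple degree centrality stands in for the richer analyses the platform envisages; the call records and identities are hypothetical, and this is not the ROXANNE implementation.

```python
from collections import defaultdict

def build_network(call_records):
    """Build an undirected contact network from (caller, callee) pairs,
    as a network-analysis step might after speaker identification."""
    adjacency = defaultdict(set)
    for caller, callee in call_records:
        adjacency[caller].add(callee)
        adjacency[callee].add(caller)
    return adjacency

def degree_centrality(adjacency):
    """Rank members by number of distinct contacts; high-degree nodes
    are candidate hubs of the network."""
    return sorted(adjacency, key=lambda n: len(adjacency[n]), reverse=True)

# Hypothetical call records derived from speaker-identification output.
calls = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"), ("E", "D")]
network = build_network(calls)
print(degree_centrality(network))
```

Even this naive measure shows how linking individually unremarkable calls exposes structure: "A" emerges as the most connected member, a candidate for further investigation.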



[1] Nilsson, N.J., 2009. The Quest for Artificial Intelligence: A History of Ideas and Achievements. Cambridge: Cambridge University Press, pp. 19-21.

[2] Turing, A.M., 1950. Computing Machinery and Intelligence. Mind, 59(236), pp. 433-460.


[4] Moor, J., 2006. The Dartmouth College Artificial Intelligence Conference: The Next Fifty Years. AI Magazine, 27(4), pp. 87-91.


[6] Oke, S.A., 2008. A Literature Review on Artificial intelligence. International Journal of Information and Management Sciences, 19(4), pp. 535-570.



[9] Dilek, S. et al., 2015. Applications of Artificial Intelligence Techniques to Combating Cyber Crimes: A Review. International Journal of Artificial Intelligence & Applications, 6(1), pp. 21-39.

[10] See Crime Pattern Theory in Brantingham, P.L. and Brantingham, P.J., 1993. Environment, Routine and Situation: Toward a Pattern Theory of Crime. In: R.V. Clarke and M. Felson, eds., 2004. Routine Activity and Rational Choice Theory: Advances in Criminological Theory, Volume 5. London: Transaction Publishers. Ch. 11.

[11] See Routine Activity Theory in Cohen, L.E. and Felson, M., 1979. Social Change and Crime Rate Trends: A Routine Activity Approach. American Sociological Review, 44(4), pp. 588-608.

[12] See Rational Choice Theory in Felson, M. and Clarke, R.V., 1998. Opportunity Makes the Thief – Practical Theory for Crime Prevention. Police Research Series-Paper 98. London: Home Office.


[14] Rigano, Ch., 2019. Using Artificial Intelligence to Address Criminal Justice Needs. NIJ Journal, 280.



[17] The term big data usually refers to data characterised by the 5Vs: Volume (the size of the data), Variety (data from different sources or in different formats), Velocity (the speed at which data is generated and at which it needs to be available for processing), Veracity (trustworthiness of the data sources), and Value (usefulness), which together address the quality and the technical aspects of these data.

[18] Grover, V., Adderley, R. and Bramer, M., 2007. Review of Current Crime Prediction Techniques. In: R. Ellis, T. Allen and A. Tuson, eds. Applications and Innovations in Intelligent Systems XIV. SGAI 2006. London: Springer.