How AI Is Changing The Way Police Fight Crime

In recent years, artificial intelligence (AI) has been applied across many fields to simplify tasks and increase efficiency. While AI has been employed in sectors like transportation, finance, energy, and healthcare for some time, its adoption in policing is relatively recent. AI has the potential to revolutionize the criminal justice system, from the way police investigate crimes to the way courts sentence offenders, and it can be brought to bear on many types of crime, making it a powerful tool for law enforcement. Law enforcement agencies around the world are leveraging AI technology to enhance the effectiveness of their officers, with the objective not only of preventing crime but also of solving it.

Current Applications of AI by Police

AI is still new to the law enforcement community, so its applications have not yet been fully realized. Nonetheless, it’s already making an impact in key areas like surveillance, crime prevention, and crime-solving. With enhanced imaging technologies and object and facial recognition, AI reduces the need for labor-intensive tasks, freeing officers to handle more complex work. It may also help capture criminals who would otherwise go free and solve crimes that would otherwise go undetected.

Some of the key areas where AI is already being used are:

Facial Recognition

AI-powered facial recognition technology helps police departments identify criminals and missing persons from image data. It offers greater accuracy than manual review and saves officers time by analyzing images and matching faces more efficiently. Advanced systems can even pick out a single face in a crowd, aiding in the capture of suspects. Closed-circuit cameras with facial recognition capabilities are deployed in public areas to identify and apprehend troublemakers, and the technology is also used for surveillance at sensitive locations such as airports and train stations. The positive results achieved with AI in policing have contributed to its increasing adoption.
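As a rough, hedged illustration of how this kind of matching is commonly done (embedding comparison, not any specific vendor's pipeline), the sketch below assumes a hypothetical earlier step has already converted each face image into a numeric embedding vector; it then compares a probe face against a gallery of known identities. The function names and the 0.6 threshold are illustrative assumptions.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_face(probe: np.ndarray, gallery: dict, threshold: float = 0.6):
    """Compare a probe embedding against a gallery of known identities.

    Returns (identity, score) for the best match, or (None, score) if no
    candidate clears the similarity threshold.
    """
    best_id, best_score = None, -1.0
    for identity, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_id, best_score = identity, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)
```

In real systems the embedding model, the chosen threshold, and the size and quality of the gallery all strongly affect accuracy and false-match rates.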

Surveillance Cameras

AI is being applied to surveillance camera footage to not only recognize faces but also identify objects and activities like car accidents. This helps police monitor large events and detect potential threats. AI can also analyze street footage to identify vehicles based on set characteristics, aiding vehicle-related investigations. Drone cameras equipped with AI capabilities further assist in search-and-rescue efforts.

AI cameras and video technology play a significant role in assisting law enforcement at crime scenes. In cases where the crime scene covers a large area that is inaccessible by foot, AI can provide insights and assist in finding clues.

Predictive Policing

Predictive policing involves using data and statistical models to predict where crimes are likely to occur, who might commit them, and who could be potential victims. It shifts the focus from responding to crimes to preventing them by allocating resources strategically, helping police target high-crime areas for additional patrolling and surveillance. AI analysis of historical patterns also assists in identifying individuals at risk of committing crimes or re-offending. Predictive policing has been deployed in several countries and has been credited with reductions in violent crime in some jurisdictions.

AI tools help law enforcement agencies identify patterns and predict criminal activities that might go unnoticed by humans. Artificial neural networks are used to analyze extensive data sources, including social media posts, Wi-Fi networks, and IP addresses. AI in policing also aids in detecting crimes such as money laundering and fraud.
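To make the place-based idea concrete, here is a minimal, hedged sketch of the simplest possible approach: counting historical incidents per map grid cell and treating the busiest cells as candidate hotspots. The coordinates, cell size, and function name are made up for illustration; production systems use far richer models.

```python
from collections import Counter

def hotspot_scores(incidents, cell_size=0.005):
    """Count historical incidents per lat/lon grid cell as a crude hotspot score.

    incidents: iterable of (latitude, longitude) pairs.
    """
    counts = Counter()
    for lat, lon in incidents:
        cell = (round(lat / cell_size), round(lon / cell_size))
        counts[cell] += 1
    return counts

# Toy history: the most frequently hit cells are candidates for extra patrols.
history = [(26.8467, 80.9462), (26.8470, 80.9458), (26.8900, 80.9100)]
print(hotspot_scores(history).most_common(2))
```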

Robots

While replacing the entire police force with robots is not imminent, robots are already used for various tasks. Law enforcement agencies employ physical robots powered by AI to perform jobs considered unsafe for humans: they can handle mundane tasks and enter dangerous locations to identify potential threats, improving officer safety. Some robots are equipped to carry out controlled detonations of suspected explosive devices, enhancing public safety.

Discovering Non-violent Crimes

AI is effective in spotting anomalies in patterns, making it valuable for discovering non-violent crimes like fraud and money laundering. Banks and law enforcement collaborate to utilize AI in detecting counterfeit goods and bills.

Pre-trial Release & Parole

AI systems aid the criminal justice system in assessing flight risk and determining parole terms for offenders. These systems analyze complex data sets, drawing on crime data and personal information to support more efficient decision-making.

Other Organizations Using AI to Detect and Report Crime

For example, delivery companies can use AI to identify prohibited goods in parcels and report them to the authorities. Pharmacies and retail stores can employ AI to flag suspicious customers, such as those purchasing large quantities of chemicals or controlled substances. Shipping companies can use AI to combat human trafficking by identifying containers used for illegal transportation.

Social Media Monitoring and Analysis

AI is used to continuously monitor social media messages, detecting threatening behavior and helping keep users safe.

Practical Applications of AI Tools in Law Enforcement

AI is being used in policing in a variety of ways, including:

PredPol, crime-prevention software used by the New York City Police Department, has been credited with reducing crime in the city by up to 10%. The software uses historical crime data to identify patterns and predict where and when crimes are likely to occur; that information can then be used to deploy officers more effectively and prevent crimes from happening.

Facial recognition technology in the United Kingdom is used to identify suspects and track their movements. It can also help prevent crime by deterring would-be offenders. Facial recognition technology is already used by police departments in over 60 countries.

AI for smart prison systems in Hong Kong and China is used to monitor prisoners and prevent escapes. It can also be used to provide prisoners with rehabilitation services and help them reintegrate into society after their release.

CASE STUDY — I: Trinetra

Problem: The Uttar Pradesh (UP) Police faced challenges in solving criminal cases rapidly and efficiently, given the state’s high crime rate, and sought to enhance their efficiency by leveraging technology, including AI.

Solution: In December 2018, the UP Police launched an AI-powered mobile application called ‘Trinetra.’ Developed by Staqu, the application contains a database of 5 lakh (500,000) criminals, including their pictures, addresses, and criminal histories, and uses facial recognition, visual search, machine learning, and deep learning technologies.

Functionality: Trinetra enables police officers to register and search for criminals using simple biometric inputs such as images or videos. It connects to the databases of prisons, District Crime Records Bureaus (DCRBs), State Crime Records Bureaus (SCRBs), and the Crime and Criminal Tracking Network and Systems (CCTNS), providing real-time access to deduplicated, unambiguous data on criminals active in the state.

Impact: Trinetra has already assisted the police in apprehending a high-profile criminal involved in a shoot-out in Lucknow. The application is set to expand its coverage to 75 districts, 6 Government Railway Police (GRP) units, Anti-Terrorist Squads (ATS), and Special Task Forces (STF). It will be used by over 1,500 police officials, including station house officers, GRP inspectors, and senior police officials.

Future Development: The UP Police plans to introduce additional features in Trinetra, such as vehicle search technology, voice sample search using AI-powered speaker identification, fingerprint-based identification, and active geo-fencing of police personnel. These enhancements aim to further improve the application’s capabilities in criminal identification and investigation.

CASE STUDY — II: Clearview AI

Background: Clearview AI is a facial recognition firm that has conducted nearly a million searches for US police. The company has amassed a database of 30 billion images scraped from platforms like Facebook without users’ permission. The software is reportedly used by hundreds of police forces across the US, though many cities, including Portland, San Francisco, and Seattle, have banned its use.

Controversial Privacy Practices: Clearview AI has faced multiple privacy breaches and has been fined millions of dollars in Europe and Australia. Critics argue that the use of Clearview’s technology by the police infringes on privacy rights and creates a “perpetual police line-up” by comparing suspect photos to people’s faces without their consent.

Functionality: Clearview’s system allows law enforcement to upload a photo of a face and search for matches in its database of billions of collected images. The software provides links to where matching images appear online, and it is considered one of the most powerful and accurate facial recognition companies globally.

Lack of Regulation: Facial recognition by the police operates with few laws or regulations. The true extent of mistaken identities resulting from facial recognition is unclear due to limited data and transparency. Civil rights advocates call for police to openly disclose the use of Clearview and subject its accuracy to independent testing and scrutiny in court.

Accuracy and Concerns: Clearview claims high accuracy rates, but critics question the reliability of the technology, especially when using images from public sources like CCTV.

Testimony and Legal Use: The CEO stated that Clearview does not want to testify in court regarding the accuracy of its algorithm, as investigators use other methods to verify results. However, Clearview has been used in specific cases, such as finding crucial witnesses and helping in defense efforts. Defense lawyers argue that both prosecutors and defenders should have access to the same technology.

CASE STUDY — III: Revolutionizing school safety with AI technology

License plate recognition (LPR) technology can be integrated into existing security cameras to identify unauthorized vehicles on school premises, potentially preventing dangerous situations.

Facial recognition technology can be used to identify individuals who are not authorized to be on campus, enhancing security measures.

AI-powered virtual assistants can provide real-time emergency information and guidance to students and staff, improving emergency preparedness and response.

Data analytics can help schools identify patterns and trends that indicate safety threats, such as bullying or violence, enabling timely interventions.

AI can assist in monitoring and tracking potential threats, including cyberbullying and online hazards.

Opportunities for AI in Law Enforcement

While artificial intelligence (AI) is not yet being used to its full potential in policing, researchers are exploring a number of ways it could improve law enforcement.

Here are some specific examples of how AI is being researched for use in policing:

Biometric identification

AI-powered biometric technology can be used to identify suspects and match them to criminal databases accurately and swiftly.

Body cameras and wearable technology

Integrating wearable technologies with body cameras can enhance threat detection and emergency response capabilities.

Drones and autonomous vehicles

AI-powered sensors and cameras on drones and autonomous vehicles can improve surveillance of public areas.

Natural language processing (NLP)

AI-powered NLP systems can facilitate communication with non-English-speaking communities, reducing language barriers in law enforcement interactions.

Enhance Prison Management

AI can improve prison security and help in the treatment of inmates with addiction issues. It can also help select housing assignments and groupings of inmates that minimize conflict.

AI-based dispatch systems for emergency response

AI can automate the process of dispatching officers to emergencies, making the response quicker and more effective. By analyzing data from various sources, AI algorithms can determine the best course of action in real-time.
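As a very small, hedged sketch of the dispatch idea (choosing a response unit from live data), the example below simply picks the nearest available unit by straight-line distance. The unit records, field names, and distance measure are illustrative assumptions; real computer-aided dispatch systems weigh road networks, unit type, and incident priority.

```python
import math

def nearest_available_unit(incident, units):
    """Pick the closest free unit to an incident by straight-line distance.

    incident: (lat, lon); units: list of dicts with 'id', 'lat', 'lon', 'available'.
    """
    free = [u for u in units if u["available"]]
    if not free:
        return None
    return min(free, key=lambda u: math.hypot(u["lat"] - incident[0],
                                              u["lon"] - incident[1]))

units = [
    {"id": "unit-7", "lat": 40.71, "lon": -74.00, "available": True},
    {"id": "unit-3", "lat": 40.73, "lon": -73.99, "available": False},
]
print(nearest_available_unit((40.72, -74.01), units))  # -> unit-7
```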

Traffic Management

AI can help manage traffic patterns and control traffic lights in real-time, enabling efficient routing during planned and unplanned situations. It can also facilitate the movement of emergency vehicles.

Police-Related Citizen Service Delivery

Policing-related services, such as the registration of FIRs (First Information Reports) and investigation of cases, can utilize an AI-based Intelligent Complaint Registration Application. This application could be hosted online or through smart interfaces and employ technologies like Natural Language Processing, speech recognition, and deep learning to streamline the process. By reducing the human factor in service delivery, such tools can help ensure standardized, truthful responses, equal access, and other benefits for citizens.
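As a rough sketch of the kind of language processing such a complaint-registration tool might rely on (not the actual application described above), the example below trains a tiny text classifier on made-up complaint snippets and routes a new complaint to a category. The categories, training texts, and the scikit-learn pipeline are illustrative assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy examples standing in for labelled historical complaints.
texts = [
    "my phone was stolen from my bag on the bus",
    "someone broke into my house last night and took jewellery",
    "a person keeps sending me threatening messages online",
    "my wallet was snatched near the market",
]
labels = ["theft", "burglary", "harassment", "theft"]

# TF-IDF features plus a simple classifier route new complaints to a category.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, labels)

print(model.predict(["my laptop was taken from my parked car"]))
```

A deployed system would need far more training data, human review of the assigned category, and support for multiple languages.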

Risks & Considerations

Today, artificial intelligence is being used by law enforcement for facial recognition and even predictive policing. It can help solve and prevent crimes, but it’s not foolproof. That’s resulted in wrongful arrests and continued racial profiling in policing.

The development and implementation of AI technology have outpaced the creation of laws and regulations to govern its use. This has led to concerns about the impact of AI on human rights.

Lack of understanding and digital literacy: People may not be able to question or challenge the results produced by AI systems. There is a need for a proper interpretation of AI-generated insights.

Loss of privacy: AI has given states the power to create total surveillance states, where individuals can be constantly monitored. This raises concerns about the violation of privacy and other human rights.

Discrimination and bias: AI algorithms can amplify existing social biases due to biased input data, leading to discrimination in predictive policing and criminal justice systems.

Violation of the right to equality: When AI systems are biased, they can infringe on an individual’s right to be treated equally. Predictive tools and risk assessment algorithms may flag certain individuals as high risk based on biased historical data, undermining the principle of “innocent until proven guilty.”

Lack of transparency and fairness: The “black-box” nature of AI algorithms, and their reliance on big data sets that may bear little direct relation to the crime a person is accused of, can undermine transparency and fairness in decision-making, infringing on the right to a fair trial.

Accountability: When AI systems are relied upon by police or courts, it raises questions about accountability. If these systems produce biased or unfair results, it becomes challenging to hold anyone accountable for the consequences.

Vulnerability to hacking or manipulation: Heavy dependence on these technologies also means that AI systems, and the data they rely on, can be hacked or manipulated.

Key Takeaways

Usage of AI in Analytics:

We have seen how useful AI-driven predictive policing can be; it is one application of AI in analytics.

AI analytics combines artificial intelligence and machine learning with traditional analytics to generate insights, automate processes, deliver predictions, and drive actions. It provides a comprehensive view of operations, customers, competitors, and the market, helping organizations understand what happened, why it happened, what’s likely to happen next, and the potential outcomes of different actions. These capabilities go beyond traditional analytics and let organizations harness their data to make better-informed decisions.
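As a small, hedged illustration of the predictive side of this (the “what’s likely to happen next” step), the sketch below fits a simple linear trend to toy monthly incident counts and projects the next month. The numbers and the one-variable model are assumptions for demonstration only.

```python
import numpy as np

# Toy monthly incident counts; fit a linear trend and project the next month.
months = np.arange(1, 13)
incidents = np.array([80, 78, 85, 90, 88, 95, 97, 102, 99, 105, 110, 108])

slope, intercept = np.polyfit(months, incidents, 1)
forecast = slope * 13 + intercept
print(f"trend: {slope:.1f} incidents/month, month-13 forecast: {forecast:.0f}")
```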

The benefits of AI analytics include enhanced decision-making, improved efficiency and productivity, enhanced customer experiences, and freeing up data teams to focus on strategic initiatives.

Hence, it is important for people in different fields to understand how this technology might benefit them and to make informed changes to their operations to incorporate it, so that they can get the best results from it.

The top tools for AI-powered analytics that can be used for any business are:

Adobe Analytics uses AI to analyze data from different online and offline sources, then visualize insights from your data.

BlueConic is a customer data platform that turns customer data into person-level profiles.

Crayon is a market and competitive intelligence tool that enables businesses to track, analyze, and act on everything happening in their market.

Google Analytics uses machine learning to surface insights and answer your analytics questions.

Google Cloud’s smart analytics solutions use machine learning to derive insights from data and make predictions.

Helixa helps you produce detailed personas based on audience interests, demographics, and psychographics.

Invoca is an AI-powered call-tracking and conversational analytics tool.

IBM Watson + IBM Planning Analytics can make predictions across finance, operations, and sales.

AI-related Jobs in all kinds of Companies:

The demand for AI jobs is growing rapidly across industries, and law enforcement is no exception; the use of AI has been highlighted as a significant improvement in the field. Technology jobs in policing span a wide range of roles, such as electronic surveillance officers, digital forensic investigators, real-time crime analysts, social media researchers, and accident reconstructionists. Platforms like LinkedIn and Indeed list numerous technology positions in law enforcement, including data analysts, computer forensic instructors, intelligence analysts, research analysts, and police business systems analysts.


Also:


How A New Type Of AI Is Helping Police Skirt Facial Recognition Bans

Adoption of the tech has civil liberties advocates alarmed, especially as the government vows to expand surveillance of protesters and students.

Police and federal agencies have found a controversial new way to skirt the growing patchwork of laws that curb how they use facial recognition: an AI model that can track people using attributes like body size, gender, hair color and style, clothing, and accessories.

The tool, called Track and built by the video analytics company Veritone, is used by 400 customers, including state and local police departments and universities all over the US. It is also expanding federally: US attorneys at the Department of Justice began using Track for criminal investigations last August. Veritone’s broader suite of AI tools, which includes bona fide facial recognition, is also used by the Department of Homeland Security—which houses immigration agencies—and the Department of Defense, according to the company.

“The whole vision behind Track in the first place,” says Veritone CEO Ryan Steelberg, was “if we’re not allowed to track people’s faces, how do we assist in trying to potentially identify criminals or malicious behavior or activity?” In addition to tracking individuals where facial recognition isn’t legally allowed, Steelberg says, it allows for tracking when faces are obscured or not visible.

The product has drawn criticism from the American Civil Liberties Union, which—after learning of the tool through MIT Technology Review—said it was the first instance they’d seen of a nonbiometric tracking system used at scale in the US. They warned that it raises many of the same privacy concerns as facial recognition but also introduces new ones at a time when the Trump administration is pushing federal agencies to ramp up monitoring of protesters, immigrants, and students.

Veritone gave us a demonstration of Track in which it analyzed people in footage from different environments, ranging from the January 6 riots to subway stations. You can use it to find people by specifying body size, gender, hair color and style, shoes, clothing, and various accessories. The tool can then assemble timelines, tracking a person across different locations and video feeds. It can be accessed through Amazon and Microsoft cloud platforms.

In an interview, Steelberg said that the number of attributes Track uses to identify people will continue to grow. When asked if Track differentiates on the basis of skin tone, a company spokesperson said it’s one of the attributes the algorithm uses to tell people apart but that the software does not currently allow users to search for people by skin color. Track currently operates only on recorded video, but Steelberg claims the company is less than a year from being able to run it on live video feeds.

Agencies using Track can add footage from police body cameras, drones, public videos on YouTube, or so-called citizen upload footage (from Ring cameras or cell phones, for example) in response to police requests.

“We like to call this our Jason Bourne app,” Steelberg says. He expects the technology to come under scrutiny in court cases but says, “I hope we’re exonerating people as much as we’re helping police find the bad guys.” The public sector currently accounts for only 6% of Veritone’s business (most of its clients are media and entertainment companies), but the company says that’s its fastest-growing market, with clients in places including California, Washington, Colorado, New Jersey, and Illinois.

That rapid expansion has started to cause alarm in certain quarters. Jay Stanley, a senior policy analyst at the ACLU, wrote in 2019 that artificial intelligence would someday expedite the tedious task of combing through surveillance footage, enabling automated analysis regardless of whether a crime has occurred. Since then, lots of police-tech companies have been building video analytics systems that can, for example, detect when a person enters a certain area. However, Stanley says, Track is the first product he’s seen make broad tracking of particular people technologically feasible at scale.

“This is a potentially authoritarian technology,” he says. “One that gives great powers to the police and the government that will make it easier for them, no doubt, to solve certain crimes, but will also make it easier for them to overuse this technology, and to potentially abuse it.”

Chances of such abusive surveillance, Stanley says, are particularly high right now in the federal agencies where Veritone has customers. The Department of Homeland Security said last month that it will monitor the social media activities of immigrants and use evidence it finds there to deny visas and green cards, and Immigration and Customs Enforcement has detained activists following pro-Palestinian statements or appearances at protests.

In an interview, Jon Gacek, general manager of Veritone’s public-sector business, said that Track is a “culling tool” meant to speed up the task of identifying important parts of videos, not a general surveillance tool. Veritone did not specify which groups within the Department of Homeland Security or other federal agencies use Track. The Departments of Defense, Justice, and Homeland Security did not respond to requests for comment.

For police departments, the tool dramatically expands the amount of video that can be used in investigations. Whereas facial recognition requires footage in which faces are clearly visible, Track doesn’t have that limitation. Nathan Wessler, an attorney for the ACLU, says this means police might comb through videos they had no interest in before.

“It creates a categorically new scale and nature of privacy invasion and potential for abuse that was literally not possible any time before in human history,” Wessler says. “You’re now talking about not speeding up what a cop could do, but creating a capability that no cop ever had before.”

Track’s expansion comes as laws limiting the use of facial recognition have spread, sparked by wrongful arrests in which officers have been overly confident in the judgments of algorithms. Numerous studies have shown that such algorithms are less accurate with nonwhite faces. Laws in Montana and Maine sharply limit when police can use it—it’s not allowed in real time with live video—while San Francisco and Oakland, California have near-complete bans on facial recognition. Track provides an alternative.

Though such laws often reference “biometric data,” Wessler says this phrase is far from clearly defined. It generally refers to immutable characteristics like faces, gait and fingerprints rather than things that change, like clothing. But certain attributes, such as body size, blur this distinction.

Consider also, Wessler says, someone in winter who frequently wears the same boots, coat, and backpack. “Their profile is going to be the same day after day,” Wessler says. “The potential to track somebody over time based on how they’re moving across a whole bunch of different saved video feeds is pretty equivalent to face recognition.”

In other words, Track might provide a way of following someone that raises many of the same concerns as facial recognition, but isn’t subject to laws restricting use of facial recognition because it does not technically involve biometric data. Steelberg said there are several ongoing cases that include video evidence from Track, but that he couldn’t name the cases or comment further. So for now, it’s unclear whether it’s being adopted in jurisdictions where facial recognition is banned.


Also:


How Policing Agencies Use AI

AI is transforming policing, sometimes in dramatic ways. Face recognition, predictive policing, and location-tracking technologies — once the stuff of science fiction — now are being adopted by law enforcement agencies large and small.

This explainer gives an overview of some of the ways police are using AI to investigate and deter crime, including:

Identifying unknown individuals or verifying their identity;

Tracking people’s locations and movements;

Detecting crime, anomalies, or suspicious events;

Predicting future crimes, perpetrators, and victims;

Analyzing emotions, including deception;

Determining associations between individuals; and

Managing and analyzing evidence.

The effectiveness of some of these tools is unclear, and the inclusion of a tool on this list is not meant to indicate that it performs well. This document also does not evaluate the benefits or harms of these tools, which will be addressed in separate explainers. Rather, this document is meant to explain how police are using AI and give a sense of the breadth of such uses.

Identification

AI systems can be used to identify individuals or verify their identity.

Face recognition is a computer vision technology that analyzes faces in an image. It can be used for things such as face identification (the identification of an individual based on a comparison with a pool of known individuals) and face verification (verifying that a given face corresponds to a specific person — for example, verifying that a person’s face matches the photo on their identification card).

Iris recognition identifies individuals by their iris patterns. A specialized camera takes an image of the boundaries and textures of the iris, then maps the iris image using over 200 distinct features.

Automated fingerprint identification has been in use for decades; now, AI systems can be used to enable better matching even when a fingerprint is distorted or incomplete. AI also is being used to develop new systems which can take a person’s fingerprint without physical contact.

Palm-print identification, like fingerprint identification, is accomplished through analysis of ridges and valleys on the skin’s surface. Some claim that this technique has advantages over face recognition technology — for example, palms have more details to tell one person from another, and it is harder to scan a person’s palm without their consent.

Ear biometrics can be used to identify individuals who are difficult to identify through face recognition technology — for example, due to the individual wearing a mask.

Gait recognition analyzes how people walk in order to identify them. It has advantages over other biometric systems, as it enables the identification of persons from a distance. Notably, accuracy is a serious issue due to the variability of environments and human bodies.

Voice recognition systems are used to determine the identity of a person based on audio of their voice. These systems use specialized models known as acoustic models to process and analyze audio files.

DNA analysis has long been used by law enforcement to identify suspects. Now, AI is being used to improve this process and make it more efficient. New AI-powered forms of DNA analysis are now being developed, such as forensic DNA phenotyping, which attempts to predict externally visible characteristics such as eye, hair, and skin color, as well as the geographic origins of a person’s ancestors.

Tracking

Policing agencies use AI systems to track the locations or movements of individuals.

Tracking algorithms can detect objects and/or individuals in video files and track them across cameras based on the appearance, velocity, and motion of the thing being tracked. This feature could be used, for example, to search stored video footage from a particular neighborhood and identify all the times that a given individual was recorded.
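To give a hedged sense of how cross-camera association can work in principle (not how any particular product works), the sketch below greedily assigns each new detection to the existing track whose last appearance embedding is closest, or opens a new track if nothing is close enough. The embedding inputs, threshold, and naming scheme are assumptions.

```python
import numpy as np

def assign_detection(detection_emb, tracks, max_distance=0.5):
    """Assign a new detection to the nearest existing track by appearance distance.

    tracks: dict mapping track_id -> last appearance embedding (np.ndarray).
    Opens a new track if nothing is close enough.
    """
    best_id, best_dist = None, float("inf")
    for track_id, emb in tracks.items():
        dist = float(np.linalg.norm(detection_emb - emb))
        if dist < best_dist:
            best_id, best_dist = track_id, dist
    if best_id is not None and best_dist <= max_distance:
        tracks[best_id] = detection_emb      # refresh the track's appearance
        return best_id
    new_id = f"track-{len(tracks) + 1}"      # no good match: start a new track
    tracks[new_id] = detection_emb
    return new_id
```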

Vehicle-surveillance systems, also known as automated license plate readers, detect information about passing vehicles, such as a vehicle’s color, make, and license plate number. This data can be stored, along with the location and time of capture, thus enabling police to ascertain the locations of vehicles over time. Some agencies now are using drone-based vehicle-surveillance systems.
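The core data operation here is simple: store each plate read with its time and location, then query a plate's history. The minimal sketch below uses an in-memory list and hypothetical helper names purely to show the shape of that workflow; a real system would use a database with retention limits and access controls.

```python
from datetime import datetime, timezone

plate_reads = []  # in a real deployment this would be a database table

def record_read(plate, lat, lon, when=None):
    """Store one license-plate read with its location and capture time."""
    plate_reads.append({"plate": plate, "lat": lat, "lon": lon,
                        "time": when or datetime.now(timezone.utc)})

def location_history(plate):
    """Return every recorded sighting of a plate, oldest first."""
    return sorted((r for r in plate_reads if r["plate"] == plate),
                  key=lambda r: r["time"])

record_read("ABC1234", 40.71, -74.00)
record_read("ABC1234", 40.75, -73.98)
print(location_history("ABC1234"))
```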

Detection

Policing agencies use AI to detect crime, anomalies, or suspicious events.

Anomaly detection seeks to identify events or data points that are anomalous — that is, that deviate from what is expected. This technology is widely used by the private sector — for example, by financial institutions to detect fraudulent transactions or by network administrators to detect cyberattacks.
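As a brief, hedged example of the general technique (using generic off-the-shelf tooling, not any vendor's system), the sketch below fits an Isolation Forest to mostly normal synthetic "transactions" plus two injected outliers and flags the points that deviate from the bulk of the data.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=100, scale=5, size=(500, 2))   # typical transactions
odd = np.array([[100.0, 300.0], [400.0, 95.0]])        # two injected outliers
data = np.vstack([normal, odd])

detector = IsolationForest(contamination=0.01, random_state=0).fit(data)
flags = detector.predict(data)          # -1 marks points the model deems anomalous
print(np.where(flags == -1)[0])         # indices of the flagged rows
```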

Some vendors have developed systems designed to alert policing agencies to events such as shoplifting, fights, loitering, dangerous driving, and casing a location. At least one vendor is leveraging vehicle surveillance system data to try to identify driving patterns that may be associated with drug trafficking activity or other unlawful conduct.

Gunshot detection systems use a network of outdoor acoustic sensors to detect and locate gunfire and alert police. Policing agencies use gunshot detection systems to reduce response times, in the hope of locating a shooter, getting help to victims, or finding evidence such as shell casings. New systems use two-source detection — sound and flash — to confirm gunshots.
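One building block behind acoustic location is time difference of arrival (TDOA): if the same bang reaches two sensors at different times, the gap times the speed of sound gives the difference in distance to each sensor. The tiny sketch below shows only that arithmetic, with assumed arrival times; real systems intersect such constraints from many sensor pairs.

```python
SPEED_OF_SOUND = 343.0  # metres per second at roughly 20 °C

def range_difference(arrival_a, arrival_b):
    """How much farther the source is from sensor A than from sensor B,
    given the two arrival times (time difference of arrival, TDOA)."""
    return (arrival_a - arrival_b) * SPEED_OF_SOUND

# A bang heard 0.12 s later at sensor A than at sensor B is about 41 m
# farther from A; combining several sensor pairs narrows down the source.
print(range_difference(0.52, 0.40))  # ~41.2
```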

Weapons detection systems are used to identify the presence of weapons. Computer vision-based systems analyze images to detect objects that appear to be weapons. Other vendors use sensors and analytics to detect concealed weapons and identify their location — an alternative to traditional metal detectors.

Drug detection systems are being used to detect drugs on-site using mobile spectrometers, as opposed to sending samples to a lab.

Prediction

Policing agencies use AI to try to predict the location and time of future crime, as well as those who may perpetrate or be the victims of it.

Place-based predictive policing systems use historical crime data to identify areas prone to crime, and at what times. Systems also can analyze geographic features that increase the risk of crime, known as risk-terrain analysis.

Person-based predictive policing systems seek to identify individuals who are at risk of committing crimes or becoming a victim. This can be based on data such as one’s risk factors for violence or becoming a victim, and/or their frequenting high-crime locations.

Recognizing Emotions

Policing agencies are experimenting with AI systems to analyze an individual’s sentiments or emotions.

Lie detection systems claim to track eye movements and analyze micro-expressions to determine whether an individual is engaged in deception. Some systems are designed specifically for law enforcement use.

Sentiment analysis is a natural language processing technique designed to classify individuals’ sentiment as positive, negative, or neutral. Affective computing, which goes beyond sentiment analysis, seeks to understand and interpret specific emotions based on facial expressions, voice intonations, text, and physiological signals. Sentiment analysis and affective computing might be used, for example, to flag problematic police interactions captured on body-worn cameras for supervisor review.
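For a hedged sense of what polarity classification means in its simplest form, the toy sketch below scores text against tiny hand-made positive and negative word lists. The word lists and example sentences are invented for illustration; production systems use trained language models rather than lexicons like this.

```python
POSITIVE = {"thank", "thanks", "calm", "helpful", "resolved", "polite"}
NEGATIVE = {"angry", "threat", "shouting", "abusive", "afraid"}

def classify_sentiment(text: str) -> str:
    """Toy lexicon-based polarity: positive, negative, or neutral."""
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(classify_sentiment("the officer was calm and polite"))        # positive
print(classify_sentiment("the subject was shouting and abusive"))   # negative
```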

Identifying Associations

Policing agencies use AI systems to help detect associations among individuals.

Convoy analysis is a feature of vehicle-surveillance systems (license plate readers) that identifies vehicles that travel together and thus are presumably associated with one another. It allows officers to enter a license plate number and search for related vehicles.
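A hedged sketch of the underlying idea: count how often another plate is read at the same camera within a short time window of a target plate, and rank the co-travellers. The record layout, the two-minute window, and the function name are assumptions for illustration only.

```python
from collections import Counter
from datetime import timedelta

def convoy_candidates(target_plate, reads, window=timedelta(minutes=2)):
    """Count how often other plates appear at the same camera within a short
    time window of the target plate.

    reads: list of dicts with 'plate', 'camera', and 'time' (datetime).
    """
    target_hits = [r for r in reads if r["plate"] == target_plate]
    counts = Counter()
    for hit in target_hits:
        for r in reads:
            if (r["plate"] != target_plate
                    and r["camera"] == hit["camera"]
                    and abs(r["time"] - hit["time"]) <= window):
                counts[r["plate"]] += 1
    return counts.most_common()
```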

Social network analysis tools suggest how individuals are connected in society, visualized through graphs. For example, AI tools have been used to identify alleged associates based on social media data. Machine learning algorithms are used to identify patterns, trends, and anomalies in social networks.

Evidence Management and Analytics

Policing agencies use AI to find potentially relevant evidence in large datasets.

Automated metadata tagging can automatically tag and label digital evidence, helping investigators to find relevant evidence in the future. Some body-worn camera systems use AI to tag and label videos with relevant contextual information, helping police locate specific events within large video databases.

Evidence matching tools automatically search an agency’s databases to find evidence that might be related to an incident under investigation.

CSAM detection tools detect and flag the existence of child sexual abuse material (CSAM) on devices, helping police to locate such materials and identify victims more quickly.

Transcription tools can be used to transcribe audio automatically from video and audio files. This enables agencies to search for keywords across potentially thousands of videos.
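Once audio has been transcribed, the keyword search itself is straightforward. The hedged sketch below assumes transcripts have already been produced as timestamped text segments (the data layout and example segments are invented) and simply returns every segment containing a keyword.

```python
def search_transcripts(transcripts, keyword):
    """Find every transcript segment containing a keyword.

    transcripts: dict mapping video_id -> list of (timestamp_seconds, text).
    Returns (video_id, timestamp, text) tuples for each hit.
    """
    keyword = keyword.lower()
    return [(vid, ts, text)
            for vid, segments in transcripts.items()
            for ts, text in segments
            if keyword in text.lower()]

transcripts = {
    "bodycam_0412": [(12.5, "unit seven responding to the call"),
                     (47.0, "suspect headed north on main street")],
}
print(search_transcripts(transcripts, "main street"))
```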