Introduction

This scenario focuses on AI-powered predictive policing in the year 2025. It considers the issues such applications raise, their impacts, and how policymakers in particular should respond to those prospective impacts.

The structure of this scenario differs somewhat from the others in that it begins with a vignette, which is then dissected in the sections that follow.

Vignette

In 2025, many police forces across Europe are adopting predictive policing technologies in response to cuts in human resource budgets. Those cuts led to a rise in crime rates, and many law enforcement authorities (LEAs) began experimenting with predictive policing technologies as a way of cutting crime before it happens. After some false starts, such technologies have evolved as remarkably as facial recognition technologies. Smart information systems, notably artificial intelligence algorithms, are within the reach of all European LEAs, which can now feed them with the vast swathes of data to which they have access. Because these systems process data intelligently and return usable information in real time, LEAs have been experimenting with a range of applications. Some have been developed in-house by national forces, some through the European Commission’s Horizon Europe research programme, but many are the result of collaborations with private-sector players. In some cases, these private initiatives include or result in proprietary data of benefit to the private-sector partners.

As one would expect, some approaches and technologies for predictive policing have proven to be better than others. The intelligence-led policing approaches trialled by Pol-Intel in Denmark have served as models of police access to and use of many disparate data sets. The more ambitious applications go beyond accessing data to using those data to make predictions regarding incidents of future crime. Most predictive policing applications have drawn on location-based data to define increasingly localised “hot spots” on which the police should focus attention at particular times, while others draw on personal data to identify likely and repeat offenders. Other applications aim to predict likely and repeat victims of crime in cases such as domestic violence, or those at risk of becoming offenders in the future. Still other predictive policing applications have turned their attention from visible street crime to the less visible white-collar crimes, including money-laundering, tax evasion, fraud and cybercrime. Some researchers are using these technologies to draw together demographic, census and other social data to determine what factors are most likely to induce someone to commit crime. The answers to such questions are expected to make possible early, large-scale interventions where communities and/or individuals are at risk.
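
To make the "hot spot" idea concrete, the following is a minimal sketch, assuming a simple grid-based kernel density estimate over historical incident coordinates; it is written in Python with invented data, and deployed systems are of course far more elaborate:

  import numpy as np

  def hotspot_scores(incidents, grid_x, grid_y, bandwidth=0.5):
      """Score each grid cell by a Gaussian kernel density over past incidents."""
      xx, yy = np.meshgrid(grid_x, grid_y)                    # cell centres
      cells = np.stack([xx.ravel(), yy.ravel()], axis=1)
      # Squared distance from every cell centre to every past incident
      d2 = ((cells[:, None, :] - incidents[None, :, :]) ** 2).sum(axis=2)
      # Sum of Gaussian kernels: nearby incidents contribute the most
      scores = np.exp(-d2 / (2 * bandwidth ** 2)).sum(axis=1)
      return scores.reshape(xx.shape)

  # Toy usage: ten past incidents clustered near (1, 1) on a small grid
  rng = np.random.default_rng(0)
  incidents = rng.normal(loc=1.0, scale=0.3, size=(10, 2))
  grid = np.linspace(0.0, 3.0, 7)
  scores = hotspot_scores(incidents, grid, grid)
  print(np.unravel_index(scores.argmax(), scores.shape))      # highest-risk cell

The bandwidth parameter controls how far an incident’s influence spreads; chosen badly, it either smears risk across the whole map or pins it to single addresses, which is one reason the increasingly localised hot spots mentioned above are not an unalloyed good.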

Predictive policing applications must have measurable success factors. Typically, this is a matter of rising or falling reports of crime, but this is an unstable metric. At its heart is a mere correlation, which does not prove a causal link between the application and the number of reports. A decline in reported crime might have come about through using the application, but it might equally be the result of demographic changes. It is also possible that reliance on the application has reduced the efficacy of police responses to the point that many people no longer bother reporting crimes, knowing that the reports will not be acted upon. Equally, some applications have reportedly helped the police determine which crimes are worth a response: in some areas, thanks to local press reporting, it is widely known that burglaries will usually not merit a police response, and so actual burglaries have increased in number while reported burglaries have declined. On the other hand, the applications may be so successful that police are effectively anticipating crimes and arriving in time to deter the potential criminal from carrying out his or her plans. This is plausible given efforts to streamline the online reporting process, itself aided by data analytics and AI, allowing victims and others to report crimes smoothly and quickly.
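
To see why the metric is unstable, consider a worked example with invented numbers. Suppose reported crime in a pilot district falls from 1,000 to 900 incidents after deployment, but also falls from 1,000 to 910 in comparable districts without the system. A difference-in-differences estimate attributes only a small fraction of the decline to the application:

  \[
  (900 - 1000) - (910 - 1000) = -100 - (-90) = -10,
  \]

that is, roughly a 1% attributable decline rather than the 10% a naive before/after comparison would suggest, and even this assumes the comparison districts are genuinely comparable.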

While some of the public feared a move to “Minority Report” policing, in which a computer informs police who is about to commit a crime and that person is arrested moments before the act, this has not happened. Indeed, the police are adamant that any computer prediction regarding likely crime hot spots or offenders is fed as information to a team of analysts, who combine it with other information before advising patrols. This prevents policing by algorithm from becoming the norm. However, cuts in police funding have reduced the number of available analysts, and those who remain have noticed that the number of false positives (indications that a crime will occur in an area where no crime takes place) falls with each year; they worry for the future of their jobs. In 2020, for instance, only one information analyst was working for the LA Police Department. Furthermore, budget cuts have pushed many officers with good local knowledge into early retirement. New officers, lacking this knowledge, are content to rely upon the predictive policing system. This has led to fears of automation bias, in which officers trust the system despite evidence to the contrary and despite the training, introduced in 2020, intended to rectify this. Nonetheless, there remains a tension over how best to act when the system recommends one course of action and the officer disagrees, leading some to complain that they are being treated as robots.

International comparisons do not end with the numbers of analysts. Many cities in the United States have been aggressive in pursuing predictive policing, particularly after funding was increased shortly after Trump was re-elected in November 2020. Incarcerations have increased, but there is no sign of a change in the demographic composition of the prison population, which remains overwhelmingly African-American. China has also been aggressive in developing predictive technologies following the widespread integration of the Social Credit System, which incorporates all data on a person, including bank records, medical records and educational attainments. Facial recognition on CCTV is now standard in most Chinese cities, although the Chinese authorities pay insufficient attention to the problem of false positives. The general approach is one of “better safe than sorry”, leading again to a suspected (albeit unreported) rise in the prison population. Owing to Chinese information-sharing protocols, it is also not certain what the ethnic composition of that population looks like, but there are reports that some communities, such as the Uighurs, have been all but decimated in recent years as members are arrested on the basis of a likelihood of committing a crime. Finally, efforts at introducing predictive policing in some South American cities, such as Bogotá in Colombia, have exacerbated perceived biases, as the focus remains on preventing crimes against the wealthy while police ignore victims from less affluent areas.

Figure 11 CCTV – Image Credits: Niv Singer, Flickr (CC BY-SA 2.0)

Europe has been slower than China and the US in adopting predictive policing technologies, partly owing to human rights frameworks such as the EU Charter of Fundamental Rights and the European Convention on Human Rights, both of which are reinforced by laws such as the General Data Protection Regulation, still seen as an effective means of regulating the use of personal data across society. The so-called EU Police Directive, however, gives LEAs more flexibility in processing personal data. Within this regulatory framework, the Horizon Europe research programme, begun in 2021, boosted funding for counter-terrorism and cybersecurity research.

While there has been investment in police use of these technologies, criminals have not been idle. LEA cyber sleuths have uncovered applications used by criminal gangs to predict where the police will be at any time of the day or night, often drawing on the same data sets used by the police, made public in the name of transparency and democratic accountability. Other gangs have been found on the dark net offering significant sums to hackers who can reverse-engineer police systems and reveal which parameters are used to predict crimes, so that the gangs can better avoid detection.

The police find themselves caught between a rock and a hard place. The press is critical of any reports of rising crime and cynical of reports to the contrary. The police do not need to be reminded of their duty to do all that is reasonable to prevent crime, but the debate within society as to what is reasonable, including which databases can be routinely accessed, rages on with an apparently fickle public opinion swinging wildly in polls depending on the latest scandal.

Drivers

Various drivers have impelled the development of technologies used in predictive policing in 2025, among which are:

Resources

Ever tighter squeezes on funding have led to a decline in the number of officers over the past decade, while investment in technology has increased. AI is often treated by politicians as a panacea for limited public funds. There is some dissension in the ranks, as many officers can see that while police budgets are shrinking, the technology firms developing AI applications seem to be thriving. If police budgets for human resources have been declining, the quantity and quality of data processed by the police have not. In fact, there is now so much data available from so many different sources that the police would be overwhelmed by it all were it not for artificial intelligence.

Public perception

Given the increased data available, there is a concern that the police will miss intervening in cases where they had the relevant information in advance but did not process it in time. This is widely seen as a dereliction of duty that no Chief Constable wants to see on her watch. The public view of the police is ambivalent at best, and there is a high level of expectation of the police and their use of technology. After all, if a member of the public can prove that his phone is in his neighbour’s house by using a tracking app like Prey, he wonders what is to stop the police from entering the house and retrieving it. His reasoning leads him to conclude that the police are either unwilling to help him or hopelessly out of date.

International and technological factors

As noted above, Europe has been less aggressive in employing predictive technologies than other countries, notably the US and China, which have considerable resources and public support to invest in them. Many European data scientists have already migrated to one of these countries to work on systems that receive minimal attention from European politicians. These data scientists opine that we must follow where technology leads. This drain of talent, coupled with the mixed results of the Horizon Europe research projects, has led some European police forces to buy technologies from US and Chinese companies, although they are uncomfortable with the fact that these were likely developed in a manner not consistent with European law. Furthermore, there is the ever-present fear that US or Chinese intelligence agencies will infiltrate these systems through backdoors to spy on their European counterparts.

The last few years have seen a remarkable proliferation of AI ethics frameworks. While these may not actually improve practice – because they are naïve, weak, compatible with authoritarian practice, or used merely as fig-leaves – they have nevertheless served as a driver: in response, some police forces are investing in data scientists, while others are developing their own predictive technologies in-house.

Barriers and Inhibitors

While there have been several drivers pushing the development of predictive policing technologies towards their current state in 2025, this development has not always been straightforward. There have been hurdles that have impeded progress, including:

Social factors

Media coverage of the increasing use of technology was rarely positive and, while the intended target was often politicians, it was the police who suffered from the adverse coverage. In particular, the press noted the lack of change in the demographics of those arrested and imprisoned. While some had argued that a turn to computerisation in detecting and predicting crime would lead to greater objectivity, this appears not to have been the case.

Even where the predictive capacities of the applications have been more effective, they have been met by the equal capacities of criminals, who have been able to emulate the predictive tools and hack into them directly. This has become part of a continuing escalation in which police and criminals each try to stay one step ahead of the other. Most applications are in a constant phase of beta-testing: by the time they are sufficiently stable to be rolled out on a wide basis, their method has been cracked and they are no longer as effective.

There has been some marked resistance to change from within the police forces themselves. Some of this has been resolved through generational change as the post-millennial generation who grew up on smart phones have come of age and started to enter the workplace, but some resistance remains.

Other factors have been disrupting LEAs. Some LEAs have lost a quarter of their staff through retirement in the last five years. Such big losses have prompted senior officers to consider more carefully the workforce they want for the technological challenges of the 21st century.

Economic factors

Resources have been a driving factor in the development of predictive applications but, paradoxically, they have also held back some aspects of development. There has been a chronic shortage of computer scientists to develop the tools, and of analysts with the skills to use those tools effectively. This is largely due to the inability of the public services to compete with private organisations, especially those working in similar areas of technology in other countries. Limited funding has also led to less reliable datasets and tools than would be ideal, with the result that their accuracy and efficiency sometimes leave a lot to be desired. Despite this, for some, an 80% conviction rate is good enough, and many are becoming increasingly over-reliant on the systems, which has created a positive (although not a virtuous) feedback loop.

Even if they are convinced of the efficacy of AI-supported predictive policing, a major inhibitor for LEAs is finding data analysts and scientists. The big five technology companies are scooping up much of the available talent. Some LEAs are trying to overcome these professional shortages by partnering with universities and taking on PhD students as interns. The EU and Member States are well aware of the shortage of talent and, as a consequence, some Member States have established national AI programmes aimed at cultivating data analysts and scientists.

Political factors

The lack of funding is due to continued attempts to rein in public spending in the post-2008 world. Some politicians worry about the press drubbing them and the police for arresting people for crimes they haven’t committed yet. Some sceptics criticise the lack of effective and convincing metrics demonstrating the success of the technologies.

Legal and regulatory factors

To ensure police accountability in the use of data analytics and big databases, parliaments adopted laws and regulations that, among other things, made explainability the default mode for algorithms. Politicians had to balance concerns about individual privacy and data protection against the efficacy of police operations. The police were concerned that excessive transparency would give criminals better insight into police methods and, as it turned out, those concerns were justified. Consequently, a committee of the European Parliament has been investigating whether algorithms developed for or used by LEAs should be held to the same explainability standards as others, given that organised crime benefits from even the tiniest scrap of information.

One solution to the stricter regulations imposed by Brussels and national governments on artificial intelligence has been the outsourcing of some technologies to private companies. Without incentives, these companies complied only with the minimum requirements of the law, to the chagrin of many LEAs, who felt these companies should be doing more to help them in the fight against organised crime. The press saw this outsourcing as blurring the borders between policing and the corporate world even more than was already the case in the early 21st century.

Ethical, legal, social and economic impacts

In 2025, the benefits of predictive policing technologies are starting to be felt, even though there is still considerable public discussion as to whether these are strictly attributable to the technologies or other factors. Nonetheless, their use has been part of a marked shift in society as noted below:

Ethical impacts

Older police officers resent the tighter constraints on their actions compared to when they started their careers. They feel that the so-called “smart” information systems telling them where to go and what to do are undermining their own skills, experience and talents in responding to crime. Older officers also seem not to recognise how organised crime has shifted away from street crime towards higher-value crime such as money-laundering and cybercrime. At the same time, there is clearly greater accountability and transparency in policing, as bodycams record every move of every officer and individual officers are frequently held to account over why they did or did not intervene in a particular situation.

Figure 12 Police investigations

Civil society organisations protest that predictive policing technologies are an affront to Europeans’ fundamental rights. There is much debate, within police ranks and beyond, about whether a police officer who responds to an algorithm with 80% predictive accuracy infringes a person’s civil rights by treating him as a suspect on the basis of a statistical calculation rather than anything he has done to warrant suspicion. At the same time, if she fails to act on the prediction, is she thereby failing to uphold the civil rights of potential victims? She has no misgivings: her system justifies her suspicions because the suspect has committed crimes previously.

This fallaciously assumes that statistical calculations do not apply to things people have done. Whether or not something warrants suspicion depends on how strongly it is correlated (or causally linked) with the commission of a specific crime and, at the same time, how little it is correlated with innocuous behaviour. Most if not all current predictive techniques rely very heavily on crime and police data (e.g. arrest records), which are precisely records of suspicious things people have done. Big data in policing is still in its infancy. One upshot of the current reliance on police data is that those with a profile in a police database are much more likely (perhaps even the only ones) to be identified as a future threat. This creates a ratchet effect for those in the system. It also means predictive techniques cannot detect first-time offenders, which makes people with no record easy targets for exploitation by criminals. It is also bad for victims of domestic abuse homicide, whose perpetrators often have no record.
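
The arithmetic of false positives sharpens the point about the “80% predictive accuracy” debated above. A worked example with invented but plausible numbers: suppose 1% of a screened population will actually offend, and the system flags offenders with 80% sensitivity and clears non-offenders with 80% specificity. Bayes’ theorem then gives

  \[
  P(\text{offender} \mid \text{flagged})
    = \frac{0.8 \times 0.01}{0.8 \times 0.01 + 0.2 \times 0.99}
    = \frac{0.008}{0.206} \approx 0.04,
  \]

so roughly 24 out of every 25 people flagged would be innocent. An “80% accurate” system applied to a low-base-rate crime is therefore nothing like 80% reliable about any individual suspect.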

More positively, individuals were already being stopped, searched and arrested, sometimes for spurious reasons, before the implementation of predictive technologies, and the aforementioned increase in accountability has shed light on discriminatory stop-and-search practices. Overall, predictive policing technologies have reduced some discriminatory practices and embedded others, such as algorithms that focus more on street crime than on corporate malfeasance.

The public discussion that accompanied the widespread introduction of these technologies helped ensure that the explainability regulations in Europe were fair, ethical and sensitive to privacy concerns. Public pressures led to the establishment of independent oversight bodies in the Member States to monitor police use of smart information systems.

While media attention has focused on the police use of predictive applications, some members of the fourth estate have focused on corporate responsibility. Since social media giants collect reams of data, they are frequently able to identify child sex offenders or people involved in domestic abuse. However, this information is rarely turned over to the police. Questions are being asked in national legislatures about the social responsibility of these organisations.

Ethical issues have risen high on policy agendas within LEAs themselves as well as in their oversight bodies. LEAs recognise that to improve trust with the public, they need to be more transparent about their priorities and how they operate. Similarly, progressive LEAs expect the AI systems they use to be explainable and not simply black boxes. In other words, the AI systems used by LEAs should be capable of interrogation, should explain their purpose, and should indicate whom to contact for more information.

Legal impacts

A key problem for legal and regulatory frameworks in keeping up with technological development is that policymakers and lawmakers often do not understand the technologies. Technology develops faster than laws can be passed, impeded by the time lawmakers need to understand recent developments and by the subsequent legislative process. The GDPR, which came into effect in 2018, remains generally fit for purpose regarding personal data, but with the aggregation of databases it is increasingly rare to find data that cannot, in some context or manner, be used to identify a living person. The most applicable legislation for LEAs remains the Police Directive, under which LEAs need not seek informed consent when investigating persons of interest. With so many AI-powered applications available online, prohibitions against automated decision-making affecting the rights of data subjects have become impossible to enforce except in a few high-profile cases, such as those against Google and Facebook in 2020-21. That so many enterprises see that some provisions of the GDPR cannot be enforced has had the predictable consequence of diminishing trust in the law, even among law-abiding companies and citizens.

Social impacts

Criminals seek advantage over LEAs by exploiting new technologies before the police are able to put counter-measures in place. The nature of crime is changing: the police have been shifting their focus from street crime, which is particularly subject to some of the blunter forms of predictive policing technology, to organised crime and white-collar crimes, including money-laundering, fraud, online scams and hacking.

While organised crime gangs are aware of predictive policing technologies (which receive a lot of attention in the newspapers), the public generally has a low understanding of such technologies and their possible negative impacts. Bombarded with so much information (and disinformation) about new technologies, the public has become jaded; the powers of new technologies have ceased to spark wonder. The majority accept these measures as just part of the cost of living. The public has already learned to cope with the substantial levels of surveillance in society – on the streets and in cyberspace. Some people claim that they have altered their behaviour to appear as conformist as possible since, these days, they do not know what will land them in some police database. Better to play it safe.

Security and economic impacts

We have already noted the savage cuts in police budgets; also of note is the shift in budgetary priorities from police officers to data analysts. As the number of officers falls, reliance on AI grows; as reliance on AI grows, the same (or at least similar) work is apparently achieved with fewer officers, and so funding declines further. One solution has been to outsource certain tasks, such as facial recognition, to the private sector, as the US has done for several years. However, outsourcing has largely been discredited.

Mitigating the negative and acting on the positive impacts

For some people, predictive policing was an easy sell. While civil liberty organisations still complain about bias in algorithms, the public are wary – neither trusting nor distrusting, but conscious that crime rose several years in a row as police numbers were cut back. Predictive policing was touted as the artificial intelligence that would make huge cuts in crime – which, of course, has not happened, as organised crime gangs have upped their game too.

Politicians, recognising the need to boost their trust with the public, agreed to adopt a new regulation making algorithms explainable to the public. Each algorithm was to include a bit of code saying who created the algorithm, who paid for it, its purpose, website and contact for more information. This dispelled concerns about the police wanting to keep their black boxes black, as it were, but led criminals to a better understanding of police methods and tactics and to a spate of hacking attacks on police systems. Meanwhile, some “grey hat” hackers attempted to improve the algorithms to help eliminate bias.
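
A minimal sketch of what such a self-describing record might look like, assuming a Python deployment; the fields follow the regulation as described above, and all example values are invented:

  from dataclasses import dataclass, asdict
  import json

  @dataclass(frozen=True)
  class AlgorithmProvenance:
      """Machine-readable provenance record bundled with a deployed model."""
      creator: str   # who created the algorithm
      funder: str    # who paid for it
      purpose: str   # what it is intended to predict
      website: str   # where public documentation lives
      contact: str   # whom to ask for more information

  # Invented example record, for illustration only
  record = AlgorithmProvenance(
      creator="Example Analytics Ltd.",
      funder="Ministry of the Interior",
      purpose="Burglary hot-spot forecasting for patrol allocation",
      website="https://example.org/hotspot-model",
      contact="oversight@example.org",
  )
  print(json.dumps(asdict(record), indent=2))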

A significant factor in gaining public acceptance was the establishment of trusted independent national bodies to oversee police use of algorithms in predictive technologies. Adequately funded (for a change!) and staffed with known and respected figures such as Baroness Lawrence in the UK, these independent bodies helped to build trust in the police system. These bodies looked at not only the algorithms themselves, but all aspects of police use of data. They considered what data was collected, the purpose of its collection, how the data were processed and stored, and its eventual usage (including secondary use).

The findings of these bodies were, in the early days, significant in developing crucial training programmes for the police about the new technologies and their limitations. Alongside training that alerts new police officers to automation bias, regulations in 2025 spelled out what the police were permitted to do with data. Politicians and senior police officials communicated these rules effectively to the public and hosted regular stakeholder engagement meetings to ascertain public concerns. Local police forces have also been hosting meetings with residents and community leaders to explain their use of new predictive policing technologies, how these technologies were vital in offsetting the cuts in police staff numbers and, importantly, how accurate the algorithms were in predicting criminal acts.

Steps towards a desired future and avoidance of an undesired future

Civil society organisations, late-night talk-show hosts and some editorial writers articulated their fears that the new predictive policing technologies would yield many false positives: that perfectly innocent citizens could be victimised by the new technologies and placed on a police register without knowing why. There were worries about positive feedback loops in particular locales targeted for attention, leading to a greater number of arrests in those areas, and in turn to algorithms predicting that these were the areas on which the police should concentrate (a toy simulation of the effect appears below). Had there been blind trust in the efficacy of the algorithms, this might well have been the case; fortunately, the concern had been raised so many times that the police and their algorithm developers were on guard for such phenomena.
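
The feared feedback loop is easy to reproduce in a toy simulation, assuming patrols go to the area with the most recorded arrests and that extra patrols mechanically produce more recorded arrests there, even though underlying offending is identical everywhere (all numbers invented):

  import numpy as np

  rng = np.random.default_rng(1)
  true_rate = np.full(5, 10.0)                      # identical offending in 5 areas
  recorded = rng.poisson(true_rate).astype(float)   # initial arrest records

  for year in range(10):
      target = recorded.argmax()        # algorithm: patrol the "hottest" area
      detection = np.full(5, 0.2)       # baseline probability a crime is recorded
      detection[target] = 0.6           # extra patrols record more crimes there
      recorded += rng.poisson(true_rate * detection)

  # Arrest records diverge even though true offending never did
  print(recorded.round(1))

Guarding against the effect means either randomising some patrol allocation or correcting recorded counts for patrol intensity, both of which the oversight bodies described above were well placed to demand.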

By addressing these concerns directly, instituting transparency measures and empowering oversight bodies, the police increased public trust and strengthened social cohesion. Predictive policing technologies helped the police to focus on areas of crime that were previously invisible; data analysts uncovered these areas by training their predictive policing algorithms on masses of information from disparate sources. This allowed the police to put more effort into tackling white-collar crime and online hate crime, which in turn has had a ripple effect on international crimes such as people trafficking and drug smuggling. In fighting such crimes, the police noticed positive effects in communities that were otherwise subject to the attention of such smugglers. Overall, predictive policing has led to a decline in crime. Criminals and their would-be accomplices now recognise that if they commit a crime, the likelihood of getting caught is higher than ever, even though there are lingering worries about the inevitability of at least some false positives, which could lead to the imprisonment of innocent people.

The police also appreciated the new technologies as they found that the effective intelligence led to their approaching volatile situations with an enhanced awareness of how those situations were likely to play out. These days, it’s rarely the case that a police officer finds himself unexpectedly in the middle of a riot and fearing for his life.

Predictive policing technologies have placed particular emphasis on the prevention of crimes – not only minutes or hours in advance, but also by addressing the factors that lead to criminality. The initial emphasis on street crime led to an outcry from CSOs, the media and citizens that such technologies were ignoring corporate crime, which has a much bigger impact on society as a whole. Always loving a challenge, data scientists recently developed new smart information systems that are expected to significantly enhance the detection of corporate crime and questionable practices. These new technologies are bringing ethicists and data scientists together, which is expected to greatly benefit European competitiveness.

Recommendations for a desired future and avoiding an undesired future

From the above steps, we extract the following key recommendations to reach a desired future and avoid an undesired future:

  • There should be clear and transparent criteria governing which personal data may be entered into law enforcement databases.
  • Member States should have or establish an independent authority of sufficient size and clout to monitor the data in and use of law enforcement databases and offer commendations or impose penalties where appropriate.
  • Measures in preventive policing and community investment should supplement developments in predictive policing.
  • Law enforcement authorities should have a balanced approach to local, white-collar and online hate crimes and should not unduly emphasise street crime prevention at the expense of curtailing white-collar crime, for example.
  • LEAs should offer more (effective) training of police officers and database operators as to the limitations of data analysis, particularly concerning rates of false positives.
  • The EU should sponsor research on automatically detecting when an attack is being planned and discussed on criminal forums, and on predicting future threats (a minimal sketch of such a detector follows this list).
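
To make the final recommendation concrete, here is a minimal sketch of the kind of text classifier such research might start from, assuming a Python stack with scikit-learn; the example posts and labels are entirely invented, and a real system would need far richer features, multilingual data and careful control of false-positive rates:

  from sklearn.feature_extraction.text import TfidfVectorizer
  from sklearn.linear_model import LogisticRegression
  from sklearn.pipeline import make_pipeline

  # Invented toy training data: 1 = planning-related, 0 = benign
  train_posts = [
      "meet at the depot friday, bring the tools, keep phones off",
      "anyone selling cheap graphics cards?",
      "we hit the warehouse after the guard change",
      "great tutorial on setting up a home server",
  ]
  labels = [1, 0, 1, 0]

  # Bag-of-words features (with bigrams) feeding a logistic regression
  model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
  model.fit(train_posts, labels)

  new_post = ["guard change is at ten, we go in after"]
  print(model.predict_proba(new_post)[0, 1])   # estimated planning probability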
