The Fringe | Conspiracy, News, Politics, and Fun Forum!

Full Version: Predictive Crime Algorithms

In a city with a long history of law-enforcement friction, activists and the Los Angeles Police Department are squaring off again. The latest crime-fighting controversy isn't over issues like police brutality, corruption or gangs — it's all about data, and the ways police are using it.
Activists at a public meeting with the Los Angeles Police Commission this summer held up signs reading: "Data Driven Evidence Based Policing = Pseudoscience," and "Crime Data is Racist." It's an example of how the community has been put on edge by the LAPD's use of an elaborate data collection centre, a shadowy data analysis firm called Palantir, and predictive algorithms to try to get a jump on crime.
"We're trying to get better about where to put scarce police resources to prevent crime from happening in the first place," says Deputy Chief Sean Malinowski, who was at the meeting to defend the use of data analytics to help guide policing activity.
"Implicit bias is something the LAPD has been struggling with," counters Jamie Garcia, of the Stop LAPD Spying Coalition, a tiny community group which monitors the LAPD.
"The data they're using is the data that's collected by law enforcement," Garcia adds, saying that as a result it's inherently flawed.

Los Angeles isn't the only place where concerns are flaring over how citizens' data is collected and used by law-enforcement authorities.
Police forces across the U.S. are increasingly adopting the same approach as the LAPD: employing sophisticated algorithms to predict crime in the hope they can prevent it. Chicago, New York City and Philadelphia use similar predictive programs and face similar questions from the communities they are policing, and even legal challenges over where the information is coming from and how police are using it.
Canadian police forces are very aware of what their U.S. counterparts are doing, but they are wary of jumping in with both feet due to concerns over civil liberties issues.

In May of 2010, prompted by a series of high-profile scandals, the mayor of New Orleans asked the US Department of Justice to investigate the city police department (NOPD). Ten months later, the DOJ offered its blistering analysis: during the period of its review from 2005 onwards, the NOPD had repeatedly violated constitutional and federal law.

It used excessive force, and disproportionately against black residents; targeted racial minorities, non-native English speakers, and LGBTQ individuals; and failed to address violence against women. The problems, said assistant attorney general Thomas Perez at the time, were “serious, wide-ranging, systemic and deeply rooted within the culture of the department.”
Despite the disturbing findings, the city entered a secret partnership only a year later with data-mining firm Palantir to deploy a predictive policing system. The system used historical data, including arrest records and electronic police reports, to forecast crime and help shape public safety strategies, according to company and city government materials. At no point did those materials suggest any effort to clean or amend the data to address the violations revealed by the DOJ. In all likelihood, the corrupted data was fed directly into the system, reinforcing the department’s discriminatory practices.

Predictive policing algorithms are becoming common practice in cities across the US. Though lack of transparency makes exact statistics hard to pin down, PredPol, a leading vendor, boasts that it helps “protect” 1 in 33 Americans. The software is often touted as a way to help thinly stretched police departments make more efficient, data-driven decisions.
But new research suggests it’s not just New Orleans that has trained these systems with “dirty data.” In a paper released today, to be published in the NYU Law Review, researchers at the AI Now Institute, a research center that studies the social impact of artificial intelligence, found the problem to be pervasive among the jurisdictions it studied. This has significant implications for the efficacy of predictive policing and other algorithms used in the criminal justice system.
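The "dirty data" problem the researchers describe is self-reinforcing: records shaped by biased enforcement steer where patrols go, and patrols generate new records in exactly those places. A toy Python sketch of that feedback loop (all numbers hypothetical, bearing no resemblance to any vendor's actual model) shows that an initial recording bias never washes out, even when two districts have identical true crime rates:

```python
# Toy simulation of a predictive-policing feedback loop.
# Hypothetical numbers only; this sketches the dynamic, not any real system.

def simulate(biased_counts, true_rates, patrols_per_round=100, rounds=10):
    """Allocate patrols in proportion to historical crime counts, then
    record new incidents in proportion to where patrols were sent."""
    counts = list(biased_counts)
    for _ in range(rounds):
        total = sum(counts)
        # Patrols follow the historical record, not the true crime rate.
        patrols = [patrols_per_round * c / total for c in counts]
        # Recorded crime depends on where police are looking.
        counts = [c + p * r for c, p, r in zip(counts, patrols, true_rates)]
    return counts

# Two districts with the SAME true crime rate, but District A starts
# with twice the recorded incidents due to past over-policing.
history = simulate(biased_counts=[200, 100], true_rates=[0.3, 0.3])
share_a = history[0] / sum(history)
print(f"District A's share of recorded crime after 10 rounds: {share_a:.0%}")
# -> 67%: the original 2-to-1 bias is locked in, despite equal true rates
```

The point is not that any deployed product works this simply, but that when allocation follows the record and the record follows allocation, the system can perpetuate whatever bias the historical data started with.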

Fourteen police forces across the UK are using algorithms to predict crimes, according to new research, and human rights campaigners are concerned that the “sinister” programs will lead to biased decisions and privacy breaches.
Data compiled by the campaign group Liberty through a series of freedom of information requests, published for the first time in a report on Monday, reveals that forces including Avon and Somerset, Greater Manchester and West Midlands are using software to predict where crimes might happen and even the likelihood of an individual committing a crime.
The campaigners warn that the widespread adoption of the technology is a serious threat to civil liberties. Instead of providing effective new insights to cut crime and make neighbourhoods safer, they worry, it will lead to more unfair targeting of already over-policed communities.
“Life-changing decisions are being made about us that are impossible to challenge, and even the police often don’t know how the machines make their predictions,” Hannah Couchman, policy and campaigns officer at Liberty, told BuzzFeed News.
In particular, Liberty is concerned that the algorithms will magnify a racial bias in British policing that has led to black citizens being three times more likely to be arrested than white ones. That, they say, will worsen relationships with minority groups that have historically been unfairly targeted.
The technology also raises serious privacy concerns at a time of growing public anxiety about “big data” being used to monitor behaviour, the campaigners say.

So apparently police are using software to predict crime before it happens, kind of like in that Tom Cruise movie Minority Report - S
What if police officers had the power to predict both the time and the exact location of a crime before it happened? Such knowledge would allow them to be on the scene ahead of time, stopping illegal incidents before they ever had the chance to play out.
Law enforcement officials don't yet possess these superhero-like abilities themselves, but police in Los Angeles, Calif. can now predict when and where crimes such as car theft and burglary will occur thanks to a computer program called PredPol.
Developed by a startup based in Santa Cruz, Calif., PredPol pinpoints the locations at which crimes are most likely to occur by gathering data and applying algorithms. Initially designed to predict earthquakes, the software has already been tested in Santa Cruz and Los Angeles.
Now, the Los Angeles Police Department has announced that it will deploy the predictive policing software on a much larger scale, using it to monitor at least six different precincts. The rollout follows initial tests in the city's Foothill precinct, after which police officers reported a 25 percent drop in reported burglaries in the neighborhoods to which the program directed them.
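The earthquake connection mentioned above is the heart of the method: the published research behind PredPol adapts a self-exciting point process from aftershock modelling (the ETAS family), in which each crime temporarily raises the expected rate of "near-repeat" crimes nearby before the effect decays. A minimal one-dimensional sketch of that idea, with made-up parameter values rather than anything from the actual product:

```python
import math

def conditional_intensity(t, past_events, mu=0.5, alpha=0.8, omega=1.0):
    """Self-exciting point-process rate: a constant background rate mu plus
    an exponentially decaying 'aftershock' boost from each past event.
    Toy parameters; real models add location and are fitted to data."""
    boost = sum(alpha * omega * math.exp(-omega * (t - ti))
                for ti in past_events if ti < t)
    return mu + boost

# A burglary on day 10 temporarily raises the predicted rate,
# which then decays back toward the background level.
events = [10.0]
for day in [10.5, 11.0, 13.0, 20.0]:
    print(f"day {day:>4}: predicted rate = {conditional_intensity(day, events):.3f}")
```

In the real models the background rate varies by location and every parameter is estimated from historical incident data, which is precisely why critics worry about what that data contains.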
See that ghetto corner? That sketchy dude is selling crack. He will do it tomorrow too. There, saved you millions of dollars in software design costs.
(02-21-2019, 05:51 PM)Mulciber Wrote: [ -> ]See that ghetto corner? That sketchy dude is selling crack. He will do it tomorrow too. There, saved you millions of dollars in software design costs.

and you don't think hackers can or could change or alter the info to skew the results toward a certain outcome? HMMMMMM,
A VERY wrong way to fight crime!
How about... Instead of a bajillion dollar system, we spend ALL that money actually putting beat cops on the actual street instead of carcoons? I guess humans are too expensive to employ because they need medical insurance and retirement funds. Plus you can't skew their data anytime you want.
The racial demographics of an area can fairly accurately predict the amount and the type of crimes.
(02-21-2019, 10:40 PM)LucyBarnable Wrote: [ -> ]The racial demographics of an area can fairly accurately predict the amount and the type of crimes.

And, sadly, income levels.
Just for fun...

There are officially 94,725 bank branch offices in the US.

The average bank robbery nets about $2,500 in cash. If every branch bank in the entire USA were robbed of $2,500 every year, the total cost would only be around $237 million.

You can be sure that any big rollout of some new toy is going to cost a billion dollars. It's really cheaper to just let the banks get robbed.
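For what it's worth, the back-of-envelope math above checks out, using the post's own figures:

```python
branches = 94_725   # bank branch offices in the US (figure from the post)
avg_take = 2_500    # average cash haul per robbery, in dollars

total = branches * avg_take
print(f"${total:,}")  # -> $236,812,500, i.e. roughly the $237 million quoted
```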
Oh my.