Thursday, December 12, 2013

The Thought Police.


Courtesy C.I.A.

by Louis Shalako





Thought police may not be too far off in the future, and, oddly enough, neither are time-cops. Read the following passage very carefully and you’ll see they use the term ‘future crime.’

(Cops are already solving crimes committed long in the past. They do it in the present moment, not by time-travel.)

“The National Institute of Justice defines predictive policing as ‘taking data from disparate sources, analyzing them and then using the results to anticipate, prevent and respond more effectively to future crime.’ Some of these disparate sources include crime maps, traffic camera data, other surveillance footage and social media network analysis. But at what point does the possibility of a crime require intervention? Should someone be punished for a crime they are likely to commit, based on these sources? Are police required to inform potential victims?* How far in advance can crimes be forecasted?”

They also mention ‘social media network analysis.’ (See: intelligence-gathering network.)

Preventive policing sort of ignores any presumption of privacy on the part of the individual.

There are those who will say, “Well, if you aren’t doing anything wrong, you have nothing to worry about.”

Let’s extend that.

“If you aren’t thinking anything wrong, then you have nothing to worry about…”

This is the door the thought police come in, isn’t it? They might even kick it in.

The right to privacy of our own thoughts is now open to question.

The future is already here, for we have had instances of crime prevention when cops get a tip that someone is making threats against someone else over social media. If an arrest is made, a future crime may well have been prevented.

But in the broader sense of the article, preventive policing takes a lot of numbers from a lot of places.

It assigns weights or values to each factor that goes into any person’s make-up at any given time.

Over the course of our lives, our circumstances change, and so would our ‘personal algorithm.’

The risk factors change, and at some point in our lives we may reach a low point. This can be measured against a previous high point; a threshold of danger or risk may be crossed, and a little bell goes off down at police headquarters.

If our subject, a guy called Edwin living in Lincoln, Nebraska, has a personal algorithm based on all the data that can be gathered about him, by monitoring his social interactions, running biometric recognition and mood analysis on gas station security camera footage, tracking his shopping habits, recognizing his license plate at stop-light intersections, and performing semantic and key-word analysis of his postings on Facebook, then the thought police might very easily determine that Edwin is ‘at risk’ of offending against municipal, state, or federal laws.
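
Purely as a sketch, such a ‘personal algorithm’ might boil down to nothing more than a weighted score compared against a threshold. Everything below, the feature names, the weights, the cut-off, is invented for illustration; no real system is being described:

# A toy 'personal algorithm': a weighted risk score compared to a threshold.
# All feature names, weights, and the threshold are invented for illustration.
RISK_WEIGHTS = {
    "prior_convictions": 0.30,        # court records
    "alcohol_purchases": 0.15,        # shopping habits
    "hostile_keywords": 0.25,         # semantic analysis of postings
    "negative_mood_score": 0.10,      # camera-based mood analysis
    "recent_vehicle_purchase": 0.20,  # registration and insurance data
}
ALERT_THRESHOLD = 0.6  # arbitrary cut-off; see the 'filter' question below
def risk_score(features):
    """Combine normalized feature values (0.0 to 1.0) into one score."""
    return sum(RISK_WEIGHTS[name] * value
               for name, value in features.items()
               if name in RISK_WEIGHTS)
edwin = {
    "prior_convictions": 0.5,
    "alcohol_purchases": 0.9,
    "hostile_keywords": 0.7,
    "negative_mood_score": 0.6,
    "recent_vehicle_purchase": 1.0,
}
if risk_score(edwin) >= ALERT_THRESHOLD:  # 0.72 for this Edwin
    print("A little bell goes off down at police headquarters.")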

Every little thing Edwin says is being taken down so that it can be used against him, but the cops are just doing their jobs, right?

They may decide on an intervention. They may wish to prevent him from assaulting his ex-girlfriend, from committing suicide, from robbing a bank, from starting up a meth lab, or from violating any other recognizable statute.

What if Edwin has a history of alcoholism and the cops are notified that he just bought and insured a vehicle? 

Maybe he’s been seen at a gas station, not too far from the liquor store?

Maybe they should put a car nearby and take a look at Edwin?

A lot of nice, well-meaning, thoughtful people would even applaud that. They might stop Edwin from going head-on into a minivan with a mother and four children in it later that night.

Sounds like a good idea, right?

Unfortunately, he hasn’t actually done anything yet. He’s merely ‘at risk’ and arguably others are at risk from Edwin—in the future. Maybe. Maybe even most likely.

The legislation which enables preventive policing has carefully written clauses regarding how an offender poses a ‘public or private menace,’ or whatever.

What are you going to do with Edwin?

Are you going to sentence him to thirty days in the county bucket?

Are you going to stick him in with other offenders of a more serious nature? Is his cell-mate a member of a drug-running bike gang? Is he a thief, a con-artist, does he grow dope, does he run illegal aliens over the border?

Edwin will be exposed to more criminality. Jail has been called a university of crime.

Will you take Edwin to the hospital for a period of observation?

Will a court order him to attend a psychiatric or other program, one designed to help at-risk future offenders work through their issues and move on with their lives in a more positive direction?

How are you going to pay for all of that?

And how is Edwin going to like being grabbed, losing his job, consequently losing his home, and ending up on the street because someone decided that he was a risk? Even though he never actually did anything? 

Except be an alcoholic, buy a car and get some gas, bearing in mind that he’s upset with his ex-girlfriend?

If he gets desperate enough, out there on the street, he might just remember that cell-mate who promised to set him onto something good, some easy-money kind of operation. Edwin might not have much going for him to begin with, and so he might just look his new friend up.

What’s really terrifying is the combination of privatized prisons, shrinking state budgets, and the need to keep all those beds filled in a private jail so that profits keep flowing to shareholders. There have already been abuses.

Throw mandatory-sentencing legislation into the mix and some robot guards, and you have a potent brew.

That’s because we have different levels of crime, and therefore we must have different levels of future crime. 

The corollary of this would be different levels of punishment.

The lowest level is simple larceny—and stealing someone’s lawn mower is somehow seen as less serious than sticking up a gas station attendant with a shot-gun in his face and running off with the proceeds.

Higher levels of crime (and punishment) involve assault and murder, and then there is the whole range of crime: prostitution, domestic abuse, kidnapping, extortion, counterfeiting. The whole list.

Here’s where Edwin’s personal algorithm comes into play again.

If Edwin’s prior history includes assault, and maybe he got picked up with a weapon when he was prohibited from owning one, and maybe he’s been convicted once or twice for little things, then the charge of the possible future crime he is being accused of being potentially capable of maybe committing someday becomes more serious.
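
Continuing the invented sketch from above, prior history might simply act as a multiplier on the base score. The multipliers, like everything else in the sketch, are made up:

# Continuing the invented sketch: priors escalate the hypothetical score.
PRIOR_MULTIPLIERS = {
    "assault_conviction": 1.5,
    "weapons_violation": 1.4,
    "minor_conviction": 1.1,  # applied once per prior minor conviction
}
def escalated_score(base_score, priors):
    """Scale a base risk score by each prior on the record."""
    score = base_score
    for prior in priors:
        score *= PRIOR_MULTIPLIERS.get(prior, 1.0)
    return score
# Edwin's base score of 0.72 becomes roughly 1.83 with his record.
print(escalated_score(0.72, ["assault_conviction", "weapons_violation",
                             "minor_conviction", "minor_conviction"]))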

A conviction would lead to a more serious sentence, wouldn’t it, or at least shouldn’t it? By any rational measure…?

And a simple psychiatric intervention would involve a longer period of observation, wouldn’t it, if the signs were serious enough, if the risk to some other person was considered great enough, and if Edwin, under further examination, did not prove amenable to suggestion or did not sort of give all the right answers.

Who is going to pay for all the extra beds in local hospitals? Or special wards in local jails?

The goal of predictive policing is of course to prevent Columbine-style massacres and terrorist attacks, but it involves monitoring and profiling an entire population of individuals at all times.

Where do you set the filter? In other words, when do you cut it off as not serious enough and just ignore it?

And wouldn’t that cut-off itself be abused, in a particularly bigoted jurisdiction, to take all the wrong sort of people off the street, so nice people could ‘feel safe’ in their own neighbourhoods, or maybe to take one racial group off the streets so they would no longer compete for unskilled jobs with poor folks of the dominant race?

That’s already being done now, isn’t it, in some jurisdictions?

In my opinion the best cops have no hate in them, no bigotry, no prejudice. But there’s nothing to stop a bigot from joining the force and working his way up in it.

There's nothing to stop a bigot from running for sheriff or being elected governor, or even president.

No one has ever successfully managed to legislate for enlightenment, but then, no one has ever successfully legislated against prejudice and bigotry, either.

***

Preventive policing might even work, in that you would get arrests, and in the case of would-be terrorists, you might even find a truckload of explosives all ready to go, and a group or individual all set to carry out some plan.

In that sense, it would have been a success. That success would get high praise in the media.

Google has launched semantic search,** and I just read that Facebook*** is doing heavy research into artificial intelligence, using, quite frankly, the vast quantities of data they have gathered from us.

Semantics is the analysis of meaning, and artificial intelligence would use semantics to determine it. With the whole world now wired together through a number of networks, our phones, our devices, our computers, automobile navigation systems, surveillance cameras, and a whole host of other sources of information, artificial intelligence will be used in preventive policing, because it would take an almost infinite amount of manpower just to crunch the numbers and interpret the data.

Preventive policing requires software and computer time, lots of it.
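
As a crude illustration of the key-word side of this, and it is only an illustration, with an invented word list and invented weights, the simplest possible version is plain string matching:

# Naive key-word flagging; real semantic analysis would go far beyond this.
FLAGGED_TERMS = {  # invented terms and weights, for illustration only
    "furious": 0.2,
    "get even": 0.5,
    "she'll be sorry": 0.8,
}
def hostile_keyword_score(post):
    """Return a 0.0 to 1.0 'hostility' value from simple string matching."""
    text = post.lower()
    score = sum(weight for term, weight in FLAGGED_TERMS.items()
                if term in text)
    return min(score, 1.0)
print(hostile_keyword_score("I'm furious with her. She'll be sorry."))  # 1.0

Real semantic analysis would be far more sophisticated than this, but even the naive version makes the point: the crunching would be done by machines, not by manpower.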

While a light might come on or a buzzer might sound on some police dispatcher’s board somewhere when Edwin trips the threshold on his own personal algorithm, it is not quite clear whether the local police would consider it a high priority.

As long as people still had rights, they could always get a lawyer after some period of incarceration, or ‘observation,’ or even ‘treatment,’ and come back with a successful suit in a court of law.

My personal opinion is that the drafters of the enabling legislation would have thought of that too—and done whatever was necessary to insulate the authorities from excessive responsibility for any mistakes that are made, or for the inevitable civil and human rights violations that will surely occur.

But when you realize that most at-risk people really don’t have the resources to defend themselves in the first place, nor the resources to come back later, nor even to appeal ‘a wrongful conviction’ while they sit in a jail and rot—how in the hell that would ever be proven is also a good question—then a vast prison population composed of ‘at-risk’ individuals like Edwin doesn’t seem all that far-fetched.

It is almost a law of technology that all really revolutionary technologies bring disruption; they cause great and often unforeseen changes in the social context.

The infrastructure is already in place. It’s just a matter of time before this happens to some extent.


END

*Are police required to notify future victims?

What about potential future perps? Would a record of warnings or tickets be kept, and of course wouldn’t that also bear on the future outcome of a charge of ‘being at risk of committing a future breach of statute law?’

We got us a real can of worms here, ladies and gentlemen.

**Semantic search tries to predict the subject’s intentions, which of course has wider applications.

***Artificial intelligence would be used to draw conclusions based upon semantics, which may be defined as meaning, and multiple layers of deeper meaning. In the psychological sense, social theory would be used to define ‘at-risk’ indicative factors in any one person’s algorithm based upon past statistical analyses of individuals within social groups.

When these theories are based on both statistics and bigotry (‘poverty breeds crime,’ for example), the possibility of abuse arises.



2 comments:

  1. Our current thought police in the USA are the journalists who punish those who say something that is not "politically correct"--especially if you are not in a minority group, whose members seem to be allowed to say anything with impunity.

  2. Never apologize, and never admit wrongdoing!

