
Why Attempts To Control Opinion Cause Opinion To Diverge


Reporting from Mars, John Carter, in a wonderful essay discussing the Regime’s attempts at manipulation, says this:

People don’t like being gaslit. The psychological techniques used by salesmen, con men, and pickup artists are highly effective right up until the point at which the target becomes aware of the game. Awareness brings emotional blowback, making further manipulation effectively impossible because the target now regards every piece of information originating from the manipulator with hostile suspicion.

Now this is true, and most know how it applies in daily life. Interactions with salesmen, especially those selling sketchy products, start with disbelief because of this. With politicians, too.

What’s interesting is that this phenomenon can be understood in terms of probability. So can the next related idea, which will turn out to be important in government control of “misinformation” and “disinformation”.

Take some topic that is not being touted by propagandists, the government, universities or businesses in any large way, but about which there is not total ignorance either. Say, Russia’s stance toward Ukraine before the start of its war. But pick in your mind any example you like.

At Day One of the war, and before the media wrapped its tentacles around the public mind, if we did a survey (one of, for instance, people who read at least one book a year) about the proposition “Russia’s view towards Ukraine is Russian self-preservation”, we’d have a range of opinions. Some would say the proposition is highly likely to be true, and some would say highly unlikely. Most would probably hover somewhere in the middle.

Here’s where theory enters. It is thought that as information about a topic increases, and of course is understood, then wherever people’s opinions start, they must come to greater agreement as they learn more. This is, for example, the theory behind everything from universal education to “raising awareness”, and its implementation leads to ideas like “nudging” and “disinformation” control.

But, as my minor well-poisoning hinted, that’s not always what happens. As the media got involved in the Russian war, for instance, opinions “polarized”, and we saw many Ukrainian flags in social media bios, but also new Russian ones. The great middle largely fled to the extremes. Opinions did not coalesce, but grew apart.

It turns out that this can be analyzed in a surprising way, which we all first learned from E. T. Jaynes, in his “Queer Uses for Probability Theory”, Chapter 5 of Probability Theory.

The reason some think opinions should coalesce is theory. I need only one small equation to demonstrate this. It’s easy: stick with me.

We’ll call the proposition of interest Y, as in Y = “Russia’s view towards Ukraine is Russian self-preservation”. But it can be anything. Jaynes used the proposition that a certain lady had ESP powers, always a favorite subject. Now all probabilities have to have conditions: evidence that is assumed, observed, or accepted as true. Let that be called E. E can be a long string of propositions itself, and is, especially on complex subjects.

So for person number i we have:

     Pr(Y | E_i)

which reads “The probability Y is true assuming E is true, for individual number i”. Simple as promised, yes?

This need not be a number, and is one only if E_i has inside it assumptions which tell us how to quantify the probability. But, for a first cut, we can assume the answer is at least a rough number. It doesn’t really matter.

If we look at Pr(Y|E_i) for any number of people, i = 1, 2, 3, …, then we’ll find estimates all over the place, regardless of Y, unless the same E is accepted by all or most. Then Pr(Y|E_i) = Pr(Y|E_j), for any two individuals i and j.

The theory is this.

New information arises that everybody in our pool of people sees. This, too, is in the form of propositions. We can call the first new piece, say, R. R is a bit of news, or an announcement, or fact. Anything that all now see. This means everybody’s augmented probabilities now look like this:

     Pr(Y | R E_i)

We can get to this using Bayes’s formula, if it helps. But it’s not necessary. Bayes is only a useful tool, and nothing more. This is now the probability of Y given R and E_i.
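If it does help, here is the formula written in the same notation (this is only the ordinary Bayes rule, nothing special to this argument):

     Pr(Y | R E_i) = Pr(Y | E_i) × Pr(R | Y E_i) / Pr(R | E_i)

In words: the updated probability is the old one, scaled by how much more expected the report R is when Y is true than it is overall (given E_i).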

Then a second piece of information arrives. To keep track, we’ll call the first piece R_1 and the second R_2. Then we have:

     Pr(Y | R_1 R_2 E_i)

which is now, as you can guess, the probability Y is true given R_1 and R_2 and E_i.

You have the idea (I hope). The more R’s we add, the more everybody’s information comes to resemble each other’s, so that even if we start at different places, because of those E_i, we end up in more or less the same place. In notation:

     Pr(Y | R_1 R_2 … R_m E_i) ~ Pr(Y | R_1 R_2 … R_m E_j)

which reads the probability Y is true given all those Rs and E_i is about equal to the probability Y is true given all those Rs (there are m of them) and E_j, for some second person.

Jaynes works all this out using Bayes, to show the math is scrupulous, which it is. Theory thus seems to say that as new information arises everybody comes to at least rough agreement.
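To see the convergence concretely, here is a small numerical sketch in Python. The numbers are my own toy assumptions, not anything from Jaynes: three people start with very different Pr(Y | E_i), all trust the source, and all see the same ten reports, each report taken to be twice as likely if Y is true as if it is false.

    def update(prob_y, lr):
        # One Bayes update on the probability of Y, where
        # lr = Pr(R | Y E) / Pr(R | not-Y E) is the report's likelihood ratio.
        odds = prob_y / (1.0 - prob_y)
        odds *= lr
        return odds / (1.0 + odds)

    # Hypothetical starting opinions Pr(Y | E_i) for three people.
    priors = {"skeptic": 0.10, "fence-sitter": 0.50, "believer": 0.90}

    LIKELIHOOD_RATIO = 2.0   # each trusted report counts modestly in favor of Y
    M_REPORTS = 10           # R_1, R_2, ..., R_m with m = 10

    for name, p in priors.items():
        for _ in range(M_REPORTS):
            p = update(p, LIKELIHOOD_RATIO)
        print(f"{name:12s} Pr(Y | R_1 ... R_m E_i) = {p:.4f}")

Run it and the skeptic finishes near 0.99 while the believer finishes near 0.9999: the shared, trusted reports swamp the differing background evidence, which is exactly the rough agreement the naive theory promises.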

It’s a true theory, too. And it sometimes works. But only when Y is not “controversial” and the source of the Rs is trustworthy.

All we have to do to break the theory is add one more premise to the right hand side. A premise many hold in certain situations. This one (or a variant): L = “The source of the R is full of it. They would lie about Y if it is to their benefit, either by omission or commission.”

Let’s use a homelier example, with Y = “Locking down will keep me from catching a communicable respiratory disease.”

Before the covid panic hit, if you asked folks what their belief about Y was, you’d have a range of opinion, perhaps with many estimates giving Y low probability. After all, locking healthy people inside their homes had never been tried before, and it makes little sense given that sickness peaks every January (in the Northern Hemisphere), when everybody goes inside to spread their diseases.

Then the panic hit, and hit hard. And we all heard this: R = “Two weeks to stop the spread!”

Many believed the source, and their opinion about Y went way up. But not for everybody. A few held the additional premise, L = “This is all modeling bullshit, put out by a character who has been serially wrong, and who is obviously wrong again”. Those who held L, and there were not many, then had lower opinions about Y.

As the panic progressed, and it became clear Experts were (to quote a pop source) stuffed absolutely full of wild blueberry muffins, more people adopted L or one of its obvious variants. Then, even as Experts issued more and more Rs about how wonderful it was to have “non-essential” people off the street, those who held L had lower and lower opinions about Y.
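The same toy model shows the break. Suppose (my illustrative numbers again) a truster and a holder of L both start with Pr(Y) = 0.4. For the truster each Expert report counts in favor of Y; for the holder of L the very same report is better explained by a self-serving, serially wrong source than by Y being true, so it counts against Y.

    def update(prob_y, lr):
        # One Bayes update: lr = Pr(R | Y, premises) / Pr(R | not-Y, premises).
        odds = prob_y / (1.0 - prob_y)
        odds *= lr
        return odds / (1.0 + odds)

    truster = 0.40        # accepts only E
    holder_of_L = 0.40    # accepts E plus the distrust premise L

    LR_TRUSTER = 2.0      # each report R makes Y seem more likely
    LR_HOLDER_OF_L = 0.7  # the same R reads as propaganda, so it cuts against Y

    for _ in range(10):   # ten rounds of Expert messaging
        truster = update(truster, LR_TRUSTER)
        holder_of_L = update(holder_of_L, LR_HOLDER_OF_L)

    print(f"truster      Pr(Y | R's E)   = {truster:.3f}")      # climbs toward 1
    print(f"holder of L  Pr(Y | R's L E) = {holder_of_L:.3f}")  # sinks toward 0

Same reports, opposite movement: the truster ends near 0.999, the holder of L near 0.02. Neither is calculating wrongly; each is updating on his own premises.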

You know where we are now. There are still many who hold a high value for Y, ever trusting in Experts as they do, but there are many more who do not. Opinions have diverged. Indeed, the more information issued by Experts, the greater the divergence.

What Jaynes showed was that the math for this phenomenon is also scrupulous, and explains better what happens in these cases than the naive theory does.

This has many important consequences, some of which you’ll be able to see if you’re familiar with what is called the “heuristics and biases” literature in economics (if you know it, think of the Linda the activist bank teller example). What some researchers hold are errors in thinking turn out to be nothing more than people answering different questions than the ones posed by the researchers.

All this will also turn out to be of great interest the more the Regime tries to push Official Truths and anathematize Official Disinformation. Because many don’t trust the source, these efforts will backfire in a very predictable way, driving opinions further apart, not closer.

I have two small papers coming which extend these ideas in a minor way. I’ll put them up and explain them when they’re published (which should be soon). But they are mathematical, and I didn’t want to dump them without first explaining the idea in (what I hope are) simpler terms.

The gist: the harder distrusted sources try to control “the narrative”, the more damage they do to themselves.

Subscribe or donate to support this site and its wholly independent host using credit card click here. Or use the paid subscription at Substack. Cash App: $WilliamMBriggs. For Zelle, use my email: matt@wmbriggs.com, and please include yours so I know who to thank.

