Disconnecting viewpoints from groups/ideologies

Our social and political spheres are undoubtedly hyper-tribal, facilitated in no small part by the Internet and social media. A standard dichotomy, for example, is that of the left and the right: one side is associated with a certain set of viewpoints and the other with an opposite set. When an individual who aligns with some group is deciding which viewpoints to include in their set of beliefs, rather than assessing each issue individually to decide where they stand, they seem inclined to let the position taken by their group determine their viewpoint automatically.

Anyway, this is just a summary of a phenomenon that many would already recognise. What prompted me to write this post, though, was a type of statement I have been seeing recently that really illuminates the matter. I forget where I saw it and what it was referring to, but here is an example statement:

“I read it and have to agree with you. I hate to be on the same side as people like Ann Coulter, but we have to have intellectual integrity.”

I thoroughly dislike Ann Coulter due to her disagreeable attitude and her viewpoints on many topics. But is it rational for the position I adopt on some matter to be determined by the position taken by someone I find disagreeable? Should one’s position be derived from a set of predefined ‘axioms’ dictated by one’s affiliated ideology, or should each new question be evaluated afresh?

Of course, the former often seems to be the case in modern environments: people let their affiliation automatically decide the positions they adopt, and in many cases affiliation loyalty makes them intransigent in the light of evidence supporting an alternative view. Still, I wonder whether there is any legitimacy in basing one’s viewpoint on one’s affiliation, at least to begin with. That is, rather than starting from a suspension of judgement and adopting a position only once some input has been received, one could initially adopt the position associated with the affiliation and then, if honest, update or abandon that position when new information warrants.

On the Wisdom of Crowds

The Wisdom of Crowds is an interesting phenomenon. Here are some articles on the topic:

The last of these articles is of particular interest. It spells out my earlier hunch that the Wisdom of Crowds phenomenon has to do with something like the law of large numbers:

Surowiecki’s archetypal example comes from a 1906 county fair where 800 people participated in a contest to guess what the weight of an ox would be after it was butchered. The average guess was 1,197 pounds. The actual weight turned out to be 1,198 pounds. On its face, this seems like a dramatic testament to the ideals of democracy, but the accuracy of the average guess has much more to do with the nature of the problem than with the wisdom of the crowd.

Their task was clearly-defined and required no special information. Each person was free to guess any weight they wanted, but the higher or lower their guess, the more obviously wrong it would be. Random variation ensured that every high guess was counter-balanced by a low guess that was equally off the mark. After 800 such guesses, the average would stick right in the middle. In this case, the average happened to be the truth.

You can tease the same kind of wisdom out of a handful of dice. Say you hold a contest to guess the number you’re thinking of: 3.5. Only six-sided dice can enter this contest and, therefore, all guesses will range from 1-6. (Note that each die is physically incapable of guessing correctly, as dice can only express whole numbers.) Each die can enter the contest as many times as it wants and, eventually, you gather several hundred entries. Miraculously, the average “guess” is exactly 3.5! Again, the average just happens to be the truth.

The trick is that truly diverse (i.e. random) opinions will always vary around the mean. When you aggregate a whole lot of random opinions, you get a deceptively precise average, but this is not “wisdom” in any real sense. It’s a statistical artifact called the Law of Large Numbers and it has nothing to do with intelligence.

There are two frustratingly common factors that throw this trick right off the rails. The first is communication, as discussed above. It leads to the primacy effects and power law distributions that plague news aggregator sites. The second is bias that arises from common wisdom… or lack thereof.

What if you asked a crowd to answer the following well-defined question: “What is the distance to Alpha Centauri?” Because astronomical distances are so much larger than anything in a normal person’s experience, their guesses would probably fall short of 25 trillion miles. (An astronomer, on the other hand, would be right on the money.) In this case, the average just isn’t the truth.
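The dice example is easy to reproduce. Here is a minimal Python sketch (my own illustration, not code from the quoted article) showing that although no individual die can “guess” 3.5, the average of many rolls converges on it, just as the law of large numbers predicts:

```python
import random

# Simulate the dice "contest": each entry is a fair six-sided roll.
# No single entry can equal the target 3.5, yet the average of many
# entries settles right on it.

def average_guess(num_entries):
    rolls = [random.randint(1, 6) for _ in range(num_entries)]
    return sum(rolls) / len(rolls)

random.seed(0)  # fixed seed for reproducible output
for n in (10, 100, 1_000, 100_000):
    print(f"{n:>7} entries -> average guess {average_guess(n):.3f}")
```

With 10 entries the average wanders; with 100,000 it sits within a hair of 3.5. Nothing about any individual roll is wise; the precision is purely a statistical artifact of aggregation.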

What is the Opposite of an Echo Chamber?

To quote Wikipedia, “In news media an echo chamber is a metaphorical description of a situation in which information, ideas, or beliefs are amplified or reinforced by transmission and repetition inside an ‘enclosed’ system, where different or competing views are censored, disallowed, or otherwise underrepresented.”

The internet and social media have really increased the prevalence of echo chambers. Here are some articles on the phenomenon:

It has become apparent that Twitter is largely (at least for me) a left-wing echo chamber. This makes me wonder about the possibility of an opposite effect, whereby someone aligned with one end of a spectrum moves some distance away from it as they become averse to the regressiveness, amplification, repetition and uncritical reinforcement that the echo chamber produces.

Probability, Truthlikeness and the Cenk Uygur versus Sam Harris Debate

I became aware of The Young Turks and their frontman Cenk Uygur earlier this year. As the months have gone by and I have watched more of their YouTube clips, Uygur’s arrogance, ignorance and general thickheadedness have become more apparent.

One conversation that I found interesting is the one Uygur had with Sam Harris, particularly the following portion, as it involves discussion relevant to truthlikeness and probability:

In this discussion, Harris makes the point that Mormonism is slightly more improbable/absurd than other Christian faiths because it makes the more specific claim that Jesus will return to Jackson County, Missouri, rather than the more general claim that he will return somewhere on Earth.
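Harris’s point here is an instance of a basic consequence of the probability axioms: a more specific claim entails the more general one, and entailment can never raise probability. In notation (my gloss, not Harris’s wording):

```latex
% If a specific claim S entails a general claim G (S \models G), then
% every possibility in which S holds is one in which G holds, hence
P(S) \leq P(G)
% With S = "Jesus will return to Jackson County, Missouri" and
% G = "Jesus will return somewhere on Earth", the inequality is strict
% whenever there is some chance of G holding without S.
```

Each added detail can only hold a doctrine’s probability fixed or lower it, which is why the more specific doctrine is, all else being equal, the more improbable one.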


#Regrexit

Some theoretical mulling: given the reports that some #Brexit Leave voters regretted their decision, I wonder about the possibility of a voting system whereby (1) people vote in a first round, (2) the results are made public, and (3) people can change their vote in a second round with knowledge of the first-round result. I say this out of a general interest in voting procedures, not because I have a particular position on this referendum.
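Purely as a toy model (all names and numbers below are hypothetical), the procedure might be sketched in Python as follows:

```python
from collections import Counter

# Hypothetical two-round "revisable" referendum:
# (1) vote, (2) publish the first-round tally,
# (3) allow revisions in a second round, then count finally.

first_round = ["leave", "leave", "remain", "leave", "remain"]

published = Counter(first_round)  # step 2: results made public
print("First round: ", dict(published))

# Step 3: voters revise with knowledge of the published result.
# Here the voter at index 0 regrets their choice and switches.
revisions = {0: "remain"}
second_round = [revisions.get(i, vote) for i, vote in enumerate(first_round)]
print("Second round:", dict(Counter(second_round)))
```

The interesting questions are of course strategic: once the first-round tally is public, voters may change their votes tactically rather than sincerely, so the second round need not simply correct regret.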

Upcoming Talk: Truth and integrity constraints in logical database updating/merging

I’ll be giving a talk later this month at the RMIT CSIT Seminar Series.

Date and Time: Friday 27th November, 2015. 11.30am – 12.30pm.

Venue: RMIT, Swanston St, Melbourne, Building 80 (Swanston Academic Building), Level 5, Room 12 (080.05.012)

Abstract: Methods for the updating/merging of logical databases have traditionally been concerned mainly with the relations between pieces of data and the logical coherence of operations, with less concern for whether the datasets resulting from such operations have epistemically valuable properties such as truth and relevance. Gärdenfors, for example, one of the developers of the predominant AGM framework for belief revision, argues that the concepts of truth and falsity become irrelevant for the analysis of belief change, as “many epistemological problems can be attacked without using the notions of truth and falsity”.

However this may be, given that agents process incoming data with the goal of using it, this lacuna between updating/merging and epistemic utilities such as truth and relevance merits attention. In this talk I address this issue by looking at some ways in which updating/merging methods can be supplemented and shaped when combined with formal measures of truthlikeness, including cases where integrity constraints are involved.