The last of these articles is of particular interest. It explicates my earlier hunch that the Wisdom of Crowds phenomenon has to do with something like the law of large numbers:
Surowiecki’s archetypal example comes from a 1906 county fair where 800 people participated in a contest to guess what the weight of an ox would be after it was butchered. The average guess was 1,197 pounds. The actual weight turned out to be 1,198 pounds. On its face, this seems like a dramatic testament to the ideals of democracy, but the accuracy of the average guess has much more to do with the nature of the problem than with the wisdom of the crowd.
Their task was clearly defined and required no special information. Each person was free to guess any weight they wanted, but the higher or lower their guess, the more obviously wrong it would be. Random variation ensured that every high guess was counterbalanced by a low guess that was equally off the mark. After 800 such guesses, the average would land right in the middle. In this case, the average happened to be the truth.
You can tease the same kind of wisdom out of a handful of dice. Say you hold a contest to guess the number you’re thinking of: 3.5. Only six-sided dice can enter this contest and, therefore, all guesses will range from 1-6. (Note that each die is physically incapable of guessing correctly, as dice can only express whole numbers.) Each die can enter the contest as many times as it wants and, eventually, you gather several hundred entries. Miraculously, the average “guess” is exactly 3.5! Again, the average just happens to be the truth.
The trick is that truly diverse (i.e. random) opinions will always vary around the mean. When you aggregate a whole lot of random opinions, you get a deceptively precise average, but this is not “wisdom” in any real sense. It’s a statistical artifact called the Law of Large Numbers and it has nothing to do with intelligence.
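The dice contest above is easy to reproduce. The following is a minimal sketch (the roll count is arbitrary) showing the average of many fair rolls converging on 3.5, a value no single die can ever produce:

```python
import random

# Simulate the dice "contest": each entry is one fair six-sided roll.
# No individual die can guess 3.5 (dice only show whole numbers), but
# the average of many rolls converges on 3.5 by the Law of Large Numbers.
random.seed(0)  # fixed seed so the run is reproducible
rolls = [random.randint(1, 6) for _ in range(100_000)]
average = sum(rolls) / len(rolls)
print(f"Average of {len(rolls)} rolls: {average:.3f}")  # close to 3.5
```

Nothing here is "wise": the convergence is pure statistics, exactly as the paragraph above argues.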
There are two frustratingly common factors that throw this trick right off the rails. The first is communication, as discussed above. It leads to the primacy effects and power law distributions that plague news aggregator sites. The second is bias that arises from common wisdom… or lack thereof.
What if you asked a crowd to answer the following well-defined question: “What is the distance to Alpha Centauri?” Because astronomical distances are so much larger than anything in a normal person’s experience, their guesses would probably fall short of 25 trillion miles. (An astronomer, on the other hand, would be right on the money.) In this case, the average just isn’t the truth.
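The failure mode is just as easy to simulate. The guess distribution below is an invented assumption (uniform between one billion and one trillion miles), chosen only to model a crowd that systematically underestimates; the point is that shared bias, unlike random error, does not cancel out:

```python
import random

# Hypothetical sketch: every guess shares the same downward bias, so
# averaging cannot recover the truth. Alpha Centauri is roughly
# 25 trillion miles away; the simulated crowd guesses far lower.
random.seed(0)
true_distance = 25e12  # ~25 trillion miles
guesses = [random.uniform(1e9, 1e12) for _ in range(100_000)]
average = sum(guesses) / len(guesses)
print(f"Average guess: {average:.3e} miles")  # orders of magnitude too low
```

With biased inputs, the Law of Large Numbers converges precisely on the wrong answer.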
To quote Wikipedia: "In news media an echo chamber is a metaphorical description of a situation in which information, ideas, or beliefs are amplified or reinforced by transmission and repetition inside an 'enclosed' system, where different or competing views are censored, disallowed, or otherwise underrepresented."
The internet and social media have really increased the prevalence of echo chambers. Here are some articles on the phenomenon:
It has become apparent that Twitter is largely (at least for me) a left-wing echo chamber. This makes me wonder about the possibility of an opposite effect, whereby someone aligned with one end of a spectrum moves some degree away from it as they become averse to the regressiveness, amplification, repetition and uncritical reinforcement fostered by the echo chamber.
I became aware of the Young Turks and their main man Cenk Uygur earlier this year. As the months have gone by and I have watched more of their YouTube clips, Uygur's arrogance, ignorance and general thickheadedness have become more apparent.
One conversation that I found interesting is the one Uygur had with Sam Harris, particularly the following portion, as it involves discussion relevant to truthlikeness and probability:
In this discussion, Harris makes the point that Mormonism is slightly more improbable/absurd than other Christian faiths because it makes the more specific claim that Jesus will return to Jackson County, Missouri rather than the more general claim that he will return to somewhere on Earth.
Some theoretical mulling: given the reports that some #Brexit leave voters regretted their decision, I'm wondering about the possibility of having a voting system whereby (1) people vote in a first round, (2) the results are made public, and (3) people can change their vote in a second round with knowledge of the first-round result. I say this with a general interest in voting procedures, not because I have a particular position in this referendum.
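The three-step procedure can be sketched in code. Everything here is illustrative assumption, not a proposal for an actual system: the function names, the vote counts, and the "regret" rule are all invented for the example.

```python
from collections import Counter

def two_round_vote(first_round, revise):
    """Hypothetical two-round procedure.

    first_round: list of votes (e.g. "Leave" / "Remain")      -- step 1
    revise: callback(voter_index, own_vote, tally) -> vote,
            called after the first-round tally is published    -- steps 2-3
    """
    tally = Counter(first_round)  # step 2: results made public
    second_round = [revise(i, v, tally) for i, v in enumerate(first_round)]
    return Counter(second_round)

# Invented example: a few regretful Leave voters switch once they
# see that Leave won the first round.
votes = ["Leave"] * 52 + ["Remain"] * 48

def regret(i, vote, tally):
    # voters 0-3 regret their Leave vote when they see it winning
    if vote == "Leave" and tally["Leave"] > tally["Remain"] and i < 4:
        return "Remain"
    return vote

result = two_round_vote(votes, regret)
print(result)  # Counter({'Remain': 52, 'Leave': 48})
```

The interesting design question is the `revise` step: whether publishing the interim tally dampens regret, as hoped, or merely invites strategic voting.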
Abstract: Methods for the updating/merging of logical databases have traditionally been mainly concerned with the relations between pieces of data and the logical coherence of operations, without as much concern for whether the datasets resulting from such operations have epistemically valuable properties such as truth and relevance. Gardenfors, for example, who developed the predominant AGM framework for belief revision, argues that the concepts of truth and falsity become irrelevant for the analysis of belief change as "many epistemological problems can be attacked without using the notions of truth and falsity".
However this may be, given that agents process incoming data with the goal of using it, this lacuna between updating/merging and epistemic utilities such as truth and relevance merits attention. In this talk I address this issue by looking at some ways in which updating/merging methods can be supplemented and shaped when combined with formal measures of truthlikeness, including cases where integrity constraints are involved.
In October I will be heading up to Wollongong to present at CaféDSL, a weekly research seminar hosted by the Decision Systems Lab in the University of Wollongong School of Computer Science and Software Engineering.
Date and Time: Tuesday 21 October 2014, 4pm.
Venue: 6.105 – Smart Building
Title: Revising beliefs towards the truth
Abstract: Traditionally the field of belief revision has been mainly concerned with the relations between sentences (pieces of data) and the logical coherence of revision operations, without as much concern for whether the dataset resulting from a belief revision operation has epistemically valuable properties such as truth and relevance. Gardenfors, for example, who developed the predominant AGM framework for belief revision, argues that the concepts of truth and falsity become irrelevant for the analysis of belief change as "many epistemological problems can be attacked without using the notions of truth and falsity". However this may be, given that agents process incoming data with the goal of using it for successful action, this lacuna between belief revision and epistemic utilities such as truth and relevance merits attention.
In this talk I address this issue by presenting some preliminary results concerning the combination of formal truthlikeness/verisimilitude measures with belief revision/merging.