The Wisdom of Crowds is an interesting phenomenon. Here are some articles on the topic:
- How to unleash the wisdom of crowds
- Knowledgeable individuals protect the wisdom of crowds
- On the Wisdom of Crowds
The last of these articles is of particular interest. It explicates my earlier hunch that the Wisdom of Crowds phenomenon has to do with something like the law of large numbers:
Surowiecki’s archetypal example comes from a 1906 county fair where 800 people participated in a contest to guess what the weight of an ox would be after it was butchered. The average guess was 1,197 pounds. The actual weight turned out to be 1,198 pounds. On its face, this seems like a dramatic testament to the ideals of democracy, but the accuracy of the average guess has much more to do with the nature of the problem than with the wisdom of the crowd.
Their task was clearly defined and required no special information. Each person was free to guess any weight they wanted, but the higher or lower their guess, the more obviously wrong it would be. Random variation ensured that every high guess was counterbalanced by a low guess that was equally off the mark. After 800 such guesses, the average would land right in the middle. In this case, the average happened to be the truth.
You can tease the same kind of wisdom out of a handful of dice. Say you hold a contest to guess the number you’re thinking of: 3.5. Only six-sided dice can enter this contest and, therefore, all guesses will range from 1-6. (Note that each die is physically incapable of guessing correctly, as dice can only express whole numbers.) Each die can enter the contest as many times as it wants and, eventually, you gather several hundred entries. Miraculously, the average “guess” is exactly 3.5! Again, the average just happens to be the truth.
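The dice contest is easy to simulate. Here is a minimal sketch (the function name and seed are my own choices, not from any library): each die "enters" by rolling a whole number from 1 to 6, yet the average of many entries settles near 3.5.

```python
import random

def dice_contest(entries: int, seed: int = 0) -> float:
    """Average 'guess' across `entries` rolls of a fair six-sided die."""
    rng = random.Random(seed)  # seeded for reproducibility
    rolls = [rng.randint(1, 6) for _ in range(entries)]
    return sum(rolls) / len(rolls)

# No single die can guess 3.5, but with enough entries the average
# converges on it anyway -- the Law of Large Numbers at work.
for n in (10, 1_000, 100_000):
    print(n, dice_contest(n))
```

With only a handful of entries the average wanders; with a hundred thousand it sits within a few hundredths of 3.5.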
The trick is that truly diverse (i.e. random) opinions will always vary around the mean. When you aggregate a whole lot of random opinions, you get a deceptively precise average, but this is not “wisdom” in any real sense. It’s a statistical artifact called the Law of Large Numbers and it has nothing to do with intelligence.
There are two frustratingly common factors that throw this trick right off the rails. The first is communication, as discussed above. It leads to the primacy effects and power law distributions that plague news aggregator sites. The second is bias that arises from common wisdom… or lack thereof.
What if you asked a crowd to answer the following well-defined question: “What is the distance to Alpha Centauri?” Because astronomical distances are so much larger than anything in a normal person’s experience, their guesses would probably fall short of 25 trillion miles. (An astronomer, on the other hand, would be right on the money.) In this case, the average just isn’t the truth.
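To see how a shared bias defeats the averaging trick, here is a minimal sketch under an assumed model (the guess range is hypothetical, chosen to mimic people anchoring on everyday distances): adding more guesses only sharpens the average around the crowd's common bias, never pulling it toward the truth.

```python
import random

TRUE_DISTANCE = 25e12  # roughly 25 trillion miles to Alpha Centauri

def crowd_average(entries: int, seed: int = 1) -> float:
    """Average guess from a crowd that shares the same low bias.

    Hypothetical model: everyone anchors on familiar distances and
    guesses somewhere between a million and a billion miles.
    """
    rng = random.Random(seed)
    guesses = [rng.uniform(1e6, 1e9) for _ in range(entries)]
    return sum(guesses) / entries

# Unlike the dice contest, the errors here are not symmetric around
# the truth, so no amount of aggregation recovers it.
for n in (100, 10_000, 1_000_000):
    print(f"{n:>9} guesses -> average {crowd_average(n):.3e}")
```

The averages stabilize, but around the crowd's shared anchor, several orders of magnitude below the true distance.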
WikiLeaks has recently released a collection of confidential documents that originated from within Hillary Clinton’s camp, and claims that more documents are forthcoming within the next few months. Here are some articles on the matter:
- What Julian Assange’s War on Hillary Clinton Says About WikiLeaks
- WikiLeaks founder Julian Assange claims organisation will release ‘significant’ Hillary Clinton campaign data
- WikiLeaks founder says the problem with leaking material about Trump is that nothing can compare with ‘what comes out of Donald Trump’s mouth’
One thing I find problematic about these releases is that they are confined to just one of the two main presidential candidates. It may well be that Donald Trump has done plenty of things that would be similarly damaging to his campaign. Without knowing whether comparably damaging information about Trump exists, is it right to release damaging information only about the one candidate you happen to have the dirt on?
Sure, there is a sense in which disclosing such information is appropriate for the sake of truth and transparency. But in a situation such as an election, where damaging revelations can affect the outcome, this incompleteness of knowledge, and the resulting asymmetry between the candidates, feels problematic.
To quote Wikipedia, “In news media an echo chamber is a metaphorical description of a situation in which information, ideas, or beliefs are amplified or reinforced by transmission and repetition inside an ‘enclosed’ system, where different or competing views are censored, disallowed, or otherwise underrepresented”.
The internet and social media have greatly increased the prevalence of echo chambers. Here are some articles on the phenomenon:
- Is anyone immune to the social media echo chamber?
- The web’s ‘echo chamber’ leaves us none the wiser
It has become apparent that Twitter is (at least for me) largely a left-wing echo chamber. This makes me wonder about the possibility of an opposite effect, whereby someone aligned with one end of the spectrum drifts some degree away from it as they grow averse to the amplification, repetition and uncritical reinforcement that the echo chamber fosters.
Recently released: The Routledge Handbook of Philosophy of Information.
Information and communication technology occupies a central place in the modern world, with society becoming increasingly dependent on it every day. It is therefore unsurprising that it has become a growing subject area in contemporary philosophy, which relies heavily on informational concepts. The Routledge Handbook of Philosophy of Information is an outstanding reference source for the key topics and debates in this exciting subject and is the first collection of its kind. Comprising over thirty chapters by a team of international contributors, the Handbook is divided into four parts:
- basic ideas
- quantitative and formal aspects
- natural and physical aspects
- human and semantic aspects.
Within these sections central issues are examined, including probability, the logic of information, informational metaphysics, the philosophy of data and evidence, and the epistemic value of information.