New SEP entry on Belief Merging and Judgement Aggregation.
Including Philosophy and/of Information, Logic and Epistemology
In October I will be heading up to Wollongong to present at CaféDSL, a weekly research seminar hosted by the Decision Systems Lab at the University of Wollongong's School of Computer Science and Software Engineering.
Date and Time: Tuesday 21 October 2014, 4pm.
Venue: 6.105 – Smart Building
Title: Revising beliefs towards the truth
Abstract: Traditionally, the field of belief revision has been mainly concerned with the relations between sentences (pieces of data) and the logical coherence of revision operations, with less concern for whether the dataset resulting from a belief revision operation has epistemically valuable properties such as truth and relevance. Gärdenfors, for example, one of the developers of the predominant AGM framework for belief revision, argues that the concepts of truth and falsity become irrelevant for the analysis of belief change, since “many epistemological problems can be attacked without using the notions of truth and falsity”. Be that as it may, given that agents process incoming data with the goal of using it for successful action, this gap between belief revision and epistemic utilities such as truth and relevance merits attention.
In this talk I address this issue by presenting some preliminary results concerning the combination of formal truthlikeness/verisimilitude measures with belief revision/merging.
As has been established in the literature, given some truthlikeness/verisimilitude measure Tr(), a theory T and evidence E, we can measure the estimated truthlikeness of T given E as the expected truthlikeness under the posterior distribution:

Tr_est(T | E) = Σ_w P(w | E) · Tr(T, w)

where the sum ranges over each state w in the logical space.
Now, using a Bayesian confirmation measure such as the difference measure

C(H, E) = P(H | E) − P(H)

we can combine it with the estimated truthlikeness measure to get a measure of truthlikeness confirmation:

CT(T, E) = Tr_est(T | E) − Tr_est(T)

where Tr_est(T) = Σ_w P(w) · Tr(T, w) is the prior estimated truthlikeness of T.
So what can be done with this measure? In ‘A Verisimilitudinarian Analysis of the Linda Paradox’, the authors suggest this measure for what they term a ‘verisimilitudinarian confirmation account’ of the Linda paradox (they do so in response to a problem with an earlier proposal of theirs that accounts for the paradox in terms of estimated truthlikeness alone). But it seems that this approach does nothing that an account of the Linda paradox in terms of confirmation alone isn’t already doing.
Thus it would be interesting to think about this idea of truthlikeness confirmation some more. For starters, confirmation and truthlikeness confirmation clearly do not increase or decrease together. Take a logical space with three propositions p1, p2 and p3 and a uniform a priori probability distribution over the eight possible states.
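The divergence can be checked computationally. The following is a minimal Python sketch of this eight-state space; it assumes a Tichý/Oddie-style average-similarity truthlikeness measure and the simple difference measure of confirmation, so the function names (sim, est_tr, tr_confirmation) and the particular measure choices are illustrative, not anything fixed by the literature. Under these assumptions, the theory that all three propositions share a truth value is confirmed by the evidence p1 ∧ p2 while its estimated truthlikeness stays put.

```python
from itertools import product

# Sketch of the eight-state space for p1, p2, p3 with a uniform prior.
STATES = list(product([True, False], repeat=3))
P = {w: 1 / 8 for w in STATES}

def sim(v, w):
    """Similarity: fraction of atomic propositions on which states v and w agree."""
    return sum(a == b for a, b in zip(v, w)) / len(w)

def tr(T, w):
    """Average-similarity (Tichy/Oddie-style) truthlikeness of theory T
    (a set of states) relative to a candidate actual state w.
    One illustrative choice of Tr(); others would do."""
    return sum(sim(v, w) for v in T) / len(T)

def prob(H):
    """Prior probability of a proposition H (a set of states)."""
    return sum(P[w] for w in H)

def confirmation(H, E):
    """Difference measure of confirmation: P(H | E) - P(H)."""
    return prob(H & E) / prob(E) - prob(H)

def est_tr(T, E):
    """Estimated truthlikeness of T given E: sum over E of P(w | E) * Tr(T, w)."""
    return sum((P[w] / prob(E)) * tr(T, w) for w in E)

def tr_confirmation(T, E):
    """Truthlikeness confirmation: estimated truthlikeness given E
    minus prior estimated truthlikeness."""
    return est_tr(T, E) - sum(P[w] * tr(T, w) for w in STATES)

# T says p1, p2 and p3 all have the same truth value; E says p1 and p2.
T = {(True, True, True), (False, False, False)}
E = {w for w in STATES if w[0] and w[1]}

print(confirmation(T, E))     # 0.25: E confirms T
print(tr_confirmation(T, E))  # 0.0: estimated truthlikeness is unmoved
```

The two complementary states in T lie at opposite ends of the similarity ordering, so under the average measure every possible world assigns T a truthlikeness of 1/2; evidence can therefore raise or lower T's probability without budging its estimated truthlikeness.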
The SEP entry on Truthlikeness has been substantially updated: http://plato.stanford.edu/entries/truthlikeness.
Good book: Review of “Quitting Certainties”.
Title: Explicating a Standard Externalist Argument against the KK Principle
Abstract: The KK principle is typically rejected in externalist accounts of knowledge. However, a standard general argument for this rejection is in need of a supportive explication. In a recent paper, Samir Okasha argues that the standard externalist argument in question is fallacious. In this paper I start off with some critical discussion of Okasha’s analysis before suggesting an alternative way in which an externalist might successfully present such a case. I then further explore this issue by looking at how Fred Dretske’s externalist epistemology, one of the exemplifying accounts, can explain the failure of the KK principle.
I am inclined to think that externalist accounts of knowledge such as Dretske’s and Nozick’s lead to a distinction between knowing that p and knowing that p is true. A recent paper prompted me to write a response in which I try to explicate this idea. Does it sound plausible?
New article over at Philosophy Now: http://philosophynow.org/issues/98/Information_Knowledge_and_Intelligence