Upcoming Talk: Revising beliefs towards the truth

In October I will be heading up to Wollongong to present at CaféDSL, a weekly research seminar hosted by the Decision Systems Lab at the University of Wollongong's School of Computer Science and Software Engineering.

Date and Time: Tuesday 21st October 2014, 4pm.

Venue: 6.105 – Smart Building

Title: Revising beliefs towards the truth

Abstract: Traditionally the field of belief revision has been mainly concerned with the relations between sentences (pieces of data) and the logical coherence of revision operations, with less concern for whether the dataset resulting from a belief revision operation has epistemically valuable properties such as truth and relevance. Gärdenfors, for example, who developed the predominant AGM framework for belief revision, argues that the concepts of truth and falsity become irrelevant for the analysis of belief change, as “many epistemological problems can be attacked without using the notions of truth and falsity”. Be this as it may, given that agents process incoming data with the goal of using it for successful action, this lacuna between belief revision and epistemic utilities such as truth and relevance merits attention.

In this talk I address this issue by presenting some preliminary results concerning the combination of formal truthlikeness/verisimilitude measures with belief revision/merging.

Truthlikeness Confirmation?

As has been established in the literature, given some truthlikeness/verisimilitude measure Tr(), theory T and evidence E, we can measure the estimated truthlikeness of T given E with:


\text{Tr}_{\text{est}}(T | E) = \displaystyle\sum_{i = 1}^s \text{Tr}(T, w_{i}) \text{Pr}(w_{i} | E)

where w_{1}, \ldots, w_{s} are the states of the logical space.
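To fix ideas, here is a minimal Python sketch of this calculation. It assumes states are truth-value assignments to finitely many atomic propositions, restricts T to a complete (conjunctive) theory picking out a single state, and uses a simple feature-matching Tr — the fraction of atoms on which T agrees with w_{i} — purely for illustration; any other truthlikeness measure could be plugged in.

from itertools import product

def states(n_props):
    """All 2^n truth-value assignments w_i over n atomic propositions."""
    return list(product([True, False], repeat=n_props))

def tr(theory_state, w):
    """Illustrative Tr(T, w): the proportion of atoms on which the complete
    theory T agrees with the state w (not the only possible choice)."""
    return sum(t == a for t, a in zip(theory_state, w)) / len(w)

def estimated_truthlikeness(theory_state, dist):
    """Tr_est(T | E) = sum_i Tr(T, w_i) Pr(w_i | E), where dist maps each
    state w_i to Pr(w_i | E) (or to the prior Pr(w_i), giving Tr_est(T))."""
    return sum(tr(theory_state, w) * p for w, p in dist.items())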

Now, using a Bayesian confirmation measure such as the following:


\text{C}(T, E) = \text{Pr}(T | E) - \text{Pr}(T)

we can combine it with the estimated truthlikeness measure to get a measure of truthlikeness confirmation, where \text{Tr}_{\text{est}}(T) is the a priori estimated truthlikeness, computed with \text{Pr}(w_{i}) in place of \text{Pr}(w_{i} | E):


\text{Tr}_{\text{C}}(T, E) = \text{Tr}_{\text{est}}(T | E) - \text{Tr}_{\text{est}}(T)
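Continuing the sketch above, the two measures can be computed side by side. Here the evidence E is represented simply as the set of states at which it is true, and Pr(· | E) is obtained by conditionalising the prior on that set — again only a convenient modelling choice for illustration.

def conditionalise(prior, evidence_states):
    """Pr(. | E): Bayesian conditionalisation of the prior on the evidence."""
    pr_e = sum(prior[w] for w in evidence_states)
    return {w: (prior[w] / pr_e if w in evidence_states else 0.0) for w in prior}

def confirmation(theory_state, prior, evidence_states):
    """C(T, E) = Pr(T | E) - Pr(T), for a complete theory T."""
    posterior = conditionalise(prior, evidence_states)
    return posterior[theory_state] - prior[theory_state]

def truthlikeness_confirmation(theory_state, prior, evidence_states):
    """Tr_C(T, E) = Tr_est(T | E) - Tr_est(T)."""
    posterior = conditionalise(prior, evidence_states)
    return (estimated_truthlikeness(theory_state, posterior)
            - estimated_truthlikeness(theory_state, prior))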

So what can be done with this measure? In A Verisimilitudinarian Analysis of the Linda Paradox, the authors suggest this measure for what they term a ‘verisimilitudinarian confirmation account’ of the Linda paradox (they do so in response to a problem with an earlier proposal of theirs that gives an account of the paradox based on estimated truthlikeness alone). But it seems that this approach is doing nothing that an account of the Linda paradox in terms of confirmation alone isn’t already doing.

Thus it would be interesting to think about this idea of truthlikeness confirmation some more. For starters, confirmation and truthlikeness confirmation clearly do not increase and decrease together. Take a logical space with three propositions p_{1}, p_{2} and p_{3} and a uniform a priori probability distribution over the eight possible states (both cases are checked numerically in the sketch after this list):

  • Whilst (p_{1} \wedge p_{2} \wedge p_{3}) \vee (\neg p_{1} \wedge \neg p_{2}) confirms p_{1} \wedge p_{2} \wedge p_{3}, it results in a negative truthlikeness confirmation.
  • Whilst p_{1} \wedge p_{2} \wedge \neg p_{3} disconfirms p_{1} \wedge p_{2} \wedge p_{3}, it results in a positive truthlikeness confirmation.
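Both cases can be checked directly with the sketch above, again under the illustrative feature-matching Tr and the uniform prior:

W = states(3)                       # truth-value assignments to (p1, p2, p3)
uniform = {w: 1 / len(W) for w in W}
T = (True, True, True)              # the theory p1 & p2 & p3

# Case 1: E = (p1 & p2 & p3) v (~p1 & ~p2)
E1 = {w for w in W if (w[0] and w[1] and w[2]) or (not w[0] and not w[1])}
print(confirmation(T, uniform, E1))                # 1/3 - 1/8 > 0: E confirms T
print(truthlikeness_confirmation(T, uniform, E1))  # 4/9 - 1/2 < 0: negative

# Case 2: E = p1 & p2 & ~p3
E2 = {w for w in W if w[0] and w[1] and not w[2]}
print(confirmation(T, uniform, E2))                # 0 - 1/8 < 0: E disconfirms T
print(truthlikeness_confirmation(T, uniform, E2))  # 2/3 - 1/2 > 0: positive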

Explicating a Standard Externalist Argument against the KK Principle

Title: Explicating a Standard Externalist Argument against the KK Principle

Abstract: The KK principle is typically rejected in externalist accounts of knowledge. However, a standard general argument for this rejection is in need of a supportive explication. In a recent paper, Samir Okasha argues that the standard externalist argument in question is fallacious. In this paper I start off with some critical discussion of Okasha’s analysis before suggesting an alternative way in which an externalist might successfully present such a case. I then explore this issue further by looking at how Fred Dretske’s externalist epistemology, one of the exemplifying externalist accounts, can explain the failure of the KK principle.