Agent-Relative Informativeness: Combining truthlikeness semantic information measures with belief revision

Here is the extended abstract for a paper I am trying to finish:

In recent work [1] a quantitative account of semantic information based on truthlikeness measures was proposed; statement A yields more information than statement B when A contains more truth or is closer to the whole truth than B. Given a way to measure the information yield of a statement A, an important distinction to make is that between the information yield of A and its informativeness relative to a particular agent. As a simple example, take three true propositions p1, p2 and p3. Although the statement p1 & p2 has a greater quantitative information measure than p3, for an agent that already has p1 & p2 in their database, p3 is going to be the more informative of the two statements.
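By way of illustration (the measures themselves are developed properly in [1]): one simple Tichy/Oddie-style version takes the measure of a statement to be the average likeness of its models to the actual world, likeness being the proportion of atoms a model gets right. With the actual world making p1, p2 and p3 all true:

Tr(p_{1} \wedge p_{2}) = \tfrac{1}{2}\left(\tfrac{3}{3} + \tfrac{2}{3}\right) = \tfrac{5}{6} \approx 0.83 \quad \text{(models } TTT, TTF\text{)}

Tr(p_{3}) = \tfrac{1}{4}\left(\tfrac{3}{3} + \tfrac{2}{3} + \tfrac{2}{3} + \tfrac{1}{3}\right) = \tfrac{2}{3} \approx 0.67 \quad \text{(models } TTT, TFT, FTT, FFT\text{)}

So p1 & p2 outscores p3 on the measure itself, even though, for an agent who already has p1 & p2, it is p3 that is news.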

In this paper the idea of agent-relative informativeness is explored. How informative an input statement is for an agent will depend not only on (1) its information measure, but also on (2) what content the agent already has, and on what they do with the input if they accept it. In order to deal with the latter, some framework for belief revision is needed. Agent-relative informativeness thus involves a combination of truthlikeness/information measures and belief revision. As it happens, within the last few years there has been some interest in investigating the relationship between the truthlikeness (verisimilitude) and belief revision programs [2, 3]. Continuing on from [1], the approaches to truthlikeness/information focused on in this paper are the Tichy/Oddie and Value Aggregate methods. The belief revision framework employed is predominantly the AGM one, though some alternatives are considered.
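To fix ideas, one simple way to picture agent-relative informativeness is as the gain in measure when the agent's belief set is expanded by the input. The following toy sketch makes the opening example concrete; the three-atom setting, the model-theoretic treatment of expansion and the informativeness function are purely illustrative, not the paper's definitions.

from itertools import product

ATOMS = ("p1", "p2", "p3")
ACTUAL = {"p1": True, "p2": True, "p3": True}
WORLDS = [dict(zip(ATOMS, values)) for values in product([True, False], repeat=len(ATOMS))]

def likeness(world):
    # Proportion of atoms on which this world agrees with the actual world.
    return sum(world[a] == ACTUAL[a] for a in ATOMS) / len(ATOMS)

def measure(models):
    # Tichy/Oddie-style measure of a theory, given by its set of models.
    return sum(likeness(w) for w in models) / len(models)

def models_of(statement):
    return [w for w in WORLDS if statement(w)]

def expand(belief_models, input_models):
    # AGM expansion, viewed model-theoretically: keep the worlds compatible with both.
    return [w for w in belief_models if w in input_models]

def informativeness(belief_models, input_models):
    # Illustrative definition only: informativeness as the gain in measure upon expansion.
    return measure(expand(belief_models, input_models)) - measure(belief_models)

K = models_of(lambda w: w["p1"] and w["p2"])  # the agent already has p1 & p2
print(informativeness(K, models_of(lambda w: w["p1"] and w["p2"])))  # 0.0
print(informativeness(K, models_of(lambda w: w["p3"])))              # ~0.17, so p3 is the more informative input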

Apart from a general outline of the ideas associated with agent-relative informativeness, two further contributions of this paper are:

  • Some results on the behaviour of the belief revision operations of expansion, revision and contraction with regard to truthlikeness/information measures.
  • Suggestion of some formal frameworks for dealing with conflicting sources of input, at least some of which, given the conflict, must be providing misinformation. This includes (1) combining screened belief revision with estimated information measurement and (2) development of a paraconsistent approach to belief revision. A toy sketch of the screened-revision idea is given after this list.
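The screened-revision idea can be pictured roughly as follows: an input only gets to change the belief set if it is consistent with a protected core of beliefs; otherwise it is screened out. The sketch below is a toy version only, leaving the combination with estimated information measurement to one side.

from itertools import product

ATOMS = ("p1", "p2", "p3")
WORLDS = [dict(zip(ATOMS, values)) for values in product([True, False], repeat=len(ATOMS))]

def models_of(statement):
    return [w for w in WORLDS if statement(w)]

def consistent(models_a, models_b):
    # Two sets of models are jointly satisfiable iff they share a world.
    return any(w in models_b for w in models_a)

def screened_revise(belief_models, core_models, input_models):
    # Input inconsistent with the protected core is screened out: no change.
    if not consistent(core_models, input_models):
        return belief_models
    # Otherwise a naive revision: keep belief worlds compatible with the input,
    # falling back to the input's worlds within the core if none survive.
    survivors = [w for w in belief_models if w in input_models]
    return survivors or [w for w in input_models if w in core_models]

core = models_of(lambda w: w["p1"])                      # p1 is protected
K = models_of(lambda w: w["p1"] and w["p2"])             # current beliefs: p1 & p2
rejected = screened_revise(K, core, models_of(lambda w: not w["p1"]))  # conflicts with the core
accepted = screened_revise(K, core, models_of(lambda w: w["p3"]))      # passes the screen
print(len(rejected), len(accepted))                      # 2 1: beliefs unchanged vs. narrowed to one world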

References
[1] S. D'Alfonso. On quantifying semantic information. Information, 2(1):61–101, 2011. URL: http://www.mdpi.com/2078-2489/2/1/61/.
[2] G. Cevolani and F. Calandra. Approaching the truth via belief change in propositional languages. In EPSA Epistemology and Methodology of Science: Launch of the European Philosophy of Science Association, pages 47–62, 2010.
[3] I. Niiniluoto. Theory change, truthlikeness, and belief revision. In EPSA Epistemology and Methodology of Science: Launch of the European Philosophy of Science Association, pages 189–199, 2010.

The Paradox of Inference and the Non-Triviality of Analytic Information

A new paper in the Journal of Philosophical Logic offers a fresh take on the Scandal of Deduction: ‘The Paradox of Inference and the Non-Triviality of Analytic Information’. Part of the abstract:

Hence, although analytically true sentences provide no empirical information about the state of the world, they convey analytic information, in the shape of constructions prescribing how to arrive at the truths in question.

Some more material on the matter:

An Analysis of Informational Relevance

Am currently working on a paper to be given at this year's AAL conference. Tentative details:

Title: An Analysis of Informational Relevance

Abstract: In this presentation a logical definition and analysis of informational relevance is given. Relevance is taken to be agent-oriented/epistemic, where the relevance of a piece of information is determined in terms of how well it satisfies an agent’s request, how well it answers their question. Firstly, a general metric is outlined whereby the relevance of a statement is measured in terms of its truthlikeness measure. Secondly, a logic is given in which a relevance operator is defined and investigated. The erotetic foundation for this logic is Hintikka’s approach to analysing questions as requests for information in terms of epistemic modal logic, which is then combined with a logic of intention.
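By way of a rough illustration of the metric idea (a deliberate simplification, not the actual proposal of the presentation): a whether-question can be treated as picking out the atoms it asks about, and a candidate answer scored by a truthlikeness-style likeness computed only on those atoms.

from itertools import product

ATOMS = ("p1", "p2", "p3")
ACTUAL = {"p1": True, "p2": True, "p3": True}
WORLDS = [dict(zip(ATOMS, values)) for values in product([True, False], repeat=len(ATOMS))]

def relevance(statement, asked_atoms):
    # Average agreement with the actual world on the asked atoms, over the statement's models.
    models = [w for w in WORLDS if statement(w)]
    def agreement(w):
        return sum(w[a] == ACTUAL[a] for a in asked_atoms) / len(asked_atoms)
    return sum(agreement(w) for w in models) / len(models)

# The agent asks whether p3: a true answer about p1 and p2 is beside the point,
# while the answer p3 settles the question.
print(relevance(lambda w: w["p1"] and w["p2"], ["p3"]))  # 0.5
print(relevance(lambda w: w["p3"], ["p3"]))              # 1.0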

Any thoughts or suggestions are welcome.

The Logic of Being Misinformed

Have just uploaded a draft of a paper I am working on, titled ‘The Logic of Being Misinformed’, which can be downloaded here. Feedback welcome. Following is the abstract:

It is well established that the states of knowledge and belief can be captured using systems of modal logic. Referred to respectively as epistemic and doxastic modal logics, these have been studied extensively in the literature. In a relatively recent paper entitled ‘The Logic of Being Informed’, Luciano Floridi does the same for the state of being informed, giving a logic of being informed that is also based on modal logic. In this information logic (IL), the statement Iap stands for ‘a is informed that p’ or ‘a holds the information that p’. After a review of Floridi’s logic of being informed, including an explication of the central concept of semantic information, I go on to develop a complementary logic of being misinformed, which formally captures the relation ‘a is misinformed that p’.
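For a sense of the modal setting: on Floridi's account semantic information is veridical, so IL validates Iap ⊃ p, which on a standard Kripke semantics corresponds to a reflexive accessibility relation; this is precisely why being misinformed needs a separate treatment, since an agent cannot be informed that p when p is false. A toy model checker illustrating the point (the model and functions here are illustrative only, not part of the draft):

def box(relation, valuation, world, atom):
    # '[]atom' is true at `world` iff `atom` holds at every world accessible from it.
    return all(valuation[v][atom] for (u, v) in relation if u == world)

def validates_T(worlds, relation, valuation, atom):
    # Check the schema []atom -> atom at every world of the model.
    return all((not box(relation, valuation, w, atom)) or valuation[w][atom] for w in worlds)

worlds = ["w1", "w2"]
valuation = {"w1": {"p": False}, "w2": {"p": True}}
reflexive = [("w1", "w1"), ("w2", "w2"), ("w1", "w2")]      # every world sees itself
non_reflexive = [("w1", "w2"), ("w2", "w2")]                # w1 does not see itself

print(validates_T(worlds, reflexive, valuation, "p"))       # True: veridicality in the style of Iap -> p holds
print(validates_T(worlds, non_reflexive, valuation, "p"))   # False: at w1, []p holds but p is false there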

A New Introduction to Modal Logic erratum

Just noticed a significant and potentially confusion-causing error regarding multiply modal logics in this nonetheless excellent text. On pages 217–218 of my 1996 edition, Hughes and Cresswell write:

For instance we might have a necessity operator L_{1}, say, which is stronger than L_{2} in the sense that L_{1} p \supset L_{2}p. The canonical model for such a system would obey the restriction that for all w, w' \in \text{W}, if w\text{R}_{1}w' then w\text{R}_{2}w'.

Now, that should be for all w, w' \in \text{W}, if w\text{R}_{2}w' then w\text{R}_{1}w'.
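To see why: in the canonical model, w\text{R}_{i}w' holds just in case \{p : L_{i}p \in w\} \subseteq w'. Given the theorem L_{1}p \supset L_{2}p, every maximal consistent set containing L_{1}p also contains L_{2}p, so

\{p : L_{1}p \in w\} \subseteq \{p : L_{2}p \in w\}, \quad\text{hence}\quad w\text{R}_{2}w' \;\Rightarrow\; \{p : L_{1}p \in w\} \subseteq w' \;\Rightarrow\; w\text{R}_{1}w'.

Intuitively, the stronger operator L_{1} quantifies over more accessible worlds, so its accessibility relation is the larger one: \text{R}_{2} \subseteq \text{R}_{1}, not the other way around.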