In the previous post, I showed how a truthlikeness measure of semantic information can be applied to give a basic quantitative measure of knowledge. That advantage set the truthlikeness approach apart from inverse-probabilistic approaches to quantifying semantic information.

In looking for a way to do something similar for quantitatively measuring beliefs, it occurred to me that things are the other way around: here it is the inverse-probabilistic approach that applies.

Once again, take a propositional logical space with 2 atoms, *p* and *q*. There are 4 possible states and each state is assigned an *a priori* logical probability of 1/4, as listed in the following truth table:

| State | p | q | Pr(State) |
|-------|---|---|-----------|
| w1 | T | T | 1/4 |
| w2 | T | F | 1/4 |
| w3 | F | T | 1/4 |
| w4 | F | F | 1/4 |

Say w1 is the actual state; *p* and *q* are both true.
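This setup can be sketched in a few lines of code. The following is a minimal illustration, not anything from the original posts: it represents each state as a pair of truth values, assigns the uniform a priori probabilities from the table above, and computes the standard inverse-probabilistic information measure, −log₂ Pr(s), for a statement s.

```python
from itertools import product
from math import log2

# The four states of a two-atom propositional space, each with
# a priori logical probability 1/4, as in the truth table above.
states = list(product([True, False], repeat=2))  # (p, q) pairs
prior = {w: 1 / 4 for w in states}

def pr(statement):
    """A priori probability of a statement: the total probability
    of the states at which it holds."""
    return sum(prob for w, prob in prior.items() if statement(*w))

def info_pr(statement):
    """Inverse-probabilistic information measure: -log2 Pr(statement)."""
    return -log2(pr(statement))

print(info_pr(lambda p, q: p and q))      # p & q      -> 2.0 bits
print(info_pr(lambda p, q: p and not q))  # p & ~q     -> 2.0 bits
print(info_pr(lambda p, q: p))            # p alone    -> 1.0 bit
```

Note that the measure is blind to which state is actual: any statement true at exactly one of the four states yields the same 2 bits.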

Let info() represent an information function, which, given a logical statement, returns a numerical measure of its information yield. Furthermore, let info_{Tr}() represent a truthlikeness version of such an information function and let info_{Pr}() represent an inverse-probabilistic-style version.

info_{Tr}(*p* & ~*q*) < info_{Tr}(*p* & *q*), yet info_{Pr}(*p* & ~*q*) = info_{Pr}(*p* & *q*). And belief of *p* & ~*q* is quantitatively the same as belief of *p* & *q* (B(*p* & ~*q*) = B(*p* & *q*)): although one belief is true and the other false, just as much is believed in both cases. So info_{Pr}() is a better reflection than info_{Tr}() of what is going on here.
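The contrast can be made concrete with a toy truthlikeness-style measure. The previous post's exact measure isn't reproduced here, so the sketch below uses a simple Tichý/Oddie-style stand-in: one minus the average normalized distance of a statement's states from the actual state w1. The point is only the inequality, which any reasonable truthlikeness measure should deliver: the false belief *p* & ~*q* scores lower than the true belief *p* & *q*, even though both carry the same inverse-probabilistic measure of 2 bits.

```python
from itertools import product

states = list(product([True, False], repeat=2))  # (p, q) pairs
actual = (True, True)  # w1: p and q both true

def distance(w):
    """Normalized Hamming distance of state w from the actual state."""
    return sum(a != b for a, b in zip(w, actual)) / len(actual)

def info_tr(statement):
    """Hypothetical truthlikeness-style measure: 1 minus the average
    distance-from-actuality over the states at which the statement holds.
    (A stand-in for the previous post's measure, not identical to it.)"""
    matching = [w for w in states if statement(*w)]
    avg = sum(distance(w) for w in matching) / len(matching)
    return 1 - avg

print(info_tr(lambda p, q: p and q))      # 1.0 -- true at w1 only
print(info_tr(lambda p, q: p and not q))  # 0.5 -- false, so penalized
```

So the truthlikeness-style measure separates the two conjunctions, while the inverse-probabilistic measure treats them alike, which is exactly what makes the latter the better match for quantifying belief.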

In fact, I think that info_{Pr}() measures something, but not semantic information. The belief represented by B(*p* & *q*) is belief of the semantic content *p* & *q*. Therefore I propose that info_{Pr}() be seen as a measure of *semantic content* (meaningful, well-formed data), rather than a measure of semantic *information* (meaningful, *truthful* well-formed data).
