In the previous post, I showed how a truthlikeness measure of semantic information could be applied to give a basic quantitative measure of knowledge. This advantage set the truthlikeness approach apart from inverse-probabilistic approaches to quantifying semantic information.
While looking for a way to do something similar for quantitatively measuring beliefs, it occurred to me that things are the other way around: there it is the inverse-probabilistic approach that should be applied.
Once again, take a propositional logical space with two atoms, p and q. There are four possible states, and each is assigned an a priori logical probability of 1/4, as listed in the following truth table:
| State | p | q | Pr(State) |
|-------|---|---|-----------|
| w1    | T | T | 1/4       |
| w2    | T | F | 1/4       |
| w3    | F | T | 1/4       |
| w4    | F | F | 1/4       |
Say w1 is the actual state; p and q are both true.
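For concreteness, here is a minimal Python sketch of this setup. The dictionary representation of states and the `pr` helper are my own illustrative choices, not anything from the original discussion:

```python
from itertools import product

# The four states of a space with atoms p and q, in the table's order.
states = {f"w{i + 1}": s for i, s in enumerate(product([True, False], repeat=2))}
# {'w1': (True, True), 'w2': (True, False), 'w3': (False, True), 'w4': (False, False)}

def pr(statement):
    """A priori logical probability: the share of the four equiprobable
    states at which the statement is true."""
    return sum(statement(p, q) for p, q in states.values()) / len(states)

print(pr(lambda p, q: p and q))      # 0.25  (p & q)
print(pr(lambda p, q: p and not q))  # 0.25  (p & ~q)
```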
Let info() represent an information function which, given a logical statement, returns a numerical measure of its information yield. Furthermore, let infoTr() represent the truthlikeness version of such a function and infoPr() the inverse-probabilistic version.
Since w1 is actual, p & q is true while p & ~q is false, so infoTr(p & ~q) < infoTr(p & q). But both statements are true at exactly one of the four states, so their probabilities are equal and infoPr(p & ~q) = infoPr(p & q). Yet belief of p & ~q is quantitatively the same as belief of p & q (B(p & ~q) = B(p & q)). Although one belief is true and the other false, just as much is believed in both cases. So infoPr() reflects what is going on here better than infoTr() does.
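The contrast can be checked with a short sketch. Here `info_pr` is one standard inverse-probabilistic measure, -log2 of a statement's probability, and `info_tr` is a toy truthlikeness score: the fraction of atoms a statement's sole model gets right about the actual world. Both are illustrative assumptions; the exact measures from the previous post may differ.

```python
import math

actual = (True, True)  # w1: p and q both true

def info_pr(prob):
    """Inverse-probabilistic information yield: -log2 of the statement's
    probability (2 bits for any statement true at exactly one state)."""
    return -math.log2(prob)

def info_tr(model):
    """Toy truthlikeness score for a statement true at exactly one state:
    the fraction of atoms that state gets right about the actual world."""
    return sum(a == b for a, b in zip(model, actual)) / len(actual)

# p & q is true only at w1 = (T, T); p & ~q only at w2 = (T, F).
print(info_tr((True, False)), "<", info_tr((True, True)))  # 0.5 < 1.0
print(info_pr(1 / 4), "==", info_pr(1 / 4))                # 2.0 == 2.0
```

On these toy measures the false belief carries less truthlikeness information but exactly as much inverse-probabilistic content, which is the asymmetry the argument turns on.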
In fact, I think that infoPr() does measure something, just not semantic information. The belief represented by B(p & q) is belief of the semantic content p & q. I therefore propose that infoPr() be seen as a measure of semantic content (meaningful, well-formed data) rather than of semantic information (meaningful, truthful, well-formed data).