Part of my current research has focused on the quantification of semantic information: ways to measure the semantic information yield of logical statements. Instead of the somewhat standard Bar-Hillel/Carnap/Hintikka inverse-probabilistic approach, I have opted to quantify semantic information using the notion of truthlikeness. The former is associated with a Theory of Weakly Semantic Information (TWSI), 'weakly' because truth values play no role in it. The latter is associated with a Theory of Strongly Semantic Information (TSSI), according to which information encapsulates truth. TSSI is associated more generally with the veridicality thesis: that semantic information is meaningful, well-formed data that is also true. See On Quantifying Semantic Information for more on this.
Adopting a veridicality requirement has a number of advantages over a theory of semantic information in which information does not imply truth and truth values merely supervene on information. For one, it provides a foundation for attempts to define knowledge in terms of information, since both encapsulate truth. Further to this, it occurred to me that measuring semantic information in terms of truthlikeness rather than inverse probability leads to a basic way in which knowledge can be quantitatively measured.
Take a propositional logical space with 2 atoms, p and q. There are 4 possible states and each state is assigned an a priori logical probability of 1/4, as listed in the following truth table:
| State | p | q | Pr(State) |
|-------|---|---|-----------|
| w1    | T | T | 1/4       |
| w2    | T | F | 1/4       |
| w3    | F | T | 1/4       |
| w4    | F | F | 1/4       |
Say w1 is the actual state; p and q are both true.
Let info() represent an information function which, given a logical statement, returns a numerical measure of its information yield. Furthermore, let infoTr() represent a truthlikeness version of such an information function and infoPr() an inverse-probabilistic version.
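As a minimal sketch of how these two functions might be computed over this toy space, here is some Python, assuming the Bar-Hillel/Carnap content measure cont(A) = 1 - Pr(A) for infoPr() and a Tichý/Oddie-style average-distance truthlikeness measure for infoTr(). These particular measures are illustrative choices on my part; other variants of either approach would also work.

```python
from itertools import product

# The four states of the 2-atom space, each a (p, q) truth-value pair:
# (T,T) = w1, (T,F) = w2, (F,T) = w3, (F,F) = w4.
STATES = list(product([True, False], repeat=2))
ACTUAL = (True, True)  # w1 is the actual state

def models(statement):
    """States at which the statement (a Boolean function of p, q) is true."""
    return [w for w in STATES if statement(*w)]

def info_pr(statement):
    """Inverse-probabilistic measure: cont(A) = 1 - Pr(A), where
    Pr(A) is the proportion of states at which A is true."""
    return 1 - len(models(statement)) / len(STATES)

def distance(w):
    """Normalized Hamming distance from a state to the actual state:
    the proportion of atoms on which the two states disagree."""
    return sum(a != b for a, b in zip(w, ACTUAL)) / len(ACTUAL)

def info_tr(statement):
    """Truthlikeness measure: 1 minus the average distance of the
    statement's models from the actual state."""
    ms = models(statement)
    return 1 - sum(distance(w) for w in ms) / len(ms)
```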
For both infoTr() and infoPr(), it is the case that info(p & q) > info(p v q). Under infoPr() this holds because Pr(p & q) = 1/4 < Pr(p v q) = 3/4; under infoTr() it holds because the sole model of p & q is the actual state w1, whereas p v q also admits the more distant states w2 and w3. This result grounds and makes mathematically precise the reasonable claim that knowledge of p & q is greater than knowledge of p v q: K(p & q) > K(p v q).
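Running the sketch above on this first comparison (the lambdas simply encode the statements as Boolean functions):

```python
p_and_q = lambda p, q: p and q
p_or_q  = lambda p, q: p or q

# Inverse-probability measure: Pr(p & q) = 1/4, Pr(p v q) = 3/4
print(info_pr(p_and_q), info_pr(p_or_q))  # 0.75 0.25
# Truthlikeness measure: the sole model of p & q is the actual state w1
print(info_tr(p_and_q), info_tr(p_or_q))  # 1.0 0.666...
```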
It is also fair to say that K(p v q) > K(p v ~q): given that w1 is actual, p v q excludes only the maximally false state w4, whereas p v ~q admits it. This agrees with infoTr(), according to which the information yield of p v q is greater than the information yield of p v ~q: infoTr(p v q) > infoTr(p v ~q). On the other hand, infoPr(p v q) = infoPr(p v ~q), since both disjunctions have probability 3/4; this corresponds to the unwanted result K(p v q) = K(p v ~q).
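Continuing the sketch on the second comparison, where the two measures come apart:

```python
p_or_not_q = lambda p, q: p or not q

# infoPr cannot distinguish the two disjunctions: both have Pr = 3/4
print(info_pr(p_or_q), info_pr(p_or_not_q))  # 0.25 0.25
# infoTr can: p v ~q admits the maximally distant state w4
print(info_tr(p_or_q), info_tr(p_or_not_q))  # 0.666... 0.5
```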
In this way, quantitative accounts of information based on the notion of truthlikeness lead to quantitative accounts of knowledge.