## Archive for May, 2011

### Towards a Theory of Semantic Communication

Tuesday, May 31st, 2011

### Confirmation Measures and Transmitted Information

Monday, May 30th, 2011

The following Bayesian confirmation measure is associated with John Maynard Keynes, having appeared in his A Treatise on Probability (1921). The degree to which evidence e confirms hypothesis h is given as:

$R(h,e) = \ln\left(\frac{p(h \mid e)}{p(h)}\right)$

Interestingly, this is strongly reminiscent of a subsequent measure found in Shannon information theory. In the philosophy literature, the same formula appears in Dretske’s formulation of information transmission, derived from Shannon’s work, as well as in a measure of transmitted information given by Hintikka.
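To make the parallel concrete, here is a minimal numerical sketch (the probabilities are toy values of my own choosing): Keynes’s $R(h,e)$ is algebraically identical to what information theorists call pointwise mutual information, since $\ln(p(h \mid e)/p(h)) = \ln(p(h,e)/(p(h)\,p(e)))$.

```python
import math

def keynes_confirmation(p_h_given_e: float, p_h: float) -> float:
    """Keynes's log-ratio confirmation measure: R(h, e) = ln(p(h|e) / p(h))."""
    return math.log(p_h_given_e / p_h)

def pmi(p_joint: float, p_h: float, p_e: float) -> float:
    """Pointwise mutual information: ln(p(h, e) / (p(h) * p(e)))."""
    return math.log(p_joint / (p_h * p_e))

# Toy joint distribution over hypothesis h and evidence e.
p_h, p_e, p_joint = 0.2, 0.5, 0.15
p_h_given_e = p_joint / p_e  # = 0.3 by the definition of conditional probability

# The two quantities coincide term for term.
r = keynes_confirmation(p_h_given_e, p_h)  # ln(0.3 / 0.2) = ln(1.5)
assert abs(r - pmi(p_joint, p_h, p_e)) < 1e-12
```

Positive values mean e raises the probability of h (confirmation), negative values mean it lowers it, and zero means probabilistic independence.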

### Entropy Is Universal Rule of Language

Wednesday, May 18th, 2011

### What Does CONT() Measure?

Sunday, May 15th, 2011

I have been reading over Luciano Floridi’s recently released The Philosophy of Information. Chapter 5 is basically his paper Outline of a Theory of Strongly Semantic Information (TSSI).

Under such a theory, the Bar-Hillel/Carnap CONT(s) measure, associated with a Theory of Weakly Semantic Information (TWSI), does not indicate the amount of informativeness of a statement s. Even so, “given the usefulness of TWSI, CONT(s) should probably be salvaged, if possible”.

If so, then what does CONT(s) really purport to indicate? Floridi writes that “[CONT(s)] does not indicate the quantity of semantic information but, more precisely, the quantity of data in [s]” (p. 128).

I think that I agree with this point, but would qualify that it indicates the quantity of meaningful data (i.e. semantic content).

In an earlier post, I suggested that CONT() can “be seen as a measure of semantic content (meaningful, well-formed data), rather than a measure of semantic information”.
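A minimal sketch may help here, assuming the standard Bar-Hillel/Carnap definition cont(s) = 1 − m(s), where m(s) is the fraction of state descriptions (truth assignments over the atomic propositions) in which s holds. The atoms and test sentences below are illustrative choices of my own:

```python
from itertools import product

def cont(sentence, atoms):
    """Bar-Hillel/Carnap content measure: cont(s) = 1 - m(s), where m(s) is
    the fraction of state descriptions (truth assignments) making s true."""
    worlds = list(product([False, True], repeat=len(atoms)))
    true_in = sum(1 for w in worlds if sentence(dict(zip(atoms, w))))
    return 1 - true_in / len(worlds)

atoms = ["p", "q"]

# A tautology excludes no state descriptions: zero content.
assert cont(lambda w: w["p"] or not w["p"], atoms) == 0.0

# A contradiction excludes every state description: maximal content --
# the well-known Bar-Hillel/Carnap oddity that motivates Floridi's TSSI.
assert cont(lambda w: w["p"] and not w["p"], atoms) == 1.0

# "p and q" holds in 1 of 4 state descriptions: cont = 0.75.
assert cont(lambda w: w["p"] and w["q"], atoms) == 0.75
```

On this reading, CONT(s) tracks how much a statement rules out, which fits the idea that it measures the quantity of (meaningful) data carried by s rather than its informativeness in Floridi’s strongly semantic sense.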

### A Gettierised (Russellised) Historical Fact?

Wednesday, May 11th, 2011

I finally dug up my copy of Russell’s Human Knowledge: Its Scope and Limits and located the passage in which he gives the Gettier-like broken clock example (15 years before Gettier’s paper):

> It is very easy to give examples of true beliefs that are not knowledge. There is the man who looks at a clock which is not going, though he thinks it is, and who happens to look at it at the moment when it is right; this man acquires a true belief as to the time of day, but cannot be said to have knowledge.

This passage is found in Section ‘D. Knowledge’, at the end of the Chapter ‘Fact, Belief, Truth, and Knowledge’.

Most would agree that this is not a case of knowledge. But how far can we carry this? Take the following example: A famous historical figure (X) dies, and a medical staff member in attendance records X’s exact time of death using a clock hanging on the wall. This clock is also broken but happens to be stuck on the actual time, say, 6pm. Now, although the staff member records a fact (a true proposition), they do not actually know that X died at 6pm. Furthermore, by standard Dretskean information-theoretic epistemology, neither are they informed by the clock that X died at 6pm.

If the staff member neither knows nor is informed of the time of death, is what they record a piece of information? Can a recorded fact be information or knowledge if the source of that record neither was informed of nor knew the proposition in question? If their record is used in a biographical book on X, can someone who reads this book 100 years later come to know that X died at 6pm?