Some Offhand Commentary on Fred Dretske’s Knowledge and the Flow of Information

Fred Dretske’s Knowledge and the Flow of Information is ultimately an attempt to use the notion of information to explicate knowledge. As part of this enterprise, several philosophically interesting issues are tackled. The first of these is the development of a semantic theory of information. After establishing a connection between information and knowledge, Dretske goes on to apply his ideas to key philosophical areas such as perception and intentional content or meaning. Here I take a look at his account of information.

Dretske expresses a theoretical definition of a signal’s (structure’s) informational content in the following way:


A signal r carries the information that s is F = The conditional probability of s's being F, given r (and k), is 1 (but, given k alone, less than 1)

Here k is a variable representing what an agent already knows; it captures how the agent's background knowledge can determine the information that a signal carries to that agent. For example, the conditional probability of A given A v B is not 1. However, if an agent already knows that ~B, then the signal A v B carries the information that A.
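To make the toy example concrete, here is a minimal sketch in Python, assuming (purely for illustration) a uniform distribution over the four truth assignments of A and B; nothing in Dretske's definition fixes this particular distribution.

```python
from itertools import product

# All truth assignments (A, B), weighted uniformly -- an assumption made only for illustration.
worlds = list(product([True, False], repeat=2))

def conditional_probability(target, evidence):
    """P(target | evidence) under the uniform distribution over worlds."""
    relevant = [w for w in worlds if evidence(w)]
    return sum(1 for w in relevant if target(w)) / len(relevant)

A = lambda w: w[0]                                           # A is true
A_or_B = lambda w: w[0] or w[1]                              # the signal A v B
A_or_B_given_not_B = lambda w: (w[0] or w[1]) and not w[1]   # the signal plus k = ~B

print(conditional_probability(A, A_or_B))             # 0.666...: A v B alone does not carry A
print(conditional_probability(A, A_or_B_given_not_B)) # 1.0: with k = ~B, A v B carries the information that A
```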

As an example of how the core notion of probability works here, take the information-bearing signals of a clock. A correctly functioning clock gives information about the time. In accordance with the above definition, if one looks at a working clock and reads 6:30pm, then the conditional probability that the time is 6:30pm given the clock's signal of 6:30pm is 1. If the clock were malfunctioning, things would be different. Say that the clock battery goes flat at 6:30pm. The next day someone happens to look at the clock at 6:30pm. Even though the time indicated by the clock happens to correspond to the actual time, the clock does not give the information that the time is 6:30pm because the conditional probability is less than 1. Using a uniform probability distribution and the fact that there are 1440 minutes in a day, the probability that the time is 6:30pm given the non-functioning clock's signal of 6:30pm is 1/1440.
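The arithmetic for the stopped clock can be spelled out in a few lines; the uniform distribution over the 1440 minutes of the day is an assumption made only for this illustration.

```python
from fractions import Fraction

MINUTES_PER_DAY = 24 * 60  # 1440

# Working clock: its reading is reliably connected to the actual time, so the
# conditional probability that it is 6:30pm, given a reading of 6:30pm, is 1.
p_working = Fraction(1, 1)

# Stopped clock: its reading of 6:30pm is independent of the actual time. Assuming
# a uniform distribution over the minutes of the day, the probability that it
# really is 6:30pm given that reading is 1/1440.
p_stopped = Fraction(1, MINUTES_PER_DAY)

print(p_working)  # 1
print(p_stopped)  # 1/1440 -- less than 1, so no information that the time is 6:30pm
```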

There is a bit of space to tinker with Dretske's definition of information. What is the nature of the concept of probability that figures so centrally in his definition? Dretske is somewhat agnostic and suggests that an information-theoretic epistemology is compatible with different interpretations of probability. One can interpret it as degree of rational expectation (subjective), or (objectively) as limiting frequency or propensity. In developing his information-based account of knowledge he assumes, without argument, an objective interpretation. But he states that though 'there are strong reasons for preferring this approach', 'the probabilities can be given a subjective interpretation with little or no change in the formal machinery'. What would change are the epistemological consequences.

So given the adoption of an objective interpretation of probability, what can be made of Dretske's definition? Although information is something that is out there in the world, an objective phenomenon, the information carried by a signal and our treatment of the term 'information' need to incorporate in certain ways the epistemic states and backgrounds of the agents that exploit the information. Dretske does this in one way (he 'makes a minor concession to the way we think and talk about information', as he puts it) by having the variable k (representing the agent's background knowledge) as part of the definition.

As mentioned earlier, this relativisation does not undermine the essential objectivity of the information, and a simple division into the terms 'relative information' and 'absolute information' can provide conceptual clarification here. To emphasise the essential objectivity of information one could even substitute the reference to k with something else. For example, consider the shells and peanut scenario: a peanut is hidden under one of the shells, and the shells are uncovered one at a time. One could say that it is not just the uncovering of shell 2 (plus k), subsequent to shell 1, which provides the information that the peanut is under shell 3. Rather, it is the uncovering of shell 1 and shell 2 that provides this information. Or it is the uncovering of shell 2, given the uncovering of shell 1, which provides this information. So this collection of signals would be treated as one entity, and it is this time-spanning entity that is the complete signal. This is a way of thinking about things that eschews reference to an agent's background knowledge.

Aside from the probability requirements linking source and signal in Dretske's definition, the requirement it places on cases in which the signal is absent (the 'given k alone, less than 1' clause) bears consideration. Something like the following modified definition might be a good idea:


A signal r carries the information that s is F = The conditional probability of s's being F, given r (and k), is 1 (but, given k alone, s is F cannot be derived in the agent's information base)

As can be seen, for the most part this modified definition and Dretske's original are the same. However, the difference in how they treat cases of an isolated k in the absence of a signal is significant. The modification provides greater flexibility and the means to avoid some problematic aspects of Dretske's account.

It is an interesting aspect of Dretske’s account that logical, mathematical or analytic truths generate no information. For example, the structure A v ~A would carry no information because the conditional probability of A v ~A given k alone is 1; there is no need for a signal to make this probability 1. It is also interesting that nomic necessities, such as the identity H2O = Water, carry no information for similar reasons. Whilst the treatment of logical, mathematical and analytic truths as containing no information is justifiable, the treatment of nomic necessities as containing no information is particularly problematic.

If one comes across a signal which indicates that a particular morning star they observe (i.e. Venus) is the same as a particular evening star they observe, then it is fair to say that the signal they received was informative. With the modified definition, the informativeness of the signal 'Morning Star' = 'Evening Star' can be accommodated. This is because, given k alone, 'Morning Star' = 'Evening Star' is not necessarily derivable in an agent's informational base. No system of logic underlying a standard agent's informational base is going to have 'Morning Star' = 'Evening Star' as an axiom or derivable truth.
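One rough way to picture the 'derivable in the agent's information base' clause is to model the base as a set of statements closed under some simple rules. The representation below (atomic statements plus if-then rules closed under forward chaining) is a hypothetical simplification of my own, not anything Dretske offers.

```python
def derivable(statement, base, rules):
    """Close `base` under (premises, conclusion) rules and test membership."""
    known = set(base)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return statement in known

# A standard agent's base contains observational statements but no route to the identity.
standard_base = {"a bright star is seen at dawn", "a bright star is seen at dusk"}
standard_rules = []  # no rule delivers the identity as a theorem

# Given k alone the identity is not derivable, so a signal establishing it counts as
# informative under the modified definition, despite the identity being necessary.
print(derivable("Morning Star = Evening Star", standard_base, standard_rules))  # False
```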

In general this modified definition allows for greater flexibility. By introducing an underlying logical system into the picture, the information carried by a signal and any attributions of vacuity to it can be determined by the logical system used. A simple yet good example would be a signal p v ~p. According to Dretske’s account, since the probability of p v ~p is 1 given k alone, such a signal would carry no information. Similarly, if the logical system of an agent’s informational base is classical logic, then the signal p v ~p carries no information. However, if a system of classical logic were to be replaced by a system of intuitionistic logic, then the signal p v ~p could be informative.
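For a classical agent the vacuity of p v ~p can even be checked mechanically by truth tables; the sketch below assumes a propositional signal and is only meant to illustrate how the underlying logic does the work.

```python
from itertools import product

def is_classical_tautology(formula, variables):
    """True if `formula` holds under every classical truth assignment to `variables`."""
    return all(formula(dict(zip(variables, values)))
               for values in product([True, False], repeat=len(variables)))

excluded_middle = lambda v: v["p"] or not v["p"]

# For an agent whose information base is closed under classical logic, p v ~p is
# derivable from k alone, so under the modified definition it carries no information.
print(is_classical_tautology(excluded_middle, ["p"]))  # True

# No analogous truth-table argument is available intuitionistically: p v ~p is not an
# intuitionistic theorem, so for such an agent the same signal could be informative.
```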

Another possible modification to the definition is worth considering, for the following reason. A signal of smoke (S) carries the information of fire (F) to an agent only if the agent has the information that smoke implies fire (S -> F). The previous definitions do not discriminate between cases where the agent does and does not have S -> F as a piece of information. Of course, objective environmental information, understood as regularity within a system, exists independently of and prior to any informee. But the information is only informative if an agent is aware of what Barwise and Seligman call constraints.

Something like the following modification would look after this aspect:


A signal r carries the information that s is F = The conditional probability of s's being F, given r (and k), is 1, and s is F is inferable using r and k (but, given k alone, s is F cannot be derived in the agent's information base)

Once again, one could perhaps technically substitute this new reference to k here with a collection of time-spanning signals. It is not just the signal of smoke in this instance plus the knowledge of the constraint S -> F that carries the information of fire. It is this particular signal plus the collection of past signals which led to knowledge of the constraint S -> F. Such signals could have been the utterance by someone that smoke is caused by fire, or a repeated sequence of fire and smoke occurrences which led to the knowledge that fire causes smoke.
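Putting the pieces together, here is a minimal sketch of the final definition's two-part test: 'fire' must be inferable using the signal together with k, but not from k alone. Representing the constraint S -> F as a simple if-then rule is again a hypothetical simplification.

```python
def infers(goal, premises, constraints):
    """Forward-chain `premises` through (antecedent, consequent) constraints."""
    known = set(premises)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in constraints:
            if antecedent in known and consequent not in known:
                known.add(consequent)
                changed = True
    return goal in known

signal = {"smoke"}
k_with_constraint = [("smoke", "fire")]  # the agent has learned that smoke indicates fire
k_without_constraint = []                # the agent lacks the constraint

print(infers("fire", signal, k_with_constraint))     # True: r plus k make fire inferable
print(infers("fire", set(), k_with_constraint))      # False: k alone does not yield fire
print(infers("fire", signal, k_without_constraint))  # False: without the constraint, the signal carries no information of fire to this agent
```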

6 thoughts on “Some Offhand Commentary on Fred Dretske’s Knowledge and the Flow of Information”

  1. Hi there, I just came across your blog whilst searching on google for an answer to how Dretske saw the relationship between information and meaning. Your blog looks very interesting, but a lot of it was over my head, as I’m a complete beginner on these issues. So I was just wondering if you have any thoughts on the following two things:

    1) How does Dretske define meaningful information content?
    2) Is this something that always requires an interpreter (conscious agent, or otherwise)?

    I got the impression from your post that under Dretske’s concept of information, *an agent* (rather than the information system) could determine whether a signal carried meaningful information or not? So, for example, if an agent already knew that ‘x’ correlates with ‘y’, then seeing an instance of ‘x’ would carry no information content. This suggests that the information content was meaningless. Is that right?

    Any thoughts on this would be grand! (And nice blog!)

  2. 1) There are two senses of the word meaning. In the first sense, for example ‘smoke means fire’, the term ‘meaning’ is synonymous with ‘indicates’ or ‘is a sign of’, so ‘smoke indicates fire’. Paul Grice dubbed this informational kind of meaning natural meaning.

    This contrasts with a more general semantic sense of meaning, where, for example, the word ‘fire’ means fire. If a semantic agent receives the information that there is smoke, then they understand that this indicates there is fire. They also have semantic representations for these concepts and meaningfully interpret what is involved.

    2) Check out sections 1.7 (Genetic neutrality) and 1.7.1 (Environmental information) in the Semantic Conceptions of Information entry at the Stanford Encyclopedia of Philosophy.

    As for your last point, I think that if an agent already knows that x indicates y, then a signal x would carry no information about y to the agent. Dretske adds the k variable into his definition of information flow to accommodate the way we think about information; one cannot be newly informed about something that they are already informed about. But this does not mean that the information content was meaningless. x still indicates y and x is still semantically meaningful.

    Here is a useful article written by Dretske.
