Jason Bridges

University of Chicago

Phil 33401, A02—Lecture 7 notes

McGinn, “The Structure of Content”


This is a very abstract discussion, and so hard to get a grip on.  Also covers much ground.  I won’t go over all of it.  Just stuff directly relevant to our issue. 


I.  Options if we accept externalism and causal localism

Recall the worry: if we are content externalists we must be content epiphenomenalists.  That is, if one’s having a belief with a particular content is not an internal property of one, it follows that the belief’s having that content cannot play any role in explaining one’s actions.  Just as the value of a coin—an external property of that coin—is not explanatorily relevant to explaining why putting the coin in a vending machine causes it to produce a coke, so the content of a belief is not relevant to the explanation of the belief’s causing what it does.


Why?  Because:

            Argument from content externalism to content epiphenomenalism:

A. Content externalism: That a propositional-attitude state has a particular content is an external property of that state.

B. Localism about causal explanation: Only internal properties of an item can play a role in explaining why it causes what it does.

Therefore C. Content epiphenomenalism: That a propositional-attitude state causes the behavior that it does is not explained by its content.


Some quick clarificatory points:

1.      Note that this formulation makes explicit a shift that happened in the background last time when we discussed the use to which Dretske puts his vending machine analogy.  We have shifted from speaking of a person’s having a propositional attitude with a particular content as being an external property of that person to speaking of a propositional-attitude state’s having a particular content as being an external property of that state.  The works we are now reading take for granted that content externalism can appropriately be formulated in the latter way, and for the moment, in trying to understand this work, we will need to take that for granted too.  But as we’ll see, this assumption begs an important question.

2.      Note that externalists might not claim that all content possession is a matter of external properties; in that case we might have only a selective content epiphenomenalism.

3.      Is accepting content epiphenomenalism tantamount to accepting that beliefs and other propositional attitudes aren’t causes of behavior?  No.  Recall the plausible view that objects and events are causes and that properties causally explain, and recall how it might be thought applicable to the case of beliefs and other propositional attitudes (conceived as object-like ‘states’).
Granting this, we can make a distinction.  It’s one thing to say that my belief that the Giants lost caused my weeping.  It’s another to say that it caused my weeping because it had the content that the Giants lost.  We can deny the latter and still endorse the former.


But then why does my belief cause the weeping?  We don’t want to say that this is just a miracle, that there’s no explanation for it at all.


Suppose we accept premises A and B.  There are three options we might take.


To spell them out, we need another piece of terminology.  Recall that a content externalist need only hold that some beliefs have an externally constituted content.  Thus we may distinguish:

            A content is wide iff a propositional-attitude state’s having this content is an external property of that state.

            A content is narrow iff a propositional-attitude state’s having this content is an internal property of that state.


The first option was mentioned last time.  One might hold that a belief has a property other than its content, which is internal and which explains the belief’s causing what it does.


            Options if we accept premises A and B:

      1. A propositional attitude has both a content and an internal property of some specified sort, and this internal property is what explains behavior.


We might consider this, as McGinn does, on the analogy of the distinction between the syntax and semantics of a sentence.


Suppose I make a machine that can perform simple tasks and has a voice-recognition capacity.  Say that if you say, “Turn on the lights,” it turns on the lights in the apartment, and so forth.

Now, the machine is just a machine, and a fairly simple one at that.  It’s not a thinker.  It has no mind.  So there’s no literal sense in which it understands the sentence you utter.  It would be a bad explanation of the machine’s subsequent behavior to say that it hears the sentence, understands what it means in English and proceeds to perform the ordered calculation.

Rather, the machine is so programmed that it responds to certain noises in certain ways.  A complete explanation of the machine’s behavior will explain how it converts sound waves to certain electric signals and then is so wired as to do such-and-such given that input.  What the sentence means in English is no part of the explanation of this process.  What the sentence means may explain why we people, who understand English, programmed the machine in such and such a way.  But given that the machine is programmed as it is, the words’ having the meanings that they do is no part of the explanation of why those noises produce that behavior.  If tomorrow English undergoes a radical change and “Lights on” no longer means lights on, the machine, if it’s left as it is, will still respond to those noises in the way it always has.


Now, we may think of the shape of the expressions in a sentence and of the order of these expressions as broadly speaking the syntax of that sentence (Not quite accurate, but doesn’t matter).  Analogously we may think of the order of sounds in a spoken sentence along with those sounds’  ‘acoustic shape’ (if you will), as the syntax of the spoken sentence.  We’ve already noted that to speak of the semantics of something is to talk about its meaning.

Hence we may put the point we’ve just arrived at as follows (a la Dennett).  Computers are syntactic engines, not semantic engines.  They’re not sensitive to the meaning of an input, just to its syntax.  And to say they’re not sensitive to the meaning of an input is to say that the meaning of the input plays no role in the explanation of what behavior the input causes on the part of the computer.

Of course, we program computers so that they mimic the behavior of semantic engines.  That is, we program them to turn on the lights when we say something in English that means turn on the lights.  And so if we know a computer is well-programmed and we know the meaning of the input, we can predict its behavior.  That’s how those of us who do not know the details of a computer’s software and hardware can still use them.

We can do this because semantics correlates with syntax.  That is, a sentence’s having a certain shape, say “turn on the lights” correlates with its having a certain meaning in English, namely, turn on the lights.

But this is just a case of what should by now be familiar from Dretske, of the fact that knowing something has a certain property can enable us to predict what’s going to happen without that property explaining what’s going to happen.  That’s because there’s a reliable correlation (what Dretske calls weak supervenience) between the property in question and the property that really explains what’s going on.  Just as the value of a coin correlates with its size and shape, so the meaning of a sentence correlates with its shape.
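The contrast between a syntactic and a semantic engine can be put in a toy Python sketch (the command strings and responses here are invented purely for illustration):

```python
# A toy "syntactic engine": it is sensitive only to the shape of its
# input string, never to what that string means in English.

RESPONSES = {
    "turn on the lights": "lights now on",
    "turn off the lights": "lights now off",
}

def respond(utterance: str) -> str:
    # Lookup by string shape alone; nothing here represents meaning.
    return RESPONSES.get(utterance, "no response")

print(respond("turn on the lights"))  # lights now on
# If English changed overnight so that this string no longer meant
# *turn on the lights*, the machine, left as it is, would respond
# exactly as before: the lookup depends only on syntax.
print(respond("illuminate"))          # no response
```

We can predict the machine’s behavior from the English meaning of our commands only because, by design, meaning correlates with string shape, which is what actually does the explanatory work inside the program.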


So if we accept content epiphenomenalism, then we aren’t semantic engines either.


Many cognitive scientists and philosophers take this particular analogy very seriously.  Suppose we think of having a belief as a matter of having a sentence in one’s head.  So if I believe that the Giants lost, there’s a sentence in my head that means the Giants lost.  It needn’t mean that in English; it might mean it in some special mental language.  Now if we accept content externalism, we accept that that internal sentence’s having the meaning that it does is an external fact, a matter, say, of various causal relationships between my brain and the environment.

And so if we accept Dretske’s claim that causation is local, which amounts to the denial that external properties of things play a role in explaining what these things cause, we can’t say that the content plays any role in explaining whatever that belief causes.

But the syntax of the internal sentence is a local, a non-external property of it.  So it’s open to hold that the sentence’s having the syntax that it does explains why the belief causes what it does.

And since semantics correlates reliably with syntax, knowing the meaning of the internal sentence—which on this view is the content of the belief—might enable one to predict the behavior of the person who has that belief, even if it can’t explain it.


Jerry Fodor has a view like this.  One question that immediately arises is what it means to talk about sentences with syntax inside our heads.  It’s not as if there literally are sentence-shaped things popping around inside the brain.  Fodor has a little story about how to think of syntax as neurophysiologically implemented.


Another view sees the explanatorily relevant internal property not as syntactic but as straightforwardly neurophysiological.


However spelled out, this kind of view has the consequence that we’re not semantic engines.  So option 1 involves accepting the argument’s conclusion: content epiphenomenalism.


II. Content as constitutively duplex

But suppose we want to accept the premises but wish to deny the conclusion.  Is that possible?  Can we deny that the inference from A and B to C is valid?


Yes.  One option is to endorse the following view:

2. If a propositional attitude has a wide content, it also has a narrow content.


This is a popular view, but we won’t discuss it.  Instead we’ll look at McGinn’s closely related view:

3. Content as constitutively duplex.  A propositional attitude has a single content, but that content has wide and narrow components.


This enables McGinn to accept externalism and causal localism but reject content epiphenomenalism.  We don’t need some non-semantic feature, like syntax, to explain why a propositional attitude causes what it does.  Having the content that p is an external property of a belief.  But that is true only in virtue of one of the two components of the content.  Having the content that p is also a causally relevant property of that belief.  This is so in virtue of the other component of that content.


What does this mean?  McGinn’s basic thought is that the ordinary idea of thought-content is an amalgam of two different things: it is on the one hand the idea of that in virtue of which a belief or thought is true or false, and on the other it is that aspect of a thought or belief that explains behavior.

Both of these ideas are familiar to us.  We discussed the relationship between content and truth-value back at the beginning.  A proposition, recall, is something that is true or false.  California is not a proposition.  But what is expressed by the whole sentence, “California is a big state,” is.  We spoke about how propositional attitudes are so-called because they are attitudes to propositions, how speaking of the propositional content of a propositional attitude was just another way of speaking about the proposition it was an attitude toward, and how talk of the content of a propositional attitude is just short for talk of its propositional content.

When McGinn speaks of the idea of content that is associated with truth-conditions, that’s what he’s talking about.  The content of a belief in the sense we’ve just been discussing determines the condition under which it is true.  My belief that CA is a big state is true under the condition that CA is a big state.  Sometimes people just speak of truth-conditional content instead of propositional content.


And the idea that the content of a belief or other propositional attitude is what explains why it causes the actions it does we have seen to be the basic assumption of rational psychology.  People do what they do, according to rational psychology, because of what they believe, desire and so forth, and what one believes is, of course, the content of one’s belief.


Now, McGinn grants that to accommodate the truth-evaluable (propositional) character of content, we need to assign a constitutive role to relationships to the external world.  But propositional content can be understood as just one component of content.  There might be another component of content, whose character is not externally constituted, and which might then be thought to be the component of content in virtue of which citing content can help explain behavior.

Here’s a bad analogy.  A joke has both a setup and a punch line.  The former prepares one for the surprise, the latter provides the surprise.  Now someone who was very confused about jokes might think, how could the same one thing both prepare for and provide a surprise?  Impossible.  The answer of course, is not that jokes don’t do both of these things—they do—but that different aspects of them perform the different tasks.  Similarly for McGinn, content is that which is true or false, and content is that which explains behavior, but different aspects, different components of content, do the two things.  (pp.210-211.)


McGinn’s account of the second of the two components of content is, as he acknowledges, very brief and vague.  I’ll give an even briefer and more oversimplified account of the kind of thing he has in mind.


Suppose the following is so about a person Bob.  He is caused to believe that a spider is present whenever he has a spider-like visual experience, and whenever he has that belief he is caused to flee.  Let’s say this is a complete description of the role of the belief that a spider is present in Bob’s mental life.  (He’s a simple guy).


So we can say:

Two components of content of Bob’s belief:

1.      Propositional content: that there’s a spider here.

2.      Causal role: caused by having a spider-like visual experience, causes fleeing


Now what McGinn wants to say is that the belief doesn’t have the causal role it does in virtue of its propositional content, but rather that its having this causal role simply constitutes one distinct component of its content.


The relevant idea of a causal role comes from functionalism, with which many of you are familiar.

Briefly: what makes something a carburetor?  Is it that it has a particular size or shape?  No. Carburetors can come in a variety of sizes and shapes and materials.

What it is to be a carburetor is nothing more or less than playing a particular causal role in a car engine.  Something is a carburetor, whatever its shape and what have you, so long as it mixes air and gas and sends it to some other chamber within which the spark plug ignites the mixture and causes the piston to move, etc.

Being a carburetor is what is called a functional property—a property that an object has in virtue of playing a certain causal role in a larger system.


McGinn’s idea is that an inner item’s playing a certain causal role can simply constitute one component of that item’s content.


Now, clearly Bob’s belief’s having this causal role is relevant to the explanation of why Bob flees when he has the belief.  So this kind of content is explanatorily relevant.

How is this consistent with localism about causation?  In fact, having a certain causal role in a system is, strictly speaking, an external property of an item.  But the thought is that the belief’s having this causal role can supervene wholly on local properties of Bob’s inner machinery.


Imagine Twin-Bob, on a planet where everything’s the same but instead of spiders there are schmiders, which appear strikingly similar to spiders but are a different species...

Twin-Bob’s belief will have a different propositional content than Bob’s belief.  But it can be understood as having the same causal role.  For spider-like visual experiences are identical to schmider-like visual experiences, and fleeing on Twin-Earth is the same as fleeing here at home.


To bring the point out another way.  Suppose we can explain the having of a spider-like visual experience in terms of certain patterns of stimulation to the eye.  And suppose we can explain fleeing in terms of certain motions of the legs.  Then both the input and output of the causal role are explained wholly in terms of internal properties of Bob.  And so surely the belief’s having this causal role itself supervenes on internal properties of Bob.
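The Bob/Twin-Bob point can be put in a small Python sketch (the representations of contents and causal roles below are invented for illustration; a causal role is modeled simply as a pair of internally specified input and output):

```python
# Toy sketch: a belief's causal role modeled as its characteristic
# input and output, both described in purely internal terms.
from dataclasses import dataclass

@dataclass(frozen=True)
class CausalRole:
    caused_by: str   # internal description of what produces the belief
    causes: str      # internal description of what the belief produces

# The wide components (propositional contents) differ...
bob_content = "that there's a spider here"
twin_bob_content = "that there's a schmider here"

# ...but the narrow components (causal roles), specified internally,
# coincide, since the visual experiences and leg motions are the same.
bob_role = CausalRole(caused_by="spider-like visual experience",
                      causes="fleeing (leg motions)")
twin_bob_role = CausalRole(caused_by="spider-like visual experience",
                           causes="fleeing (leg motions)")

print(bob_content == twin_bob_content)  # False: wide component differs
print(bob_role == twin_bob_role)        # True: causal role is shared
```

The sketch only restates the intuition: once input and output are characterized in terms of internal properties, doppelgangers automatically share causal roles even when their propositional contents come apart.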


We might think of having a belief along Fodorian lines, as tokening a sentence.  Since tokening a sentence in the internal language is to be understood ultimately in neurophysiological or computational terms, it’s an internal property.  And the thought is that we can explain both what causes the tokening of that sentence and what it in turn causes wholly in neurophysiological or computational terms.  Bob and Twin-Bob, given that they’re doppelgangers, token the same sentence.  And in both cases the sentence has the same causal role.


So here’s an aspect of content that’s wholly internal but explanatory.  So rational psychology, which requires that content be explanatorily relevant to behavior, is vindicated.  Or is it?

Tune in next time.