What’s Wrong with the World

The men signed of the cross of Christ go gaily in the dark.

Misinformation campaign

In response to my most recent post on Alex Rosenberg, a philosopher emails the following comment:

Rosenberg has to know that, in the technical sense, there is no such thing as "misinformation." The metal bar dipped in a saline solution that proceeds to rust can't be "misinformed" about its environs because information just is causal covariation among physical states. His use of that term is a blatant attempt to smuggle intentionality in through the back door while pretending not to; why, why, oh why! won't anyone of note call him out on this transparent attempt to bulls**t his way out of the corner he's painted himself into?

This is an extremely important point that I should have emphasized in my post. What my correspondent is referring to here is sometimes called the “misrepresentation problem” for naturalistic theories of meaning. Suppose the naturalist claims that for A to represent or contain information about B is just for A to have been caused by B in such-and-such a way. In that case, how is it possible for us ever to misrepresent anything? Suppose Fred thinks he sees a dog in the distance when in fact what he is looking at is a cat. How can his perceptual experience (mis)represent what he is seeing as a dog since it was not a dog that caused it?

One well-known attempt to get around this problem is to appeal to the “teleological function” served by a representation, where a “teleological function” is to be understood on the model of a biological function. The heart serves the biological function of pumping blood, and that remains its function even if in some particular context it is not actually carrying out that function – say because Hannibal Lecter is using it for his supper. Similarly, if the function of some brain process is to represent dogs, it will do so even if in some particular context something other than a dog triggers it.

Various technical objections might be raised against this reply, but the central problem is this. The whole point of “naturalistic” theories of meaning or representation is to find a way to account for meaning or representation given a mechanistic, non-teleological conception of the natural world. Aristotelian teleology or final causation is supposed to be chucked out the window and a stripped-down version of what Aristotelians call “efficient causation” is supposed to do all the explanatory work that needs to be done. So how can such a theory coherently appeal to the notion of “teleological function”? The answer, as it happens, is that “teleological function” is in turn something naturalists have tried in other contexts to give a “naturalistic account” of. And these “naturalistic accounts” always end up attempting to reduce teleology to some pattern of efficient causation or other. There are various technical problems with these accounts too.

But the key point is this: When naturalistic philosophers of mind find that they cannot account for everything in efficient-causal terms they often tend to resort to teleological language; and when called on to account for such language they insist that it can be cashed out in non-teleological or efficient-causal terms. (Something similar occurs, incidentally, in the use philosophers of biology make of the notion of a “biological function.”) It is sheer sleight of hand, a circular farce of the sort I’ve already called attention to in earlier posts. As I argue at length in The Last Superstition, recent “naturalistic” theories of mind, of biological function, and of other phenomena problematic for a mechanistic conception of nature invariably either lapse into this sort of incoherence or implicitly acknowledge that something like Aristotelian formal and final causes are real after all.

Now of course, Rosenberg holds that we need ultimately to eschew any talk of “meaning,” “representation,” and the like in any event. But that only makes his reference to “misinformation” more baffling, not less. It’s bad enough that he uses “information” talk as if it could plausibly ground a reconstruction of or “successor” to the concept of knowledge when it is entirely stripped of any intentional connotations. All we have in that case is bare causal relation between A and B, with no explanation of why we should refer to the one as containing “information” about the other in the absence of any intentionality either intrinsic to the physical facts themselves or derived from an outside observer. But at least we have that much. What do we have, though, when there isn’t even a causal relation between A and B for the simple reason that B doesn’t exist? In what sense does A contain “misinformation” about B when A is not only devoid of either intrinsic or derived intentionality, but was not even caused by B in the first place?

Perhaps Rosenberg has an answer to such questions, but if so he does not give us the slightest hint as to what it is, or even acknowledge that there is a question to answer in the first place. Instead he simply dismisses as “puerile” any suggestion that eliminative materialism might be incoherent. You see, “17th century physics” “ruled out” any appeal to purposes, so there simply must be a non-purposive explanation available for any phenomenon; and because there is always such an explanation available, we know that 17th century physics was right to rule anything else out.

Who says merry-go-rounds are just for kids?

(cross-posted)

Comments (18)

It strikes me that some of the above argument is what we indirect realists bring up against direct realists. Roughly, if there is no dog there, but you think you see a dog, you can't be in direct contact with a dog when you have a visual experience that leads you to believe that there is a dog. Therefore, your contact even with a real dog is mediated by something that can exist in the absence of the dog. Therefore, direct realism is false.

I could well imagine that naturalists might try to be direct realists. But what about A-T folks? I had thought that they were direct realists as well. I'll be glad to find out that I'm wrong on this, though.

Lydia, I am not sure where a sensible form comes in under your line-up, but I would have thought that it mediates between the subject of which it is the form and the sensing subject which receives the sensible form. This would seem to suggest that A-T folks are indirect realists, if I follow the distinctions properly.

A-T folks are direct realists, but it must be emphasized that what this primarily means for them is that what the perceiver perceives is the external object itself and not a percept, sense datum, etc., even if he perceives the object only by means of a percept. And I suspect that many people who call themselves indirect realists in fact themselves mean only that -- that we perceive external objects by means of percepts of them, not that it is the percepts themselves that we perceive.

So, where exactly the conflict might be is not so obvious, especially since A-T writers also disagree among themselves about the status of secondary qualities. What is clear is that any account which presupposes that matter can be exhaustively described in terms of quantity alone so that qualitative properties must be mind-dependent, or in any other way assumes a mechanistic view of matter, would be rejected by A-T. A-T would also reject any account that entails a Cartesian "representationalist" account of the mind in general, since it does not think of thoughts as "inner representations." And A-T hostility to indirect realism is largely motivated by hostility to these modern assumptions.

But while modern theories of perception have been largely motivated by modern philosophical assumptions like the ones in question, they haven't been entirely motivated by them. So, as I say, there may be more common ground than at first appears. But I haven't worked out a complete view on this yet myself.

I once read William Lycan say that the qualia problem was hard, but the intentionality problem is the real doozy for naturalists. I never understood why until I read this post. Thanks for that!

I can only imagine that an EMer would try to do with information what Searle does with function. As I learned from TLS, Searle says no organs have functions, but that we just project functions onto them. Now obviously, Rosenberg can't talk about projection, but he could perhaps talk about likes and dislikes (maybe in terms of serotonin discharges). So he could say there is no such thing as misinformation, there are only states of affairs in reaction to which human organisms have serotonin discharges (or whatever) and states of affairs in reaction to which human organisms do not have serotonin discharges.

That's the best I could come up with, and I realize it's ridiculously awful. I can't believe I'm still afraid of this view, and yet I am!

Lydia
I'm not in your league or Ed's, but the example that Ed gives would seem to be a mistake in judgement; the man simply didn't have enough information available (due to the distance) to make the distinction between the dog and the cat.

Ed
In previous posts on naturalism you've said that our metaphysics must precede epistemology (and that since the infamous Descartes, people have done it back to front). Would I be right to assume that that is the way to send the Cartesian demon packing?

Happy Gaudete Sunday

I would love to hear a discussion about direct and indirect realism. To be perfectly honest, I've never understood direct realism. It's always seemed to me straightforwardly incoherent, due to the argument from illusion, which I know has supposedly been refuted.

But just to be clear: when a person hallucinates, the direct realist says that she perceives [X] by means of representation R? What does the "X" stand for? Is it the object she's really looking at? So if I stare at the computer, and hallucinate my cat, then I perceive my computer by means of a representation of my cat? Or do I just perceive nothing at all--i.e., I'm not in a perceiving relation at all? In which case, why does it seem qualitatively similar to perception if I'm not perceiving my representation of my cat?

"What do we have, though, when there isn’t even a causal relation between A and B for the simple reason that B doesn’t exist? In what sense does A contain “misinformation” about B when A is not only devoid of either intrinsic or derived intentionality, but was not even caused by B in the first place?"

Righto. And this is why people who think that intentionality can be accounted for in non-intentional terms have some interest in getting, for example, a non-intentional account of biological functions going. I'd be interested to see why you think that "it is sheer sleight of hand"... I assume this means that you have an argument that any such account is logically impossible. You mention some book. Can you sketch the argument here?

Bobcat: when someone hallucinates he does not thereby perceive anything.

Once as a hungry, exhausted, half-asleep graduate student I hallucinated a small, green lizard climbing down my kitchen cupboard. (This is a true story.) Like Bobcat, I've never understood how a direct realist would describe this event.

But this is an excellent post on information and misinformation, by the way. I certainly don't mean to imply anything to the contrary.

Alex: Or he may perceive a little too much of everything. Along the lines of "there's so much that we know that ain't so."

"And this is why people who think that intentionality can be accounted for in non-intentional terms have some interest in getting, for example, a non-intentional account of biological functions going."

Rots o ruck. Just be sure to have the intentionalists oversee the work so that intentionality doesn't sneak in by the back door. We'll expect to see neurological results "real soon now." However, I wonder how long before non-intentionality is claimed to be "science", despite the fact that there's no neurological mechanism even proposed? Of course, there's always the philosophical Just-So stories.

I would love to hear a discussion about direct and indirect realism.

I second this.

As for these hallucination arguments, I’m not too impressed, since hallucination presupposes an ability to accurately perceive reality. Without that ability, the term hallucination would be meaningless.

Alex,

"Bobcat: when someone hallucinates he does not thereby perceive anything."

Yeah, that's what you told me last time. As you can see, I remember that I was told things; I just don't remember what I was told.

Since I know you're super-up on the perception literature, could you explain to me again the problem with the argument from illusion? That is to say, when I perceive X, the direct realist says I perceive X via a mental state, but I don't perceive the mental state itself. However, when I hallucinate, I don't perceive anything via a mental state, and I don't perceive the mental state itself. So what is my relationship to the mental state of hallucinating? I don't *perceive* my hallucination--I *have* my hallucination. Is that the point?

I guess my response to that is that my relationship to a hallucination seems identical to my relationship to an honest-to-God perception. And given the qualitative identity of the hallucination and the perception, it seems like in both cases I infer from the mental state to the object of the mental state, but with the case of the hallucination I infer wrongly. Obviously, you don't have that intuition, right?

...given the qualitative identity of the hallucination and the perception, it seems like in both cases I infer from the mental state to the object of the mental state, but with the case of the hallucination I infer wrongly.

I don't understand what you mean here. I guess that by 'seems like' you mean to indicate something like what I might express by 'ought to be concluded that'. But I don't know what this inferring from the mental state to the object of the mental state is supposed to be.

I would imagine that you mean performing an inference which begins with the premise that I am subject to such and such mental state and ends with some conclusion about what exists. I guess the conclusion would be something like: the object of my mental state exists. Is that what you mean?

But that would be super-weird. Think about the case of belief. Fred believes that President Obama exists. Obama is an object of Fred's mental state. And yet Fred has a possible phenomenal twin who believes falsely that President Obama exists. But it wouldn't follow from this that in entertaining Obama-thoughts Fred infers on the basis that he believes Obama exists that Obama exists. Anyway: why would he draw that inference? No need. He already believes Obama exists!

What this case shows (if I can be so bold) is that the mere fact that a veridical mental state has a non-veridical phenomenal twin doesn't entail that in being subject to the mental state we infer anything about the object of the mental state.

All right, guys, I'm sorry I started it, because it's not fair to Ed to move the thread in this direction. (But you infer the existence of B.O. from your _experiential evidence_, not from your _belief that he exists_, Alex. Don't you read many classical foundationalists?)

And this is why people who think that intentionality can be accounted for in non-intentional terms have some interest in getting, for example, a non-intentional account of biological functions going. I'd be interested to see why you think that "it is sheer sleight of hand"... I assume this means that you have an argument any such account is logically impossible. You mention some book. Can you sketch the argument here?

?? I already said what the sleight-of-hand at issue here consists in -- acknowledging that a purely efficient-causal account cannot succeed and needs to bring teleology back in, then claiming that all such teleology can in turn be reduced back to efficient causation. Sure, a full analysis would require getting into the details, which is why I alluded to "some book" in which I do just that.

Since I know you're super-up on the perception literature, could you explain to me again the problem with the argument from illusion? That is to say, when I perceive X, the direct realist says I perceive X via a mental state, but I don't perceive the mental state itself. However, when I hallucinate, I don't perceive anything via a mental state, and I don't perceive the mental state itself. So what is my relationship to the mental state of hallucinating? I don't *perceive* my hallucination--I *have* my hallucination. Is that the point?

Actually, this may not be true. This assumes a "unified" brain. In hallucinations, the brain is split into various subprocesses (subnetworks) where one network acts as a feeder to the perceptual processing network. The eyes merely process stimulations that are coded properly. Where these come from may be direct perception or memory-mediated playback. Neuromathematicians have been modeling hallucinations for many years. If you have the time, here is a nice lecture (it's about an hour) by J. D. Cowan, who has probably done the most to model visual hallucination. The lecture assumes a fair amount of mathematics, but it might be possible to take away some things from it.

In other words, simply because an object isn't "there" does not mean that the object isn't being intentionalized. In this case, it is a case of an inner-generated, inner-intentionalized state.

The Chicken

Just to add a note:

How does intentionality work in cases where the mind "oscillates" between two possible intentional states? This is extremely important, in that such things as visual perception and humor both create oscillations between different intentional states in the brain. Think of the Necker cube. Does the brain form an intentionalization of one and then the other state, or does it form a single intentionalization with two subcategories? Are there levels of intentionalization, such that the mind "locks" onto the meta-object which is "the Necker cube," since each state actually does not exist in reality? Is there a "smallest" intentional state?

In dealing with the logic of humor, I have to deal with situations where the law of non-contradiction "seems" not to hold or where a single antecedent can lead to two (seemingly) opposed consequents. This issue is interesting to me because it relates to how the brain processes humor, which is an area I study.

The Chicken

I think one does look back and forth, MC, forming different images ("image" being a broad category) and thinking how they seem not to fit together, while realizing it's an illusion. I have a sensory illusion sometimes in which it seems that my own hands have suddenly become huge and tiny at the same time. Very difficult to describe, but when I observe it carefully, it's clear that I'm really going rapidly back and forth between the "huge" feeling and the "tiny" feeling.

But are there two intentionalities or one?

The Chicken
