Wittgenstein’s point in that passage is that this can’t be right because in that case it is possible that our words could refer to radically different things and yet we would never know it. And one of the most fundamental divides within this debate turns on the two views of mind that I have introduced above: the Cartesian and behaviorist views of mind. This is what Wittgenstein is getting at in the above passage when he says that in assuming that the meaning of “pain” is given by my subjective conscious experience, I “generalize the one case so irresponsibly.” Wittgenstein is making the same point that I have been making (in addition to others): that generalizing from my own conscious experiences to the conscious experiences of others is a hasty generalization. So we all call things by the same color—for example, we all refer to stop signs as “red” and grass as “green”—but what you in your head experience as red, I experience as green. The problem of other minds asks how one can support the commonsense belief in the existence of other minded individuals against the general denial of other minds. A good example of a behaviorist doing this is Wittgenstein in the “beetle in the box” passage cited earlier. But its insides will be like the insides of a rock—there’s nothing there. Rather, machines only behave as if they are thinking; they appear intelligent, but really aren’t. Do you see how this is a metaphorical way of raising the problem of other minds? The traditional epistemological problem of other minds is often associated with scepticism. In all of these cases I am often aware of the conscious thoughts I am having and I recognize that my intelligent behavioral responses are regularly preceded by those conscious thoughts. 
The box is a metaphor for the mind—this is why we cannot see into others’ boxes; other people’s minds are “black boxes.” This is Wittgenstein’s colorful way of putting forward the Cartesian view of mind on which the problem of other minds depends. We can infer the cause of other people’s behavior. (If you don’t know, reread it and think about it before proceeding.) The problem is that Russell seems to be committing exactly this fallacy in his solution to the problem of other minds. The astute reader should recognize that this is the Cartesian view of mind. I should note that Russell himself argues something stronger than what I’ve presented here. Whether or not there is a solution to the problem of other minds continues to be something that philosophers debate. One might even imagine such a thing constantly changing.— But what if these people’s word ‘beetle’ had a use nonetheless?—If so, it would not be as the name of a thing. Of course, in one’s own case, one knows that one has thoughts because one can observe one’s own thoughts directly, it seems. You might think that we could easily clear up any difference in perception by asking each other questions such as, “Is this green?” But this isn’t what the original question is getting at. Like the inverted spectrum, the problem of other minds draws on these same two fundamental claims about minds: 1) that we can only have direct, unmediated access to our own minds and 2) that there can be radically different causes of the very same intelligent behaviors. It is clear in that passage that what Wittgenstein is attempting to understand is how we learn the meaning of a mental term like “pain.” The traditional idea is that we learn the meanings of terms like “pain” by singling out a kind of experience in our mind’s eye and then having that experience be the thing that our term “pain” refers to. We cannot rationally rule out that solipsism is true. Is it successful? That is a hasty generalization, par excellence. 
Dissolutions of philosophical problems reject that the problem really is a problem. For example, if my friend Grace says, “that lilac bush smells wonderful” after smelling a lilac bush, then this is one of her intelligent behaviors that I can observe. The other-minds problem: Suppose a surgeon tells a patient who is about to undergo a knee operation that when he wakes up he will feel a sharp pain. But I have presented the weaker claim that M and B are correlated since the weaker claim is all that is needed to support the inference to other minds. Of course, no one actually believes this but the skeptic’s point is that you cannot rule out this seemingly absurd possibility. Now place the “D” atop a capital letter “J.” What object does this remind you of? If the machine were able to make the human think that it was actually a human (and not a machine), then, Turing claims, we should consider that machine to be intelligent. If we were to define the mind as “the thing that causes intelligent behaviors” then would there still be a problem of other minds? That is, I alone have direct access to my own conscious experience; the conscious experiences of others can only be inferred through observations of others’ intelligent behaviors. True or false: The Turing Test was conceived as a test that would enable us to answer the question, “Can a machine think?” Thus, I prefer to refer to the behaviorist solution as a dissolution of the problem, since it rejects the terms in which the problem is presented. We cannot know that the words people speak have any thoughts/sensations tied to them. For behaviorists, the mind is nothing over and above the intelligent behaviors that we pick out using certain kinds of mental terms, such as pain, hope, fear, intelligence, and so on. 
The behaviorist dissolution of other minds skepticism does this by rejecting the Cartesian view of mind on which other minds skepticism depends. In view of this, it might reasonably be asked why the problem of solipsism should receive any philosophical attention. It will answer questions like the above “D”-umbrella question correctly and it will be able to describe the fragrance of a rose and distinguish that smell from the smell of coffee. But since these intelligent behaviors could be caused by things that aren’t conscious, I cannot confidently infer that others have conscious experience. The mind is not best conceived as a private domain that only the subject has access to, as the Cartesian view of mind holds. In contrast, Russell’s attempted analogical inference solution to other minds skepticism accepts the Cartesian view of mind on which other minds skepticism depends. Defining the mind in this way enables us to verify the existence of minds in a third-person type of way—that is, I can know that you have a mind because I can know that you engage in intelligent behaviors (such as conversing with me). So, for many of the intelligent behaviors I observe (namely, my own), I can observe that they are preceded by conscious mental states. And if we actually think of how the generalization works, it is a bad generalization—it is a hasty generalization. A general denial of other minds requires an individual to wholeheartedly believe that they are the only minded individual in existence and that all others are mere automatons. The idea is that humans have minds (and thus thoughts) but that machines don’t have minds (and thus don’t really think). But how do I know that something like this goes on in other people? Suppose I have 100 friends, all of whom I believe have minds like mine and all of whose intelligent behaviors I have observed. People might be talking intelligently about their beetles/thoughts even if there really aren’t any beetles/thoughts. 
Therefore, based on the similarity between (B) in me and (B) in others, (B) in others are probably regularly preceded by conscious mental states (M). Developing this second claim will lead us to the problem of other minds. For example, if someone is crying and I ask why and they say that their grandfather just passed away, can’t I rationally conclude that they are sad? Have you ever wondered whether another person might see colors in a radically different way from you—or perhaps that you see colors in a radically different way from everyone else in the world? It seems that either he is committing a hasty generalization (if he is generalizing from one’s own intelligent behaviors to the intelligent behaviors of everyone else) or he is begging the question (if he is assuming that all intelligent behaviors—my own and others’—are similar in all respects). I have added in bold text and square brackets the explicit assumption that Russell needs in order for his analogical argument to work. See if you can understand what Wittgenstein is saying in the following passage. Wittgenstein uses a metaphor of a “beetle in a box.” What are the beetle and the box metaphors for?

April 12, 2013. Creative Commons Attribution 4.0 International License.