Version 2.21, April 16, 2008
The following modules are devoted to contemporary epistemology. We shall be treating as contemporary the work in epistemology done in roughly the last century. Our examination will be restricted to work in the Anglophone "analytic" tradition. As before, we will organize the discussion around the six projects outlined in the introductory module. Before the first of these projects is examined in the present module, a brief sketch of the development of contemporary epistemology will be given.
Interest in epistemological issues has been central to analytic philosophy since the early part of the twentieth century. The two founders of analytic philosophy in England, Bertrand Russell and G. E. Moore, both made important contributions to the study of knowledge. Russell followed the traditional empiricist path of Locke and Hume, engaging the question of how sensory experience can give rise to knowledge of an external world. Moore followed the path of "common sense" that had been blazed by Thomas Reid, who was critical of the skeptical tendencies inherent in empiricism.
Epistemologists from different philosophical movements tended to treat the study of knowledge in different ways. The "logical empiricists" took the practice of science to be the paradigm of knowledge and tried to show how all substantive knowledge is tied strictly to experience. One of the chief practitioners of "ordinary language" philosophy, J. L. Austin, held that we should look to the linguistic practice of making knowledge claims, with the surprising result that knowing looks more like promising than like believing. Pragmatists such as W. V. Quine maintained that belief not only in the theoretical claims of science, but also in what the logical empiricists called "observation sentences," should be based on practical concerns.
In 1963, Edmund Gettier published a very short paper which stimulated a great deal of thought about the analytical project. For about fifteen or twenty years, much of the effort in epistemology was directed toward "solving the Gettier problem." Since then, epistemology has become somewhat fragmented. In this and the following modules, we will be looking at some of the leading trends in the study of knowledge.
This module is concerned with the way in which epistemology is and has been done. That is, it is concerned with method in epistemology. Before we ask how an investigation is or should be undertaken, we should first investigate what the goal of the investigation is. In the introductory module, a number of projects in epistemology were distinguished. Each project is carried out in its own way (or ways), though there are some methods which are common to more than one project. We will look at these projects in turn, and we will discuss some issues in methodology which bridge the various projects.
Method in the Linguistic Project
The primary goal of the linguistic project in epistemology is to determine the ways in which the word 'know' is used when it occurs in asserted sentences of natural language. For example, if I say 'I know that I have paid the mortgage' or 'My wife knows that she has been given a new account at work,' I have made assertions using sentences of English. We will call such assertions "knowledge-attributions," and, correspondingly, denials of knowledge will be called "ignorance-attributions."
In the introductory module, we distinguished between the investigation of fact and the investigation of value, that is, the search for what is versus the search for what ought to be. This distinction applies to the classification of knowledge- and ignorance-attributions. Some investigators might be interested merely in finding patterns in the way people make these attributions. Such an approach might also include finding rules which summarize the patterns of attribution that are discovered. For example, it may be a rule that people are more inclined to make knowledge-attributions when they believe that the consequences of the attributee's being wrong are insignificant, while, all other things being equal, they are more inclined to attribute ignorance if the consequences of being wrong are significant. Another kind of rule might be that people are less inclined to attribute knowledge as they become aware of possibilities under which the attributee would be wrong. Attempts have been made to make rules of this kind more or less precise.
Another kind of investigation of fact is the attempt to describe the function of knowledge- and ignorance-attributions in the conduct of linguistic discourse. The primary example of this kind of project is in the context of the theory of "speech acts." As noted in the introductory module, J. L. Austin claimed that saying 'I know' is an act of giving one's word, much like promising.
While these investigations of fact are interesting in their own right, they do not go very far in dealing with the questions epistemologists have traditionally tried to answer. These are questions of value. Suppose that linguists discover rules that govern our inclinations to make attributions of knowledge or ignorance. A natural response would be to ask whether the rules describe correct attributions. One answer might be that there is no further issue of correctness, or that following the rules by itself constitutes correct usage.
Someone who is not satisfied with this kind of answer might re-phrase the question. Suppose an attribution of knowledge is made. Is the asserted sentence true or false? It seems that 'I know that I have paid the mortgage' is a false attribution if I have not, in fact, paid the mortgage. More interestingly, it seems that the attribution is false if I have satisfied too low a standard of knowledge. For example, someone might concede that I have knowledge even if my only evidence is a vague recollection that I paid the mortgage. If the attribution is false, then even if one has followed the established pattern of usage, it seems that there is a sense in which the attributor has done something wrong.
There are several responses to this injection of value into the study of linguistic usage. A radical approach is to say that one cannot go wrong in making a linguistically appropriate knowledge-attribution, even if the proposition in question is false: all that matters is that one has made the attribution in an appropriate way. A related, but less radical, approach is to say that if one's attribution follows the rules of attribution, then it is wrong only if the proposition is false.
The approach taken by Stewart Cohen is semantical. He wants to allow that knowledge-attributions can be true or false, and moreover that the truth or falsehood of an attribution depends to some extent on the meaning of the term 'know.' According to Cohen, the word 'know' or 'knows' means different things in different contexts. Just as the context of utterance determines the reference of 'I' or 'here,' the context of attribution determines the meaning of 'know.' On this view, it is possible for sentences of the form 'S knows that p' and 'S does not know that p' both to be true at once, so long as the asserters of the sentences find themselves in different contexts of attribution.
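To fix ideas, here is a minimal toy model of the contextualist view (an invented illustration in Python, not Cohen's own formalism; the names and the numeric "standards" are assumptions made for the example):

    # Toy contextualist semantics: an invented illustration, not
    # Cohen's own formalism.

    def knows(subject_strength, context):
        """True just in case the subject's epistemic position meets
        the standard set by the context of *attribution*."""
        return subject_strength >= context["standard"]

    ordinary = {"standard": 0.7}    # everyday conversation
    skeptical = {"standard": 0.99}  # skeptical hypotheses in play

    s = 0.8  # S's (fixed) epistemic position with respect to p

    print(knows(s, ordinary))   # True:  'S knows that p'
    print(knows(s, skeptical))  # False: 'S does not know that p'

Nothing about S changes between the two evaluations; only the standard set by the context of attribution does, which is how both attributions can be true at once.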
The alternative to all of these approaches is to claim that part of what makes an attribution of knowledge true or false is whether it meets some fixed or "invariant" standard of knowledge. On this "invariantist" view, there is a single fact of the matter about whether S knows that p, regardless of the conversational appropriateness of asserting that S knows that p. The task for the invariantist then becomes that of determining under what invariant standards knowledge assertions are true. This task is most naturally undertaken as part of the analytic project.
Method in the Analytic Project
The goal of the analytic project can be conceived in one of two ways. On one conception, there is a complex concept of knowledge that is to be broken down into simpler concepts. For example, Descartes analyzed knowledge as "certain and evident cognition" (Rules for the Direction of the Mind, Rule Two). Kant stated that knowledge is "assent that is sufficient both subjectively and objectively" (Critique of Pure Reason, Part II, Chapter II, Section III).
This kind of analysis was abandoned in the twentieth century in favor of analyses that begin with a schema of a sentence, 'S knows that p,' and then provide a set of conditions thought to be necessary and sufficient for S's knowing that p. Some philosophers, such as Frank Ramsey, C. I. Lewis, A. J. Ayer, and Roderick Chisholm, had offered analyses in this format before 1963. But it was in that year that Gettier's famous paper criticized what he called the "traditional" analysis and opened the gates for myriad alternative analyses. Gettier's formulation of the "traditional" analysis is as follows:
S knows that p if and only if (i) p is true, (ii) S believes that p, (iii) S is justified in believing that p.
The method for carrying out the analysis goes something like this. One first proposes an analysis, perhaps giving reasons or examples supporting its correctness. The correctness of the proposal is tested by discovering whether or not there are cases (usually, but not necessarily, made-up cases) which satisfy all the conditions but are not cases of knowledge, or which are cases of knowledge but do not satisfy the conditions. In the former case, the analysis is said to be too weak, and in the latter case, it is said to be too strong.
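Schematically, the testing procedure might be pictured as follows (a hypothetical Python sketch; the cases and verdicts are invented stand-ins for whatever a real analysis and real intuitions would supply):

    # Schematic testing of a proposed analysis against cases.
    # "satisfies" records whether a case meets the proposed conditions;
    # "intuited" records whether we intuitively count it as knowledge.

    cases = [
        {"name": "ordinary perception", "satisfies": True,  "intuited": True},
        {"name": "Gettier case",        "satisfies": True,  "intuited": False},
        {"name": "lucky guess",         "satisfies": False, "intuited": False},
    ]

    for c in cases:
        if c["satisfies"] and not c["intuited"]:
            print(c["name"], "-> too weak: lets in non-knowledge")
        elif c["intuited"] and not c["satisfies"]:
            print(c["name"], "-> too strong: rules out knowledge")

The Gettier case in this toy list is exactly the kind of counter-example that shows the "traditional" analysis to be too weak.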
The question of what counts and what does not count as a case of knowledge raises Chisholm's version of the problem of the criterion, as described in the introductory module. Chisholm raises a difficulty in settling on a starting point for epistemological investigation. The currently popular method is particularism, which takes as its data the "intuitions" people have about which cases count as cases of knowledge and which do not. Methodism would have us begin with a pre-conception of what knowledge is and judge cases against that pre-conception as a standard.
For most of the history of philosophy, there was an agreed-upon standard for knowledge, namely that of certainty. One way to understand certainty is as a guarantee of truth. If S knows that p, then S's warrant for believing that p precludes S from being wrong about the truth of p. This view is often called "infallibilism." With certainty as the standard for knowledge, the chief questions in epistemology were about whether and how certainty can be attained.
With the abandonment of infallibilism in the twentieth century in favor of what C.S. Peirce called "fallibilism," a new question arose. Exactly how strong must warrant be to suffice as a condition for knowledge? One way to answer this question is to appeal to particular cases. If the condition rules out cases that are accepted as being knowledge, then it is too strong. If it allows cases that are accepted as not being knowledge, then it is too weak.
But even though particularism seems well-suited to fallibilist epistemology, there remains the problem raised by Chisholm, which was discussed briefly in the introductory module. In order to count particular cases as knowledge, we must have a pre-conception of what knowledge is. But if the pre-conception is to guide us in evaluating particular cases, then we are back to methodism. And with methodism, we have the original problem of how to determine the appropriate level of the strength of warrant.
The dilemma posed by Chisholm is a specific case of a more general problem that has come to be known as "the problem of the criterion." This problem was first stated by the ancient skeptics. Take any claim C about which there is disagreement. C might be the claim that a certain person has knowledge about a certain item. The disagreement can be settled only if there is some standard or criterion S by which it can be determined whether the claim is correct.
Now, the skeptics ask, is there agreement about the standard S? In many cases there will not be, since people who disagree about how to classify cases often do so because they bring different standards to the classification. So let us focus on those cases in which disagreement over C is based on disagreement over standard S.
At this point, the skeptic will ask about the prospects for solving the disagreement over S. It would seem that some new standard S* is required to solve the dispute over S. But now, how could the appeal to S* be backed up if there is disagreement over it? If you appeal to S, then you have used circular reasoning, which must be avoided. If you appeal to some new standard S**, then the problem can arise again. The skeptics thought that regarding many issues, there is no way to settle the dispute satisfactorily. In the present case, it may be that we can never reach agreement about what it is that we know or about what the standards for knowledge are.
In contemporary epistemology, there is not much disagreement about where to begin: particularism is the dominant methodology. Even so, it remains to be seen how the particularist method is to be carried out in practice. We are supposed to test standards of knowledge on the basis of how well they conform to what we know. But then how do we determine which cases are cases of knowledge and which cases are not? Generally the criterion invoked is conformity with "intuitions" regarding whether knowledge should be attributed in specific cases (real or hypothetical). Much of the literature in contemporary epistemology consists of the proposal of a standard and the construction of hypothetical counter-examples that show that the proposed standard is either too weak or too strong.
To show that a proposed standard is too weak, the procedure is to give cases in which the conditions of the proposed analysis are fulfilled, but one responds "intuitively" that the epistemic subject in question does not know. So, for example, contextualists describe cases which involve ordinary contexts of attribution and elicit the intuition that there is knowledge. Then they create extraordinary contexts of attribution by displaying skeptical hypotheses, which elicit the intuition that there is ignorance. Their goal then is to explain "how we fell into the puzzling conflict of intuitions in the first place" (Skepticism: A Contemporary Reader, ed. Keith DeRose and Ted Warfield, Introduction, Section 3).
Of course, there can be, and indeed is, disagreement over intuitions. If there is to be any hope for a single canonical analysis of knowledge, there must be a way of settling the differences, and hence a way of determining which intuitions are to be preferred. Here there is room for considerable further disagreement. One might prefer naïve, untutored intuitions, or the intuitions of someone with considerable philosophical sophistication. There is even more variation possible, as seen from Aristotle's list of what might be appealed to as "common beliefs":
The common beliefs are the things believed by everyone or by most people or by the wise (and among the wise by all or by most or by those most known and commonly recognized). (Topics, Book I, Chapter 1)

How are we to choose those intuitions which are to be the standard?
It seems that there is no empirical test that could be performed to determine whether someone is successful in tracking down cases of knowing. Given the amount of disagreement over what knowledge is, it does not seem that we can determine whether a given case is one of knowledge analogously to the way we judge whether two sums of numbers are equal.
It is unfashionable these days to claim that we have some kind of direct rational insight into the nature of knowledge (or any other alleged property). This is due in large part to the influence of Ludwig Wittgenstein, especially through his influential mid-twentieth-century book Philosophical Investigations. Wittgenstein regarded the quest for precise Platonic analyses (as he had undertaken himself in his earlier works) to be a kind of disorder for which he offered therapy.
Wittgenstein advocated, in place of the search for "the meaning," an investigation of "the use" of expressions that seem to denote something objective. In the terms that I have been using, this would involve a description of the way in which people make knowledge-attributions about themselves and others. Because of the variation in those contexts, there may be only a "family resemblance" (to use Wittgenstein's figure) between what are deemed items of knowledge. If this is the situation in epistemology, we can explain both why there are so many disputes over standards for knowledge and why these disputes have not been resolved.
The use of intuition in analysis can be defended using an idea of Nelson Goodman ("The New Riddle of Induction"). The proposal, made popular by John Rawls in A Theory of Justice, is known as the method of "reflective equilibrium."
In the case of analyses of knowledge, the procedure would be to formulate analyses of what knowledge is on the basis of our intuitions about the meaning of "knowledge" as such and about specific cases which we count as knowledge. Then we compare the consequences of the analysis for individual cases against our intuitions about whether one knows in those cases. Where the two do not match, we make adjustments. The process goes back and forth between fresh analyses and new cases, until we have an adjusted analysis that conforms to our adjusted intuitions. Clearly the standard for settling on intuitions in this way is coherence.
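Schematically, the procedure can be pictured as a loop (a deliberately simplified Python sketch; reflective equilibrium is not really an algorithm, and all the names here are invented for illustration):

    # Reflective equilibrium as a schematic loop: an invented
    # illustration, not a real decision procedure.

    def reflective_equilibrium(analysis, intuitions, revise_analysis,
                               revise_intuitions, max_rounds=100):
        """Alternate between adjusting the analysis to fit the
        intuitions and adjusting outlier intuitions to fit the
        analysis, until the two cohere (or we give up)."""
        for _ in range(max_rounds):
            conflicts = [i for i in intuitions
                         if analysis(i["case"]) != i["verdict"]]
            if not conflicts:
                return analysis, intuitions  # equilibrium reached
            analysis = revise_analysis(analysis, conflicts)
            intuitions = revise_intuitions(intuitions, analysis)
        return analysis, intuitions  # no equilibrium within the limit

The stopping condition makes the point in the text explicit: what certifies the result is nothing more than coherence between the adjusted analysis and the adjusted intuitions.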
Even if we allow that coherence among intuitions is the criterion for success in analysis, it seems that we are far from being able to take advantage of it. Reflective equilibrium has not been achieved in epistemology, except perhaps in the minds of individual epistemologists.
I must agree with DeRose and Warfield that our intuitions about cases of knowledge are something to be explained, rather than being taken as basic data. Any explanation would look at intuitions in the context of the reasons knowledge-attributions are made. And I think that the reasons for attribution are to some extent practical, in a way that influences the standards for attribution that we adopt. This topic will be discussed more fully in the next module.
As noted above, epistemologists nowadays have abandoned infallibilism in favor of fallibilism, or what Descartes called "moral certainty" or what the ancient Academic skeptic Carneades called "plausibility" (or "probability"). The turn to fallibilism appears to be motivated in two related ways. First, it is a reaction to the difficulties raised by skeptics. The fallibilists think that if we lower the standard for knowing, we will be able to meet it. Second, our attributions of knowledge in ordinary language are generally not attributions of infallibility. The fallibilist epistemologist thus begins with an everyday conception of knowledge, not an idealized conception such as that of Descartes.
David Lewis points out in "Elusive Knowledge" that despite its promised advantages, the retreat to fallibilism should leave us uneasy.
So we know a lot; knowledge must be infallible; yet we have fallible knowledge or none (or next to none). We are caught between the rock of fallibilism and the whirlpool of skepticism. Both are mad!
Yet fallibilism is the less intrusive madness. It demands less frequent corrections of what we want to say. So, if forced to choose, I choose fallibilism. (And so say all of us.) We can get used to it, and some of us have done so. No joy there—we know that people can get used to the most crazy philosophical sayings imaginable. If you are a contented fallibilist, I implore you to be honest, be naive, hear it afresh. "He knows, yet he has not eliminated all the possibilities of error." Even if you've numbed your ears, doesn't this overt, explicit fallibilism still sound wrong?
A further challenge raised by fallibilism to the analysis of knowledge can be found in a fact that we have already discussed. It seems that knowledge-attributions vary significantly from context to context. As Stewart Cohen asks in "How to Be a Fallibilist," which standard of knowledge embodies just the right amount of fallibility? His conclusion is that there is no single standard for the degree of fallibility permitted by any given analysis. It seems to follow that 'knowledge' has no fixed meaning that would allow for a unitary analysis and that there is no generally applicable way to distinguish cases of knowledge as such from cases of ignorance.
One of the main disagreements in the analysis of knowledge is between the "internalists" and the "externalists." A simple way of distinguishing the two positions is to say that externalism makes at least some attributions of knowledge on the basis of purely external considerations. For example, S was caused to believe that p in a way that involves the truth of p itself. Or S's belief that p was formed in a reliable way. For the internalist, some additional, "internal" factor is always required before knowledge can be attributed to a subject. Generally, this factor involves some "reason" S has to believe that p is true.
It is pretty easy to separate internalists from externalists using a simple test: is one willing to attribute knowledge to animals and small children, both of which are supposed to be lacking with respect to rationality? If one is willing to attribute knowledge to animals and small children (not to mention computers), one is most likely an externalist, and if one is not, one is most likely an internalist.
The dispute often appears to be about whether animals and small children share with adult humans the property of knowing that p. The fact that there is such deep disagreement about whether the attribution of knowledge is appropriate suggests that the parties to the dispute are not talking about the same thing. Ernest Sosa has suggested that there are actually two kinds of knowledge, "animal knowledge" and "reflective knowledge." Of course, there remains the methodological question of how to decide between a unitary and a bifurcated approach to knowledge.
Method in the Normative Project
The goal of the normative project is to reveal the standards involved in knowledge. The methodological issues raised by the normative project depend on how it is connected with the linguistic and analytic projects.
As we have seen, one way in which the linguistic project may proceed is by trying to determine the conditions under which knowledge attributions are correctly made. The correctness of an attribution would be a consequence of the attribution's conforming to some standard or norm of correct attribution. The methodological problem would then be to justify the choice of some standard as being correct.
One could try to validate the norms in some way, or one could observe the cases in which people endorse or approve of attributions of knowledge and attempt to discover a pattern of approval, which would then be codified in a rule.
The discovery of norms is part and parcel of the analytic project. We have seen that there are great differences between "internalist" and "externalist" norms. Within each camp, there remains a further task of refining the norms that make up the basic approach. For an internalist, the basic norm is something like "having a good reason" or "being justified." For an externalist, the basic norm might be "being formed reliably" or "being the product of a well-functioning faculty."
The normative project on the internalist side can draw on some pretty well-developed areas of investigation. It is commonly agreed that conformity to the norms of logic is a way in which a reason can be a good one. Deductive logic, probability theory, and statistics are mature disciplines. Much less developed are accounts of the norms governing good explanations, which many epistemologists think are in one way or another standards for knowledge. Accounts of "coherence" among propositions have not been fleshed out in any satisfying way. A recent development has been the study of special symbolic logics (such as non-monotonic logic) that seek to codify patterns of reasoning not captured by traditional approaches.
Even so, there is a question as to how to apply such norms as conditions of knowledge. In particular, there is much disagreement about the role of the probability calculus in the description of standards of knowledge, especially in light of the "lottery paradox" to be discussed in the module on the normative project. As always, there is the further issue of how to choose between competing accounts of justification, and here appeal to intuitions about cases is generally made.
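Although the lottery paradox itself is treated in a later module, a quick calculation shows why the probability calculus sits uneasily with a simple threshold rule for acceptance (the numbers are illustrative):

    # The lottery: a probability threshold for acceptance misbehaves.
    # Illustrative numbers only.

    n = 1000          # fair lottery, exactly one winning ticket
    threshold = 0.95  # accept any proposition at least this probable

    p_ticket_i_loses = (n - 1) / n        # 0.999 for each ticket i
    print(p_ticket_i_loses >= threshold)  # True: accept "ticket i loses"

    # Accepting "ticket i loses" for every i, and closing acceptance
    # under conjunction, yields "every ticket loses" -- a proposition
    # whose probability is 0, since exactly one ticket wins.

So a rule that licenses belief at any probability short of 1 appears to license believing each member of a set of propositions whose conjunction is known to be false.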
Some epistemologists, such as Roderick Chisholm, Keith Lehrer, and John Pollock, have developed special epistemic principles which are not based straightforwardly on logic. Lehrer developed his view as a way to evade the lottery paradox. Pollock looks to rules pertaining to artificial intelligence for epistemic norms. These non-standard approaches are faced with the same issue as the more traditional approaches: justifying the superiority of their norms against their many competitors.
For foundationalists, who believe that some beliefs are justified in isolation from all others, the analytic project is extended to the task of providing norms for this kind of justification. Once again, there are several competing approaches. Some hold that the foundational beliefs are "self-justified," others that they are justified by something other than beliefs, and still others that they are "prima facie" justified, that is, presumed to be justified unless there is reason to think they are not.
Deciding which approach is better usually involves two factors. One is how each approach can handle problems peculiar to it. For example, those who claim that a belief can be self-justified must explain how they can deal with the charge of circular reasoning. The second factor is whether the results of applying these standards conform to the knowledge-attributions people actually make.
Externalist epistemic standards have been developed in many different ways. One approach is commonly used by reliabilists: the reliability of a belief-forming process is understood statistically, in terms of the proportion of true beliefs among the beliefs the process produces. As with all fallibilist theories, reliabilism faces the question of how high its standard, in this case the truth-ratio, must be.
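On the statistical reading, reliability is just a truth-ratio, and the open question is where to set the bar. A minimal sketch with invented numbers:

    # Reliability as a truth-ratio: invented numbers for illustration.

    def truth_ratio(beliefs):
        """Fraction of a process's output beliefs that are true."""
        return sum(1 for b in beliefs if b["true"]) / len(beliefs)

    # A hypothetical belief-forming process (say, casual perception):
    outputs = [{"true": True}] * 93 + [{"true": False}] * 7

    ratio = truth_ratio(outputs)  # 0.93
    print(ratio >= 0.99)  # False: unreliable by a strict standard
    print(ratio >= 0.90)  # True:  reliable by a laxer standard

The fallibilist question recurs at one remove: which threshold is the right one?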
Method in the Descriptive Project
The aim of the descriptive project is to discover how knowledge is produced. In the module on the descriptive project, we will consider accounts of a priori knowledge, "naturalized" epistemology, and the ways in which scientific knowledge is produced. Each account has its own methodological issues. Before turning to them, we will take a look at a larger methodological dispute, which concerns the extent to which epistemology can or should be done "from the armchair."
Externalists are inclined to hold that epistemology should not be done from the armchair. In the modern epistemology of the seventeenth and eighteenth centuries, armchair theorizing sometimes came into conflict with what was generally believed about the real makeup of the knowing subject. The theory of "ideas" used to account for knowledge did not mesh well at all with the standard physical theory of perception.
In our present day, great strides are being made in the investigation of perception as well as of the operations of the brain relevant to cognition. Some epistemologists have attempted to incorporate this new information into their accounts of knowledge. Alvin Goldman's 1986 book Epistemology and Cognition was an early project of this kind. Yet most mainstream epistemologists have shied away from this project.
One purely external reason that more use is not made of empirical discoveries may be the sheer effort required to learn the sciences relevant to cognition. But there is a more compelling internal reason, namely, that epistemology is regarded by most as a normative discipline, and it is assumed that the investigation of norms can take place "in the armchair," though perhaps with reference to the intuitions of other people.
Even externalists seem to engage in armchair epistemology. Suppose the externalist claims that a necessary condition for S's knowing that p is that S has formed the belief that p in a reliable way. The main issue is not which cognitive systems actually produce reliable belief, but rather how reliable any system must be, how to characterize reliability, etc.
One argument to the conclusion that even a normative account of knowledge such as reliabilism must make reference to empirical data was related to me by Robert Cummins. He takes it as given that normative standards are useless unless they can be met. "Ought," he says, implies "can" (a principle commonly attributed to Kant).
Take the simple example of the Aristotelian argument for foundationalism. If demonstrative knowledge requires an infinite regress of demonstrations, then it is impossible because an infinite regress of demonstrations is impossible. Most of the constraints on epistemic norms seem to be based in the finitude of the human mind, the speed of the computations it makes, and its capacity to store data.
The argument is persuasive in the abstract. But most armchair epistemologists, like Aristotle, are sensitive to it and go out of their way to avoid any psychologically unrealistic demands in their normative systems. It may be that they are in error regarding what is feasible and what is infeasible for the human mind to do. But generally their descriptions of the conditions of knowledge are so abstract that it is hard to see how they could fail to apply to concrete situations. The demand that our belief-forming capacities be "reliable" is a case in point.
In the present work, we have been, and will be, doing our epistemology from the armchair. It would always be preferable if it could be done with more extensive information about the functioning of human systems. We can only hope that our assumptions (meager as they are) about the very general features of our cognitive systems are not unrealistic. If they prove to be so, then our reasoning about knowledge would have to be modified.
Special Methodological Issues in the Descriptive Project
There has been a renewed interest in a priori knowledge, spurred by Noam Chomsky's account of language-learning. Chomsky claimed that there are innate linguistic universals that facilitate the learning of language. Several contemporary epistemologists have developed accounts of a priori knowledge. Laurence BonJour's account relies heavily on what he calls "metaphysics of a pretty hard-core kind" (In Defense of Pure Reason, p. 181). The reliance on metaphysics to explain how a priori knowledge arises is unusual and calls for justification. One could claim that the use of metaphysics is justified because of its explanatory value (which is itself an appeal to an epistemic norm!). Or, one could take on the daunting task of trying to provide an independent justification of the metaphysical theory.
The subject of "naturalized epistemology" is quite varied, and each variation carries methodological questions with it. W. V. Quine has proposed that epistemology is a branch of psychology and thus should be studied empirically by psychologists. This gives rise to the question of how the psychological study of the production of knowledge should be undertaken. Quine himself was a behaviorist, who thought that the object of study should be the pattern of responses to stimuli. Others, such as Paul and Patricia Churchland, look to the way the brain functions as a model of how human learning takes place. John Pollock makes artificial intelligence his starting point. The "evolutionary epistemologists" try to understand human knowledge from the standpoint of how the cognitive systems that produce knowledge have developed in a process of evolution. Once again it must be asked whether or why any of such wildly different approaches is superior to the others.
Scientific knowledge is a special kind of human knowledge which is investigated in its own right. There are serious disputes about what scientific knowledge is and how it should be investigated. Here once again the distinction between fact and value comes into play. In the earlier part of the twentieth century, epistemologists tried to develop epistemic norms which prescribed how science ought to proceed. Later in the century, there was a great shift to a more descriptive approach, which emphasizes scientific practice. This has led many philosophers of science to investigate in great detail the way scientists go about their business. One question that arises is what relevance these investigations have to knowledge, as opposed to being of merely sociological interest.
Method in the Validation Project
The validation project has at least two goals. One is to show that the standards of knowledge (on a given account of knowledge) are acceptable (by some standard of acceptability). The other is to show the extent to which a given set of standards allows potential knowers to have knowledge. Carrying out these two tasks involves important methodological questions.
We have already noted that there is some question as to what are the "right" epistemic norms. The task of justifying our norms may involve observation of what norms are used in practice. But some would say that the rightness of the norms needs to be demonstrated in some other way. For example, one might wish for a demonstration that a given norm is one which is "truth-conducive." It would be good to show that the use of the norm leads its user toward the truth and away from falsehood.
A general problem, highlighted by David Hume, is that any attempt to validate a certain norm requires the use of norms. Then the question becomes which norms are to be used as the standard for validation. If the norms are the same as those to be validated, then it would seem that we argue in a circle and do not validate anything. If the norms are different, then it may be difficult or impossible to carry out the validation.
Hume's famous case is the attempt to validate the use of inductive reasoning. He argued that inductive norms cannot meet the highest standard of validation, that of being logically true. The alternative is to try to validate induction on the basis of experience. Then he showed how this process of validation must assume the validity of induction, which would make the validation circular.
How should we react to this difficulty? One approach would be to keep on trying to validate induction. Another would be to give up and become skeptics with respect to alleged knowledge based on induction. A third approach, typified by particularism, would be to claim that inductive reasoning does not need validation beyond the fact that we are willing to say that its use results in knowledge. It is not clear how it could be shown that any one of these approaches is better than any other.
Skepticism is the view that we are unable to validate our knowledge claims, or that our claims to know are simply false. Traditionally in epistemology, it has been thought important to dispense with skeptical claims in one way or another. In contemporary epistemology, the attempt to avoid skepticism has been a primary motivation for the adoption of various methods.
Skepticism is generally regarded as a problem primarily for internalists. The fundamental skeptical problem, raised by Descartes, is that our internal evidence seems to be insufficient for us to determine whether it points to the truth or is the result of some kind of deception. For example, am I able to tell from the content of my experience whether I am awake or asleep right now? If I am asleep, then I do not know that I am now composing this module.
One of the appeals of externalism is that it does not require that one be able to determine that one's evidence is not deceptive. If I am, for example, able to tell, reliably, that I am composing at my computer, that is all I require to know that I am doing just that.
Another way to be sure that the skeptical problem is avoided is to adopt the particularist strategy and take knowledge as a starting-point which is not in question. Indeed, the desire to head off the skeptical problem before it begins is a primary motivation for the particularist "common sense" philosophy of Thomas Reid and G. E. Moore.
Fallibilism owes much of its appeal to the fact that it offers relief from a different kind of problem. The higher the standard of knowledge, the more difficult it is for knowledge to be attained. If the standard is at the maximum level of strength, as with infallibilism, it is quite difficult, if not impossible, to attain. A sufficiently low-standards fallibilism avoids this problem, and hence it is very appealing.
As has been seen, fallibilism has its own problem, which is that it seems difficult to believe that a single fallibilist standard of knowledge can capture all the cases in which we are willing to attribute knowledge. The contextualist responds by holding that the standards governing knowledge attribution vary with context. This allows the contextualist to account for ignorance-attribution under certain less-than-ordinary conditions. And this is taken to be an advantage of the contextualist account of knowledge-attribution.
Thus, four of the most popular contemporary approaches to how to do epistemology--externalism, particularism, fallibilism, and contextualism--are influenced to a great degree by the desire to avoid skeptical results. Those who think avoiding skepticism is a primary goal of epistemology appeal to our entrenched belief that we have the knowledge that we routinely attribute to ourselves. But those inclined toward internalism, methodism, infallibilism, and/or invariantism would ask why this entrenched belief should determine our approach to knowledge. Other intuitions can and do favor these views, as Lewis pointed out in the case of infallibilism. And there may be other reasons to favor approaches that threaten skepticism. How are we to decide which is best?
Rather than avoid the skeptical problem, some have tried to solve it, and a few even embrace skepticism. Attempts to refute skepticism have not earned general acceptance, but does this mean that we should give up trying just because we have not yet succeeded?
In this module, we have looked at some of the methodological questions that arise in each of the other projects of epistemology. A common theme is that the adoption of one method or another depends to a great extent on the goal of the investigation at issue. We have seen that there are many areas of disagreement about which method to use, and that in many cases it is difficult to see how the disagreement might be solved. As will be seen in the succeeding modules, there is not much reason to hold out hope for settling disputes about non-methodological issues in epistemology, either.