In my previous post, I mentioned that I was studying online courses in philosophy and that I was planning on writing summaries for each weekly unit to consolidate my understanding.
Well, that hasn't worked out as planned, has it?
After publishing the last post, I started working on the summary of the following unit but wasn't convinced by my writing. I then did what I should have avoided at all costs: I extended the summary to the next unit, and the next, and the next, until it grew out of proportion.
So this post is the summary of 4 weeks (units) of study and may consequently be hard to digest. I apologize in advance.
We seem to have an intuitive notion of what knowing something means. I know my birthday, but I don't know which day of the week it was. I don't know the weather in Russia, but I assume it is cold.
We also seem to distinguish between knowing, assuming, or guessing, but we would probably have difficulty putting into words how we make that distinction. A few examples may help illustrate.
Let's take the following proposition: "Males account for 50% of the population". I don't know for a fact that it is true, but I assume it is.
What's the difference? Well, I haven't verified its factuality. I must have read about it somewhere, and it intuitively makes sense, so I assume it is true.
OK, but how about something I do know? Well, I know that Paris is the capital of France. How do I know that? Good question indeed. I don't remember precisely when or where I first learned it, but there is no shadow of a doubt in my mind that it is true. My parents told me so; I learned it in school; I also remember learning that the capital of France changed a couple of times through history, Paris being the latest.
But now that I think of it, I have never actively verified its veracity, and I'm not even sure how I would go about doing so.
The difference between assuming and knowing seems subtle, but as much as I would be willing to bet my life savings on the second proposition (Paris being the capital of France), I certainly wouldn't do so with the first one.
There must be some kind of demarcation in my mind.
Finding out what constitutes such demarcation is a raison d'être of epistemology, the theory of knowledge. Essentially, epistemology tries to answer questions such as "can we define knowledge?" and, more importantly, "is there a reliable way for us to tell knowledge apart from mere belief?"
In my previous post, I introduced a widely used definition of propositional knowledge as justified true belief (JTB). We can write it in logical terms as below:
S knows that p iff (if and only if)
- p is true;
- S believes that p;
- S is justified in believing that p.
where S is a subject (someone) and p is a proposition such as "there is a book on the table in front of me" or "Paris is the capital of France."
Note that propositions are statements that are either true or false.
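The three conditions of JTB can be sketched as a toy predicate. Below is a minimal illustration in Python; the class and field names are my own invention, not a standard formalization:

```python
from dataclasses import dataclass

# Toy model of the JTB analysis (illustrative only).
@dataclass
class Belief:
    proposition: str
    is_true: bool    # the truth condition (granted, as the post assumes)
    believed: bool   # the belief condition
    justified: bool  # the justification condition

def is_knowledge(b: Belief) -> bool:
    """S knows that p iff p is true, S believes p, and S is justified in believing p."""
    return b.is_true and b.believed and b.justified

paris = Belief("Paris is the capital of France", is_true=True, believed=True, justified=True)
dice = Belief("It is midday (the dice said so)", is_true=True, believed=True, justified=False)

print(is_knowledge(paris))  # True
print(is_knowledge(dice))   # False: a true belief, but not a justified one
```

The dice example already hints at the problem discussed below: all the philosophical work hides inside that innocent-looking `justified` flag.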
An attentive and philosophically minded reader would probably ponder, "Hey, wait a minute. How do we even know that something is true?"
While the truth condition is in itself a fascinating subject, "truth" is of ontological nature and belongs to the realm of metaphysics, i.e., the study of reality, existence, and the world (perfect timing to plug my essay about reality).
In epistemology, we are interested in the justification part. So from now on, we will assume that true beliefs are, well, true!
What does it mean to be justified? As you can imagine, this is not a trivial question.
A reasonable interpretation of "being justified" in a belief is "having reasons" for that belief. I know there is a book on the table because I can see it. I know the time because I just looked at the clock in the kitchen.
Now we seem to have some form of intuition regarding reasons for a belief.
Intuitively speaking, "I know that it is midday because the sun is bright" strikes us as a better justification than "I know that it is midday because I rolled a die and got an odd number, which means midday."
Some form of rationality is at play here, which will be the focal point of this discussion.
Before we go any further, I should mention that JTB (justified true belief) is not always sufficient to qualify as knowledge. Let me illustrate with an example.
I know what time it is because I just looked at my clock in the living room. I know that my clock is reliable because every time I have compared its time with another source, such as my smartphone, the two were identical. So I feel justified in my belief about the time after looking at my clock.
Now, let's imagine for a minute that unbeknownst to me, the moment I looked at my clock, it had stopped exactly 24 hours earlier. The time I read was correct, and therefore I formed a true belief. I also feel justified in my belief because my clock has always been reliable. But as a matter of fact, it can't have been reliable in this instance because it stopped working 24 hours ago.
Am I still justified in my belief?
The example above is called a Gettier case, in homage to the philosopher Edmund Gettier. Gettier cases are situations where a seemingly justified true belief is formed by chance, therefore challenging the notion of justification, and by extension, JTB.
We are presented with the still-unsolved conundrum: either justified true belief is not a sufficient condition for knowledge, or there is more to justification than having reasons for our beliefs.
Alright, let's get back to our central topic. We saw that having reasons for a belief is not sufficient. We need to have "good" reasons. Another way to put it is that our beliefs need to be formed rationally.
Now, there is no such thing as absolute rationality. Rationality is usually goal-oriented.
Because I don't want to be more anxious than I already am, it is rational to tell myself that I don't need to worry about my grossly unprepared presentation tomorrow and that everything will be alright. It is clearly not true, but it is nonetheless a rational course of action.
In epistemology, however, getting to the truth is paramount, and so we are interested in rationality with a specific goal: an epistemic goal. Intuitively, our epistemic goal could be something along the lines of "to believe propositions that are true, and not believe propositions that are false."
Knowing that we are finite beings with a finite amount of memory and time to spend on earth, would it be rational to memorize all the names and phone numbers from a phonebook?
Probably not. So simply maximizing the number of true beliefs doesn't seem to be a wise strategy.
Conversely, not learning anything is the best way to minimize the number of false beliefs, but again, that would be very counterproductive.
We should probably update our epistemic goal as follows: maximize the number of non-trivial true beliefs while avoiding false ones.
Now concretely speaking, how can we demonstrate that we are being epistemically rational in forming our true beliefs? Are there any necessary conditions on which to base our judgment?
There are two opposing views on this topic: internalism and externalism.
Internalism vs Externalism
Before moving forward, let me introduce a very handy adjective: doxastic. Doxastic means relating to an individual's beliefs. A common use is doxastic attitude, i.e., one's attitude toward a belief.
The doxastic attitude of an agent A toward a proposition p can be one of the following:
- A believes that p
- A does not believe that p
- A suspends judgment regarding p
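The three attitudes above form a small, closed set, which can be encoded directly. This is just an illustrative encoding of my own, not a standard one:

```python
from enum import Enum, auto

# The three possible doxastic attitudes toward a proposition (illustrative).
class DoxasticAttitude(Enum):
    BELIEVE = auto()     # A believes that p
    DISBELIEVE = auto()  # A does not believe that p
    SUSPEND = auto()     # A suspends judgment regarding p

# An agent's attitudes toward a few propositions:
attitudes = {
    "Paris is the capital of France": DoxasticAttitude.BELIEVE,
    "It will rain tomorrow": DoxasticAttitude.SUSPEND,
}
```

Note that suspending judgment is a genuine third option, not the absence of an attitude, which is why it gets its own member rather than being modeled as `None`.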
Internalism versus externalism is a discussion about what is regarded as necessary to justify a doxastic attitude toward a belief.
Broadly speaking, internalism is the view that justification has to be based on things that are internal to us and which can be within reach of our awareness. Externalism simply denies that view and stipulates that there may be instances of knowledge that do not need to be justified internally.
There are three main formulations of internalism: accessibility internalism, mentalism, and the deontological conception of justification.
Accessibility internalism is probably the most intuitive formulation. We are justified in holding a belief only if we have mental access to a justifier (evidence) or could become aware of such a justifier upon reflection. Here, a justifier is usually another justified belief, which could also be inferred by an observation, testimony, or experience.
I know the time because I looked at the clock, and I am aware of the fact that looking at the clock is what justifies my belief. I know that there is a new restaurant opening in town, and upon reflection, I remember having seen an ad on TV. Both are examples of accessibility internalism.
This view has a few issues.
- It seems too restrictive: intuitively, there are instances when one knows something but wouldn't have access to a justifier, even upon reflection. I know my birthday, but I have no recollection of how I came to know it.
- The problem of inference: when I'm looking at my clock to read the time, I am inferring a belief from assumptions that may not be trivial: for example, that I can trust my eyes; that it is indeed a clock that I'm looking at; that I know how to read the time from a clock; etc.
- Agrippa's trilemma: we are trying to justify a true belief with another true belief, but what justifies that second belief? We are faced with Agrippa's trilemma to answer that question.
Mentalism is less intuitive but offers a broader necessary condition for justification: justifiers are not limited to other true beliefs but can also be mental states. This accounts for sensory states, which in themselves wouldn't be considered beliefs without the use of inference.
For example, I know it is warm today because I'm feeling the warmth. I'm in a mental state of "feeling warmth," which becomes my justifier.
A way to defend this view is to imagine a scenario where Steve and Philip are in a living room with AC. Steve decides to go outside and experience the weather, while Philip stays inside and learns from the weather forecast that today is a hot day. In this example, we would intuitively grant a higher degree of justification to Steve because of his first-hand experience of the heat, meaning that sensory states have to account for something.
Of course, any sensory state is susceptible to the illusion problem; we can easily devise an experiment regarding heat perception. Prepare three bowls, one filled with cold water, one with warm water, and one with water at room temperature. Put your left hand in the cold water bowl and your right hand in the warm water bowl for a minute, and then put both hands in the room temperature water bowl. Your hands should perceive very different sensations from the same bowl, one of warmth and one of cold.
In the example with Steve and Philip, if instead of a cool room with the AC on, Steve went outside from a sauna, he may have had a very different experience; from his sensory state (namely, heat perception), would he be justified in believing that "it's not that hot outside today"?
Deontological conception of justification
Similarly to the deontological conception of morality, where something is moral if it follows certain principles (such as "you shall not kill"), a deontological conception of epistemic justification is guided by intellectual principles or values that one must live up to in order to be justified in one's beliefs.
Examples of such principles could be "always double-check facts on Wikipedia", or "read at least two newspapers with opposite political views", or "never use leverage with Forex" (note to self).
This view also has a few issues.
- The most obvious one is that deontological principles are subjective and will differ from person to person. How can I be sure that my principles are "good" enough or not mutually contradictory? What if John and Sylvia acquire the same true belief by following two contradictory principles? Can they both be justified?
- Another important issue is that young children, who have not yet developed critical thinking and intellectual principles, would be denied justification for their beliefs, and therefore knowledge. That seems a little bit harsh and intuitively wrong.
So far, the three formulations of epistemic internalism above don't seem to offer a picture that fully reflects our intuitive conception of knowledge. Either internalism cannot account for all cases of belief justification, or JTB doesn't encompass all instances of knowledge.
Fully embracing internalism means giving up on JTB as the definition of knowledge.
In the abstract, externalism denies internalism, but this negative definition is not very practical; even if we accept that knowledge can be justified without having access to an internal justification, it doesn't tell us how.
We are interested in externalist positive accounts of knowledge acquisition.
One such account is reliabilism. Instead of focusing on the justification of the doxastic attitude toward a belief, reliabilism emphasizes the method by which such belief was formed.
A method is said to be reliable if it leads to the truth more often than not.
For example, looking at my watch seems to be a more reliable method to know the time than looking at the sun's position in the sky.
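The "more often than not" criterion lends itself to a quick numeric sketch. The accuracy figures below are made up purely for illustration; reliabilism itself doesn't prescribe any particular numbers:

```python
import random

random.seed(0)  # for reproducibility of this toy example

# Toy simulation of reliabilism's core idea: a belief-forming method is
# reliable if it yields a true belief more often than not (> 50%).
def hit_rate(accuracy: float, trials: int = 10_000) -> float:
    """Fraction of trials in which the method produced a true belief."""
    hits = sum(random.random() < accuracy for _ in range(trials))
    return hits / trials

watch = hit_rate(0.95)  # reading a working watch (assumed 95% accurate)
sun = hit_rate(0.40)    # eyeballing the sun's position (assumed 40% accurate)

print(f"watch: {watch:.2f}, sun: {sun:.2f}")
assert watch > 0.5 > sun  # only the watch clears the reliability bar
```

The threshold of one half is the weakest reading of "more often than not"; stricter versions of reliabilism can demand a much higher rate, but the structure of the criterion is the same.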
Below are a few examples of reliable and non-reliable methods of acquiring knowledge.
Regarding the time:
- non-reliable: asking a random passerby in the street
- reliable: checking three different sources of time (web, phone, television) that are synchronized with an atomic clock and averaging them
Regarding the capital of France:
- non-reliable: ask a Parisian if Paris is the capital of France
- reliable: take French, English, and German history books and cross-reference historical cues about when Paris became the capital of France and whether it has changed over time
At first glance, it seems that reliabilism could be a strong account of knowledge, one not susceptible to Gettier cases; in the clock example earlier, I wouldn't be justified in my belief because the means of acquisition (namely, reading a broken clock) is not reliable. Therefore, I can't possibly know the time.
Case closed! Or is it?
Problem with reliabilism
Let's modify the clock example a little bit.
Let's imagine that, unbeknownst to me, the clock has been broken for 24 hours. But now let's also imagine that, without me noticing, a friend has been resetting the clock so that every time I look at it, it displays the right time (yes, some people have way too much time on their hands).
In this case, looking at the clock to know the time is reliable because it gives the correct time every time (thank you, friend!).
What is now problematic is that the truth of my belief is not a result of the acquisition process (i.e., looking at the clock) but depends on an external cause, namely my friend. In other words, I am lucky that it is true, and intuitively speaking, we wouldn't say that I know the time.
So process reliability is not sufficient. We need some form of connection between a belief's acquisition method and its truth.
Who could better serve that purpose than the belief-acquiring agent herself?
We all know people who always seem to get things right, to get to the truth in any situation. They are usually intelligent, conscientious, and have strong attention to detail. They seem to possess certain intellectual virtues.
An epistemic virtue is a trait or aptitude that allows an agent to be more likely to acquire true beliefs than someone without that trait. Virtue epistemology is an externalist view that is interested in the agent's epistemic virtues rather than the reliability of the acquisition process.
The main problem in the modified clock example earlier is not that true beliefs cannot be acquired. The person playing a trick on us is making sure that the time will be correct any time we look at the clock. We are therefore forming true beliefs about time by looking at the clock.
The issue, however, is that we justify our beliefs by assuming the reliability of the clock itself as a source of truth, which in our example is not the case.
By shifting our focus from the source to the agent, we can evade the problem. We can assume that an agent with epistemic virtues such as conscientiousness and attention to detail would have noticed inconsistencies in time readings or would have probably double-checked the time with a different source (such as a smartphone).
But while we have evaded our Gettier cases, we have created another problem regarding justification. Without a significant amount of inductive inference, we can't justify that an epistemically virtuous agent will "always" get to the truth. Therefore, possessing epistemic virtues cannot be a sufficient condition for knowledge on its own. There is still something missing in the equation.
This article has become much longer than I originally planned. I tried to briefly introduce the different views and positions about epistemic rationality, but my limited understanding of the topic didn't allow me to do so in clear and concise prose.
I hope, nonetheless, that the reader can now appreciate how deep and fascinating the problem of justification is and why it has captivated the minds of so many philosophers.
We want to believe that we are rational beings. It is comforting to claim that epistemic rationality is the foundation of our justifications, even if we don't have much evidence to support that claim.
The whole debate of internalism versus externalism is not only epistemic but also moral in nature. There is a question of moral responsibility. Can I be responsible for my own knowledge? Can knowledge exist inside one person? Or is knowledge fundamentally social in nature?
Epistemology is still a work in progress. We have yet to find a model that accurately represents our intuitive conception of knowledge, with precise conditions that are all necessary and jointly sufficient, and we may never find one. But the journey has been intellectually rewarding so far.
Speaking of reward, as a token of appreciation to the courageous and patient readers who made it to the conclusion, let me share a great video from carneades.org that summarizes the different epistemic positions regarding knowledge.
(I should have probably started with it, but then you wouldn't have read the rest of the post!)
Carneades explains the different views by using the following trilemma (three statements that cannot all be true):
(1) I don't know that I'm not being systematically deceived
(2) To have knowledge, I must know that I am not systematically deceived
(3) I know there is a hand in front of my face
Internalism rejects (1) and accepts (2) and (3); we can overcome deception from inside, by having access to internal justifications.
Externalism rejects (2) and accepts (1) and (3); we don't need to know why we know something. What matters is that knowledge is possible.
And as a bonus (outside of the scope of this article), Skepticism rejects (3) and accepts (1) and (2); or to paraphrase Socrates, all I know is that I know nothing.
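The trilemma can be checked mechanically. Below is a small propositional encoding of my own devising (the two boolean atoms are just a convenient compression of the three statements), verifying that no assignment makes all three statements true at once, and that each position survives by rejecting exactly one:

```python
from itertools import product

# Two atomic facts:
#   k_nd   = "I know that I am not systematically deceived"
#   k_hand = "I know there is a hand in front of my face"
def trilemma(k_nd: bool, k_hand: bool):
    s1 = not k_nd                 # (1) I don't know I'm not deceived
    s2 = (not k_hand) or k_nd     # (2) knowing the hand requires knowing I'm not deceived
    s3 = k_hand                   # (3) I know there is a hand in front of my face
    return s1, s2, s3

# No assignment satisfies all three statements simultaneously:
assert not any(all(trilemma(a, b)) for a, b in product([False, True], repeat=2))

# Internalism rejects (1): I do know I'm not deceived.
assert trilemma(True, True) == (False, True, True)
# Externalism rejects (2): knowledge doesn't require ruling out deception.
assert trilemma(False, True) == (True, False, True)
# Skepticism rejects (3): I don't even know there's a hand there.
assert trilemma(False, False) == (True, True, False)
```

Of course, the philosophical work lies in deciding which statement to reject, not in verifying the inconsistency; the code only confirms that rejecting one is unavoidable.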
References
- The Analysis of Knowledge, Stanford Encyclopedia of Philosophy
- What Is This Thing Called Knowledge?, Duncan Pritchard
- Internalist vs. Externalist Conceptions of Epistemic Justification, Stanford Encyclopedia of Philosophy