Sunday, September 1, 2013

Week 4 lectures of Introduction to Mathematical Philosophy are about conditional (if-then) sentences. A lot of the material was pretty technical and I had a difficult time following it step by step. Consequently, I don't think I am in a position to offer a summary of it as in previous posts; I'll just talk briefly about my broad understanding of the main conclusions.

Conditionals are sentences of the form 'If A, then B'. They come in two kinds: indicative conditionals and subjunctive conditionals. A subjunctive conditional (also called a counterfactual) says what would have been the case if A had been true (even though, in fact, it is not). For example, 'If Oswald had not killed Kennedy, someone else would have'. An indicative conditional says what is the case if A is in fact true (which it may or may not be). For example, 'If Oswald did not kill Kennedy, someone else did'.

What I understand is that there are two ways of making sense of indicative conditionals. Thesis 1 says that indicative conditionals express propositions, and that their degrees of acceptability are given by the degrees of belief in those propositions. These propositions are expressed in mathematical form as sets of possible worlds, as discussed in a prior lecture. On Thesis 1, the conditional is represented as a proposition X -> Y, and its acceptability is the degree of belief P(X -> Y).
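To make Thesis 1 concrete for myself, here is a small Python sketch (not from the lectures; the worlds and the numbers are invented) of how a proposition can be modelled as a set of possible worlds, and a degree of belief in it as the total probability of those worlds. 'C' is short for 'the moon is made of cheese' and 'E' for 'the moon is edible', anticipating the example below.

# A toy possible-worlds model. Each world settles two questions:
# C: is the moon made of cheese?  E: is the moon edible?
worlds = {
    "w1": {"C": True,  "E": True},
    "w2": {"C": True,  "E": False},
    "w3": {"C": False, "E": True},
    "w4": {"C": False, "E": False},
}

# An (invented) degree-of-belief assignment over the worlds; it sums to 1.
belief = {"w1": 0.019, "w2": 0.001, "w3": 0.03, "w4": 0.95}

def proposition(condition):
    # A proposition is just the set of worlds where the condition holds.
    return {w for w, facts in worlds.items() if condition(facts)}

def degree_of_belief(prop):
    # Degree of belief in a proposition = total probability of its worlds.
    return sum(belief[w] for w in prop)

moon_is_cheese = proposition(lambda f: f["C"])
moon_is_edible = proposition(lambda f: f["E"])

print(degree_of_belief(moon_is_cheese))  # ~0.02
print(degree_of_belief(moon_is_edible))  # ~0.049

On Thesis 1, 'If X then Y' would itself have to pick out some such set of worlds, X -> Y, and its acceptability would be one's degree of belief in that set.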

Thesis 2 says that the degree of acceptability of an indicative conditional is given by the corresponding conditional probability: you suppose that A is true and then ask how probable B is on that supposition. Take "If the moon is made of cheese, then the moon is edible." If you suppose that the moon is made of cheese, then it is quite probable that it is edible. The conditional is highly acceptable, even though, taken on their own, the probabilities that the moon is made of cheese and that the moon is edible are both close to zero. Thesis 2 is represented as P(Y|X), the probability of Y given X. Thesis 2 was suggested by Ramsey.
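Continuing the toy model above, Thesis 2 is just the familiar ratio definition of conditional probability: suppose X, discard the worlds where X fails, and ask how much of the remaining belief falls on Y.

def conditional_probability(y_prop, x_prop):
    # P(Y | X) = P(X and Y) / P(X); undefined when P(X) is 0.
    return degree_of_belief(x_prop & y_prop) / degree_of_belief(x_prop)

# The cheese conditional scores high on Thesis 2 even though P(cheese)
# and P(edible) are both tiny in this model.
print(conditional_probability(moon_is_edible, moon_is_cheese))  # ~0.019 / 0.02 = 0.95

The numbers are made up, of course; the point is only that a conditional can be highly acceptable on Thesis 2 while both of its parts, taken separately, are very improbable.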

For a lot of philosophers, it seemed natural that these two measures of acceptability should coincide, i.e. that the degree of belief in the conditional proposition should equal the corresponding conditional probability (at least whenever P(X) > 0, so that the latter is defined). This is called Stalnaker's Hypothesis, and it is expressed as

P(X->Y) = P(Y|X)

In the words of Lewis: probabilities of conditionals [P(X->Y)] are conditional probabilities [P(Y|X)].
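A crude way to see that the hypothesis is a substantive claim rather than a tautology: take the simplest candidate proposition for X -> Y, the material conditional 'not-X or Y' (which is not Stalnaker's own proposal), and test it in the toy model above. The two sides already come apart:

# Reading X -> Y as the material conditional 'not-X or Y' -- NOT Stalnaker's
# proposal, just the simplest candidate proposition one might try.
material_conditional = proposition(lambda f: (not f["C"]) or f["E"])

print(degree_of_belief(material_conditional))                   # ~0.999
print(conditional_probability(moon_is_edible, moon_is_cheese))  # ~0.95

So if the hypothesis is to hold, the proposition expressed by the conditional would have to be something more subtle than the material conditional.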

David Lewis, however, famously showed in his Triviality Theorem that if Stalnaker's Hypothesis is accepted across the board, it leads to an absurd result. Through a series of steps, which I won't replicate in full, Lewis demonstrated using the hypothesis that

P(X->Y) = P(Y)

The probability of X->Y equals the probability of Y. This is obviously absurd. (The probability of "If the moon is made of cheese, then the moon is edible" is obviously not equal to the probability of "The moon is edible". It is not a formal contradiction, but it is nonetheless absurd.) Lewis' Triviality Theorem therefore showed that Stalnaker's Hypothesis is untenable.
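For the record, the core of the argument, as I understand it from standard presentations, is quite short once you grant that the hypothesis has to keep holding after you conditionalize your beliefs, in particular on Y and on not-Y. By the law of total probability,

P(X->Y) = P(X->Y | Y) * P(Y) + P(X->Y | not-Y) * P(not-Y)

Applying the hypothesis to the conditionalized belief functions gives

P(X->Y | Y) = P(Y | X and Y) = 1
P(X->Y | not-Y) = P(Y | X and not-Y) = 0

so P(X->Y) = 1 * P(Y) + 0 * P(not-Y) = P(Y). Combined with the hypothesis itself, this means P(Y|X) = P(Y) whenever the relevant probabilities are non-zero, i.e. no proposition would ever be probabilistically relevant to any other, which is the absurdity Lewis points to.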

There have been many philosophical responses to the triviality theorem. One, obviously, is to say that Stalnaker's Hypothesis is false: the two degrees of acceptability given by Thesis 1 and Thesis 2 are simply different, though it may be difficult to explain why. Another interesting response is the Suppositional Theory of conditionals, which suggests that indicative conditionals do not express propositions at all, and hence are neither true nor false.
