Tuesday, December 4, 2007
1) Every metaphysically modal statement is logically equivalent to a counterfactual statement
2) For any two statements, if one is logically equivalent to the other, then whenever one is in an epistemic position to know the first, one is necessarily in an epistemic position to know the second
3) (1&2) -> 4
4) If we're in an epistemic position to know counter-factual claims, we're in an epistemic position to know metaphysically modal claims
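For concreteness, the equivalences Williamson appeals to in (1) can be sketched as follows. This is my reconstruction of his (17) and (18), not a quotation; I use a counterfactual-conditional symbol (written \boxright, which would need to be defined) and ⊥ for an arbitrary contradiction:

```latex
% Necessity and possibility as counterfactuals (my sketch):
% \boxright stands for the counterfactual conditional "if ... were, would ..."
\Box A \;\leftrightarrow\; (\neg A \boxright \bot)      % roughly (17)
\Diamond A \;\leftrightarrow\; \neg(A \boxright \bot)   % roughly (18)
```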
Chris denies (2). After some thought, I think this is obviously right. For instance, consider a tautology too complex for the human mind to grasp. Then take ~(P&~P). The latter is equivalent to the former, yet we're in no epistemic position to know the former. Alternatively, consider someone who's convinced by McGee's argument that modus ponens is invalid. Suppose further that this person has (given his evidence) every reason to believe McGee and no reason not to. Arguably, this person could know P&(P -> Q) and have excellent independent evidence for ~Q. Not only would he be in no epistemic position to know Q, he'd be in a very good epistemic position to justifiably doubt Q.
Williamson is a smart guy, and it's hard to believe that he simply missed this. So let's see what he says:
"Given that equivalences (17) and (18) are logically true, metaphysically modal thinking is logically equivalent to a special case of counterfactual thinking, and the epistemology of the former is tantamount to a special case of the epistemology of the latter. Whoever has what it takes to understand the counterfactual conditional and the elementary logical auxiliaries ~ and ⊥ has what it takes to understand the possibility and necessity operators."
He goes on to consider something like Chris's objection a bit later:
"Indeed, we have no sufficient reason to regard any of the equivalences as strict synonymies. That detracts little from their philosophical significance, for failure of strict synonymy does not imply failure of logical equivalence. The main philosophical concerns about possibility and necessity apply equally to anything logically equivalent to possibility or necessity. A non-modal analogy: ~A is logically equivalent to A -> ⊥, but presumably they are not strictly synonymous; nevertheless, once we have established that a creature can handle -> and ⊥, we have established that it can handle something logically equivalent to negation, which answers the most interesting questions about its ability to handle negation."
I'm not sure exactly what to make of this, but it seems like Williamson has a specific sort of logical equivalence in mind: not logical equivalence between statements, but rather equivalence between logical operations. A new argument may be set up as follows:
1) If one is in a cognitive position to fluently use a logical operator P, and applying P to any formula(s) is logically equivalent to applying Q to those formula(s), then one is in a cognitive position to fluently use the logical operator Q.
2) □ is logically equivalent to C, where CA is defined as (~A -> contradiction).
3) We're in a cognitive position to fluently use C
4) (1&2&3) -> (5)
5) We're in a cognitive position to fluently use □
I see this as no better off than the original argument. For instance, C' could be defined as (~A -> (negation of an incredibly complex tautology)), and an analogous argument would say we're in a cognitive position to use C'. However, we're not in a cognitive position to use C'.
One could respond that we ARE in a cognitive position to use C', we're just not aware of the fact. This seems weak.
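To see the parity the C' move trades on, read the conditional materially, as in Williamson's own non-modal analogy; then C and C' are checkably the same operation. This is just a toy truth-table sketch (the names C, C_prime, and the single placeholder tautology are mine):

```python
# Material conditional: A -> B is (not A) or B
def implies(p, q):
    return (not p) or q

# C(A) := (~A -> contradiction)
def C(a):
    return implies(not a, False)

# C'(A) := (~A -> negation-of-some-tautology); a tautology's negation
# is False at every valuation, however complex the tautology is
def C_prime(a):
    complex_tautology = True  # stand-in: tautologies are true everywhere
    return implies(not a, not complex_tautology)

# Both operations agree with each other (and with plain A) everywhere
same = all(C(a) == C_prime(a) == a for a in (True, False))
print(same)  # True
```

On the material reading, the "equivalence between operations" is preserved under arbitrarily complex consequents, which is exactly why the cognitive-position premise looks too strong.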
Furthermore, even if this revised argument is sound, it misses the point. We want to know how we know metaphysically modal truths, not how we've come into a position to use them (or something logically equivalent to them). If I want to know how you found out that all renates are alive, I won't be satisfied with the answer "I found out that all cordates are alive". More is needed.
I found this all quite vexing. Any thoughts, anyone?
Monday, December 3, 2007
The thought experiment with water and twater tries to convince us that our beliefs and desires are not just a matter of what is going on in our minds, if I am interpreting it correctly. When I change the example it seems clearer; consider a case between Mary and Suzy. When Mary talks or thinks about her mother, or about what the concept of mother means, she refers to someone who gave birth to her and is biologically similar to her. Suzy was adopted (something Mary is not aware of), and Suzy uses the word mother to refer to her adoptive mother. So when Mary talks to Suzy about Suzy's mother, Mary takes herself to be talking about Suzy's biological mother. Mary and Suzy both use the same word, mother, but their uses of it have different extensions. Maybe this is not a good example, because the word mother has many different extensions depending on who uses it, but we can think of it another way, by positing a twin earth. On twin earth, twin-Mary is exactly the same as earthian-Mary, along with everything else, except that mothers on twin earth are not biologically similar to their children. If Mary were unknowingly switched in her sleep with twin-Mary, she would be using the word mother to refer to someone different than she believes. I'm not exactly sure why these examples show that the contents of beliefs do not supervene on our internal states. It does not seem like a stretch to think that your surroundings together with what goes on inside you fix what beliefs you hold. The examples then show that beliefs that seem to be of the same belief-type are actually not, because they have different content.
So the beliefs are not the same even though they come from internally identical people, and thus there has to be something other than the internal that bears on belief. But can't we just say that the fact that we think the beliefs are not the same shows that you and twin-you are not exactly the same, and that it should therefore be expected that your beliefs and twin-you's beliefs wouldn't be exactly the same? What about two different people placed in exactly the same situations and surroundings, acting and doing exactly the same things and having exactly the same internal goings-on, but in two parallel worlds or universes: should each and every one of their thoughts be the same? This example also seems to me to show that beliefs are fixed by both internal and external components. The examples just seem to give you what would already seem intuitive.
Sunday, December 2, 2007
Consider the sentences:
(i) If Jones had taken arsenic, he would have shown just exactly those symptoms which he does in fact show.
(iH) If Jones takes arsenic, he [normally] shows just exactly those symptoms which he does in fact show.
(1) If a situation S can be produced in which (i) and (iH) differ in truth value, then (iH) cannot do the same work (i) does in all situations.
(2) Let S be the case in which Jones’ symptoms are not those he would normally show in arsenic poisoning but those he would show given that he had, unusually, been fasting for the previous 72 hours.
(3) S is a case in which (i) is true while (iH) is false, that is, a case in which they differ in truth value.
(4) Thus, (iH) cannot do the same work (i) does in all situations.
Now I don’t know anything about habituals, and a fortiori I lack knowledge about habitual semantics. However, I would imagine it to be something like this: the habitual conditional A [h]-> B is true iff in A circumstances, B normally comes about. This seems to line up with Williamson’s evaluation of (iH) as false. The problem I have is with TW’s evaluation of (i) as true under S.
It seems to me that if Jones had taken arsenic he would *not* have shown just exactly those symptoms which he does in fact show under S, since the symptoms in S are *not* those he would normally show in arsenic poisoning but those he would show given that he had, unusually, been fasting for the previous 72 hours.
Even spelling this out in terms of possible world semantics seems to point to the falsity of (i). A non-insane principle is the following: worlds where “normal” situations occur are closer to the actual world than worlds in which “non-normal” situations occur. But given this principle and S, if we go to the closest world where Jones takes arsenic, it is not the case that he exhibits the symptoms he currently exhibits at @ (since the @-symptoms are not arsenic symptoms). Hence, (i) is false.
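Here is a toy, Lewis-style sketch of that evaluation. The world list, distance numbers, and labels are hypothetical stand-ins I've built to encode S together with the "normal worlds are closer" principle; nothing here is Williamson's own apparatus:

```python
# Toy closest-worlds evaluation of (i): "if Jones had taken arsenic,
# he would have shown just the symptoms he in fact shows", under S.
# Smaller "dist" = closer to the actual world @; per the principle,
# the world with normal arsenic symptoms is closer than the abnormal one.
worlds = [
    {"name": "w1", "arsenic": True, "symptoms": "arsenic-typical", "dist": 1},
    {"name": "w2", "arsenic": True, "symptoms": "fasting-typical", "dist": 2},
]

# In S, Jones' actual symptoms are the fasting-typical ones.
actual_symptoms = "fasting-typical"

def would(antecedent, consequent):
    """A []-> B: B holds at all the closest antecedent-worlds."""
    a_worlds = [w for w in worlds if antecedent(w)]
    if not a_worlds:
        return True  # vacuously true when there are no A-worlds
    closest = min(w["dist"] for w in a_worlds)
    return all(consequent(w) for w in a_worlds if w["dist"] == closest)

# (i): had Jones taken arsenic, he would show his actual symptoms.
result = would(lambda w: w["arsenic"],
               lambda w: w["symptoms"] == actual_symptoms)
print(result)  # False: the closest arsenic-world shows arsenic-typical symptoms
```

Given the closeness principle, the model delivers the falsity of (i) under S, matching the worry above.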
Williamson is super good at logic, so I’m pretty sure I’m the one in error but I do not currently see how his counterexample works.
NECESSITY states that if it's necessary that A implies B, then if A were the case then B would be the case. One could hold that the principle fails when A is an impossibility. If A is impossible, then trivially B holds in every possible world at which A holds (there are no such worlds). However, even so, it might still be intelligible (and correct) to say that were A to hold, B would not hold.
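As I read it, NECESSITY and the vacuity point can be put like this (my rendering, with \boxright standing for the counterfactual conditional):

```latex
% NECESSITY: a strict implication yields the corresponding counterfactual
\Box(A \rightarrow B) \;\rightarrow\; (A \boxright B)

% Vacuity at impossible antecedents: if A is impossible, there are no
% A-worlds, so the counterfactual comes out (vacuously) true for every B
\neg\Diamond A \;\rightarrow\; (A \boxright B) \quad \text{for all } B
```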
Williamson makes a case against this when considering a mathematician running a reductio argument. His main strategy is to trade off truth for assertibility in such cases. Suppose you're trying to prove Fermat's Last Theorem by using an impossible solution (S) to it to generate an absurdity. You can use counterfactual reasoning to assert S -> A, but you can't assert S -> ~S for the purposes of your proposed proof. He says this does not tell against the truth of S -> ~S. After all, a contradiction could eventually be used to prove anything, even if it takes more steps.
My concern is that strategies like this won't work for some logics. Suppose paraconsistent logics are necessarily not correct. It still seems right to say that if they were correct then certain things would hold, and that if they were correct certain other things wouldn't. After all, they're specifically designed to handle such cases.
Sunday, November 25, 2007
This brings to the fore all kinds of questions about how reference attaches to word meaning and use. As far as I remember, the historical-causal story of how reference fixing works is something like the following: the reference of any name ‘blah’ is fixed by an original user in a “baptism ceremony,” which consists of the original user either ostensively or descriptively picking out the object blah and saying ‘blah’. The meaning (including reference) of ‘blah’ is then transmitted to new users of ‘blah’ when they intend to use ‘blah’ as the original user did. In this way, a long historical-causal chain of ‘blah’-use explains the transmission of the meaning along with the reference of the word.
So far so good.
Now let us turn our attention to poor tharthritis-having Fred. Presumably, Fred intends to use ‘arthritis’ in the same way that his original community does. Yet he also presumably knows that people outside his community use ‘arthritis’ (since otherwise he would not use ‘arthritis’ when complaining to the doctor), and no doubt he also intends to use ‘arthritis’ in the way that the doctor does when he talks to the doctor. But now he intends to use ‘arthritis’ as both his community and the doctor do. On the historical-causal view it is unclear what the word ‘arthritis’ means in his mouth. Is the extension of his use of ‘arthritis’, arthritis? Is it tharthritis? Is it both!?
One way to disambiguate would be to claim that the meaning of a word W in the mouth of a user X is always the meaning that W first had in X’s mouth unless X consciously corrects his meaning of the word. But if this holds, then it looks like Jones’ use of ‘water’ does not eventually come to mean ‘twater’ as Kim claims.
Another way to disambiguate would be to hold that the meaning of a word W in the mouth of a user X at time t is the meaning that X intends W to have at t (spelled out in an appropriate historical-causal way). But then it appears that Fred’s use of ‘arthritis’ might end up referring to both arthritis and tharthritis. If this is true, it would certainly be a startling discovery (at least for me). Most semantic theories that I know of only allow words in literal sentences to have single referents.
Of course there might be other ways to disambiguate the meaning of W. Perhaps the meaning of a word W in the mouth of a user X at time t is the meaning that the community C adorns W with, where C is the community that X spends most of his time in. However, this account has the exact same problem as the second one whenever X has spent *equal* amounts of time in several communities. In any case, it has been interesting reviewing the thought experiments of Putnam and Burge.
(1) If it is not the case that our intentional states supervene on our internal states, then wide content is the correct account of mental goings-on.
(2) If our intentional states supervene on our internal states, then our intentional states wholly and correctly reflect our internal states.
(3) If our intentional states wholly and correctly reflect our internal states, then the meanings of the words we use to express these states are correct.
(4) It is not the case that the meanings of the words we use to express these states are correct.
(5) So, it is false that our intentional states wholly and correctly reflect our internal states.
(6) Thus, it is not the case that our intentional states supervene on our internal states.
(7) Therefore, wide content is the correct account of mental goings-on.
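The propositional skeleton of the argument above can be checked for validity by brute force over truth-value assignments; the single-letter names below are my own abbreviations, not the author's:

```python
from itertools import product

def implies(p, q):
    return (not p) or q

# S: intentional states supervene on internal states
# W: wide content is the correct account
# R: intentional states wholly and correctly reflect internal states
# C: the meanings of the words we use are correct
valid = True
for S, W, R, C in product((True, False), repeat=4):
    premises_hold = (implies(not S, W)   # premise (1)
                     and implies(S, R)   # premise (2)
                     and implies(R, C)   # premise (3)
                     and not C)          # premise (4)
    if premises_hold:
        # conclusions (5)-(7): ~R, ~S, and W must all hold
        if not ((not R) and (not S) and W):
            valid = False
print(valid)  # True
```

So the argument is formally valid; as the discussion below says, the action is all in whether the premises, especially (3), are true.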
Premise (1) is definitional as far as wide content goes, so I'll leave it alone.
Premises (5), (6), and (7) follow inferentially from the other premises, so I'll leave them alone.
Premise (4) is justified by the lovely thought experiments created by Putnam and Burge, which show instances where our understanding of the meanings of the words we use to express ourselves is incomplete or possibly (in the Burge example) defective. But we use them anyhow, even when we know this may be the case. I'll leave this premise alone, because I think we do this. A lot.
Premises (2) and (3) are the tricky ones. (2) I'll agree with, because if what it takes for one state to supervene on another state (that is, the definition of supervenience in play) is to wholly and correctly reflect that state, then no argument from me. Premise (3) is probably the premise that makes the biggest assumption, and thus it requires the most justification. An objection could be made that:
If we accept premise (4), then either it is not the case that the meanings of words we use to express these states are correct, or our intentional states wholly and correctly reflect our internal states.
I accept premise (4).
So, either it is not the case that the meanings of words we use to express these states are correct, or our intentional states wholly and correctly reflect our internal states.
This implies that if the meanings of the words we use to express these states are correct, then our intentional states wholly and correctly reflect our internal states.
This seems okay to me. When we use words to express things, it is when the words are correct that we say they are expressing the right state. If we use the wrong words, we don't say that they express the right state; furthermore, when we get the words wrong, that is, use the wrong expression, we don't say that some incorrect reflection of what is going on with the states inside our heads made this happen. We are just getting the words wrong. This is undoubtedly not a great example, but say I utter the sentence "I am the king of the world!". To claim that my misstep in expression, i.e. using wrong words like 'king' when I darn well know that kings are male (n.b. I am not) and that kings do not rule the whole world, means that my intentional states are incorrectly reflecting my internal states, is bunk. Even if I were unaware of the correct definition of what a king is, or of what it feels like to be the king of the whole world, that does not mean I am incorrectly attributing my expression to some feeling of grandiose and regal superiority. In short, the entailment between expressions and intentional / internal states, used in favour of defending wide mental content over narrow content, holds expressions (I believe, ha!) to far too high a standard of infallibility.
(1) Every vixen is a vixen.
UAt: Necessarily, whoever grasps the thought every vixen is a vixen assents to it.
UDAt: Necessarily, whoever grasps the thought every vixen is a vixen has a disposition to assent to it.
Williamson wants to know whether we can maintain UAt or UDAt while acknowledging that UA1 and UDA1 fail by the Stephen and Peter counterexample.
(2) Stephen assents to (1) if and only if he assents to the thought every vixen is a vixen.
(3) He doesn’t assent to (1), so he doesn’t assent to the thought every vixen is a vixen.
(4) It follows that if t is the thought every vixen is a vixen, then Stephen doesn’t assent to t, and he is a counterexample to UAt.
(5) If the thought that he associates with the sentence ‘Every vixen is a vixen’ is not the thought every vixen is a vixen, then he is not a counterexample.
(6) Using the word ‘thought’ this way indicates that the inferential differences between Peter, Stephen, and us represent differences between the thoughts we associate with (1): Peter and Stephen associate thoughts different from our own when considering (1).
(7) If (6), then we should translate their idiolects non-homophonically into ours.
(8) Such a translation scheme would amount to a refusal to acknowledge the full challenge which Peter and Stephen have issued to (1).
(9) If we claim (6), then the disagreement between Peter and Stephen and us sounds less threatening only because it bundles together logical and semantic differences without unifying them.
(10) So, the links from understanding to assent and dispositions to assent fail for thought like they do for language.
For reasons other than (8), I think (6) fails because it separates thought from language. If (6) holds, then the thought someone associates with a sentence isn’t just the thought expressed by that sentence. This seems kind of strange to accept. I think Williamson’s argument that UAt and UDAt fail is a good one.