Suppose there is no creative intelligence behind the universe. In that case, no one would have designed my brain for the purpose of thinking. Thought would merely be a by-product of material evolution. But if that were really so, how could I ever believe that my thoughts are true? This is what makes the atheist hypothesis unreliable: if I don't believe in God, I cannot believe in myself, not even when I refute the existence of God.
The argument seems good against certain atheists, somewhat less so against others. But even the latter are asked to make an act of faith regarding the positive role of truth in the survival of the fittest.
I was just teaching Descartes, and we covered his view that atheists lack knowledge of the external world (and perhaps even of mathematics). That's because, unless you believe in God, you have no reason to believe that clear & distinct perceptions are true, etc.
This is very similar to Plantinga's argument against "Naturalism", in which Plantinga maintains that, in the absence of God, there's no reason to think that evolution would have designed us with reliable cognitive faculties (https://en.wikipedia.org/…/Evolutionary_argument_against_na…). From here, the theist might say either:
a. This shows that evolutionary naturalism is self-defeating, and therefore it rationally must be rejected. Or:
b. Given the fact that our faculties are reliable, theism just provides the best explanation, so that is evidence for theism.
What is wrong with this argument? I have several thoughts, but I’ll just emphasize one for now: getting adaptive behavior through false beliefs is not nearly as easy as Plantinga thinks. He imagines a case in which you see a tiger, and the adaptive behavior is to run away. This could be brought about by a desire to avoid being eaten + a belief that running away would prevent being eaten. Or it could be brought about equally well by a desire to be eaten + a belief that running away will help you be eaten. (Simplifying.)
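The belief/desire swap can be made concrete with a small sketch (a hypothetical model of my own, not Plantinga's; the agent names and outcome labels are illustrative): an agent acts on whichever option it believes leads to what it wants, so inverting both the belief and the desire leaves the behavior unchanged.

```python
# Minimal belief-desire decision rule (hypothetical illustration):
# pick the action whose believed outcome matches the desired outcome.

def choose_action(believed_outcome, desired_outcome, actions):
    """Return the first action the agent believes yields what it desires."""
    for action in actions:
        if believed_outcome[action] == desired_outcome:
            return action
    return None

actions = ["run", "stay"]

# Agent A: true beliefs plus the normal desire to avoid being eaten.
beliefs_a = {"run": "not eaten", "stay": "eaten"}
choice_a = choose_action(beliefs_a, "not eaten", actions)

# Agent B: inverted beliefs plus the inverted desire to BE eaten.
beliefs_b = {"run": "eaten", "stay": "not eaten"}
choice_b = choose_action(beliefs_b, "eaten", actions)

# Both agents run from the tiger: identical (adaptive) behavior from
# opposite beliefs and desires.
```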
Now, this might be fine if the only question the agent forms beliefs on is how to get eaten, and the only decision the agent ever makes is to run away or not run away from that tiger. But if there are other issues and decisions, then the person will need a belief system that is *extendable*: they will have to have a cognitive faculty that produces beliefs that not only are adaptive, but also will continue to be adaptive as more beliefs are added by that same mechanism.
One way of achieving this is to start from true beliefs, and to make valid (or at least highly cogent) inferences. Then you’ll get more true beliefs.
It is far from clear what the alternative mechanism might be. It mustn’t produce beliefs randomly, since then adaptive behavior would be rare. So one must try to think of a systematic mechanism that, when confronted with the sort of evidence we actually have, produces the belief “running away from tigers helps you get eaten by them”, and also continues to produce adaptive behavior as a wide range of different, largely unpredictable things happen to the agent. That’s really hard.
Ex.: If the person thinks that running from a tiger helps you get eaten, then when he himself is trying to catch an animal to eat it, will he also try to ensure that the animal runs away? Tell me the systematic mechanism that makes things work out in these and other situations.
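Extending the same sketch shows why the inverted belief system fails to generalize (again a hypothetical illustration, assuming the agent reuses his standing belief about what running does): in the new hunting situation, the true believer and the inverted believer now recommend different actions, and only the true believer's is adaptive.

```python
# Hypothetical extension: the agent is now the hunter and wants the prey
# eaten. He consults his standing belief about what running accomplishes.

def hunting_choice(believes_running_gets_you_eaten):
    """Return the hunter's action, given his belief about running."""
    if believes_running_gets_you_eaten:
        # Inverted belief: running helps the prey get eaten, so let it run.
        return "let prey run"
    # True belief: running lets the prey escape, so prevent it.
    return "block prey's escape"

true_believer = hunting_choice(False)  # adaptive: stops the prey escaping
inverted_believer = hunting_choice(True)  # maladaptive: the prey gets away
```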
I think this is the best response to Plantinga, because it’s the response that is most illuminating about how the system actually works, and why we actually have reliable faculties.
(I got something like this from Christopher Stephens, http://faculty.arts.ubc.ca/…/C.%20Stephens%20Selectively%20…)