Logic has often been credited with more power than it really has, and airtight logical validity is often held up as the ultimate standard for good reasoning.
The medieval Church even thought it capable of demonstrating by proofs alone something as important as the existence of God, a confidence resting partly on the supposed infallibility of Aristotle as an authority. Thinkers such as René Descartes held logic to have an ultimate metaphysical grounding, with God Himself as its guarantor.
I think that this was and is mistaken.
Don’t get me wrong: logic is crucially important. It’s indispensable for deriving testable predictions from scientific hypotheses, it underlies computing, and it’s used throughout mathematics and philosophy.
Modern logic has moved on from the old Aristotelian model since developments in the 19th century, and many new systems of logic have arisen since then. Inductive reasoning, too, became widely used with the beginning of the scientific revolution in the early modern period, and formal rules for it were developed by none other than John Stuart Mill in the 19th century, rules we still use in scientific inquiry because they work: they reliably, though not infallibly, get us where we need to go in gaining new knowledge.
But what’s so bad about deduction? Nothing, if we use it for what it’s designed to do: it is a fallible system designed by fallible human beings with a tragically fallible grasp of logic. The trouble begins when we give it more credit than it’s due, or apply it outside its domain of use.
Logic, and I mean pure deduction here, depends entirely for its usefulness on the rules we stipulate for the system, and those rules depend on what you intend to do with it. Different modern logics use different rules and serve purposes for which some older systems are inadequate.
There are the close descendants of Aristotelian logic, sentential and predicate logic. There are also fuzzy logic, three-valued logic, and quantum logic, and the list grows longer with each new development by modern logicians.
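To make one of these alternatives concrete, here is a minimal sketch, in Python, of Łukasiewicz’s three-valued connectives, one standard version of the three-valued logic mentioned above (the choice of Łukasiewicz’s system, and the encoding of truth values as numbers, are my illustrative assumptions, not anything the essay commits to):

```python
# A sketch of Lukasiewicz three-valued logic.
# Truth values: 0 = false, 0.5 = indeterminate, 1 = true.

def neg(a):
    """Negation: flip the value around the midpoint."""
    return 1 - a

def conj(a, b):
    """Conjunction is the minimum of the two values."""
    return min(a, b)

def disj(a, b):
    """Disjunction is the maximum of the two values."""
    return max(a, b)

def implies(a, b):
    """Lukasiewicz implication: fully true unless a outranks b."""
    return min(1, 1 - a + b)

# Unlike classical logic, "p or not-p" need not come out fully true:
p = 0.5
print(disj(p, neg(p)))  # -> 0.5, not 1
```

The point of the sketch is just the one in the prose: change the stipulated rules and you change what counts as a logical truth, as the failure of the excluded middle here shows.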
We recognize the limits of logic, the need for different but internally consistent ways of working with it, and that logic rests not on some ultimate bedrock of metaphysical certitude but on the rules we formulate to reliably derive our output. As long as we apply those rules consistently within a system, and as long as the output and the logic itself withstand the tests we apply, all is well.
But the rules of every system we use depend entirely on human-discovered conventions: arbitrary but useful rules, hardly set in stone, chosen to suit whatever purposes we pursue with that system.
It seems to me that even airtight logical validity rests on what is both assumed and reliable within the right domain of knowledge, yet we cannot reason in any form without making such assumptions, even when the assumptions, the rules themselves, obey no necessary first principles.
Airtight logical validity depends on truth-preservation, and that prevents it from telling us anything we don’t already start with, imply, or assume. Hence inductive reasoning, hence science. The best we may hope for is to use any system of reasoning for what it’s built to do, consistently using those conventions we discover and test by experience and experiment.
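The truth-preservation point above can be illustrated with a small brute-force check (the encoding and the modus ponens example are my own illustrative choices): an argument is deductively valid exactly when no assignment of truth values makes every premise true and the conclusion false, so a valid conclusion can never outrun what the premises already contain.

```python
from itertools import product

def implies(p, q):
    """Classical material implication."""
    return (not p) or q

# Modus ponens: from premises "p" and "p implies q", conclude "q".
# Enumerate every truth assignment; validity means the conclusion
# holds in every case where both premises hold.
valid = all(
    q                            # conclusion must be true...
    for p, q in product([True, False], repeat=2)
    if p and implies(p, q)       # ...whenever both premises are true
)
print(valid)  # -> True
```

The check confirms validity, but notice what it also shows: the conclusion `q` was already guaranteed by the premises before we derived it, which is the essay’s point about deduction telling us nothing we did not start with.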
Logicians discover the rules of patterns of reasoning that show themselves reliable, but the rules of logic, as with those of science — and this is a paraphrase of Professor Emeritus James Hall of the University of Richmond — ‘…don’t have to obey themselves.’
But my sense is that nothing we can truly say we know has ever rested on any sort of unshakeable foundation independent of human discovery and convention. That foundation is a chimera, and requiring it for any area of knowledge is fatuous, apt to lead to great error more than to great truth.
We humans have a sense of reason, and with training and experience we may learn to do it well. But we cannot do it perfectly, nor do we know how, and we may as well just deal with the fact that everything we can know about the world and ourselves is open to correction at some future point.
Mistaken knowledge held with absolute conviction is to me much, much more dangerous than simple ignorance, no matter how sure we are of what we may think we know.
“I think that asking ‘what can we know for certain’ frames the quest for knowledge the wrong way. It presupposes that we can know contingent matters of fact with certainty. It presupposes that we must know things with certainty to know anything at all. Neither presupposition has actually been demonstrated.
More useful to me, given the inherent limits of a finite data set, is to think in probabilities instead of seeking final (and, I think, premature) closure. More useful to me is to ask ‘What can we confidently say we know at the moment, even when we may well be shown wrong with better data and more cogent arguments in the future?’” ~ Aloysius Hawthorne McGrath, fictional paleontologist and Lovecraftian ecologist