Criteria of Adequacy


People often harbor the misconception that science is just an archive, a library, a stagnant body of facts, or a belief system: a collection of truths, a particular worldview, an ideology. It is, rather, a vibrantly active, contentious, competitive, and continually advancing search for knowledge, one that gives us as a species the ability to deepen our understanding of the world and of ourselves.

Science isn’t the way it is today because some patriarchal Europeans during the Renaissance made some sh*t up and arbitrarily decided that that’s the way it would be for all time. It’s the way it is today, rather different from how it was then, because that’s what has been shown over time to work, what gets the best results right now.

Science is an almost Darwinian entity, and so it evolves over time: the methodologies and philosophical underpinnings that work are adopted and retained, and the ones that turn out not to work are ditched. Science isn’t perfect, and it probably never will be, but it is progressive. And it’s the only human endeavor designed from the bottom up to be internally self-correcting.

Despite the occasional fraud or fabrication, the truth tends to prevail. While individual scientists are no more paragons of moral virtue than the rest of us, science as a whole is self-policing. Because scientists like to try to dismantle each other’s theories, if one scientist isn’t honest, others will be. Propose a phony theory of astrophysics, and it will be exposed by a rival. In almost every instance, fraud or error in science is uncovered and vigorously called out by scientists themselves.

Any useful theory in science can have one, several, or often many more supporting ideas, each serving as a predictor, or more properly, a hypothesis. You need observational facts as well as logic to round a theory out, since it’s a bad idea to try to theorize on an empty mind. But even this isn’t enough: you have to be able to go a wee bit further than the factoids you already know.

To be of any worth, a theory should meet at least some of the conditions called the Criteria of Adequacy: specifically, a set of five that for the purposes of this post will be known as Testability, Fruitfulness, Scope, Simplicity, and Conservatism. We’ll deal with each of them in turn…
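But first, to make the bookkeeping concrete, here’s a toy sketch in Python. The criterion names come from this post (after Schick & Vaughn); the 0–5 ratings and the scoring scheme are entirely my own invention, and, as we’ll see near the end, no formal ranking method like this actually exists:

```python
# Toy illustration only: the five Criteria of Adequacy as a crude checklist.
# The 0-5 "ratings" and the scoring scheme are invented for this sketch;
# no formal method for weighing the criteria exists.
from dataclasses import dataclass

CRITERIA = ("testability", "fruitfulness", "scope", "simplicity", "conservatism")

@dataclass
class Hypothesis:
    name: str
    ratings: dict  # criterion name -> rough 0-5 judgment call

    def adequacy(self) -> int:
        # Testability is treated as a must: without it, nothing else counts.
        if self.ratings.get("testability", 0) == 0:
            return 0
        return sum(self.ratings.get(c, 0) for c in CRITERIA)

germ_theory = Hypothesis("germ theory of disease", {c: 5 for c in CRITERIA})
pixies = Hypothesis("network pixies", {"testability": 0})
print(germ_theory.adequacy(), pixies.adequacy())  # 25 0
```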

Testability…

…One great way to tell genuine scientific theories from pseudoscientific ones is whether or not they can be tested, and any functional hypothesis within a theory must have this property to be worth anything. If it doesn’t, well, it just doesn’t measure up as science.

Karl Popper’s idea that any scientific theory must be testable to be valid was mostly sound, though there was a problem with his use of the word “falsifiability”: strict, conclusive falsification or verification, final proof nay or yea, isn’t possible in science. There is no way to be certain that new data won’t turn up in the future that could refute a hypothesis, and you can always rescue a hypothesis in spite of the evidence by toying here and there with the theory it belongs to. That, and almost any new theory already seems refuted by much of the data available at the time it is first conceived.

Hypotheses can’t be tested all by their lonesome, only together with the others that make up the basic theory they are part of. Thus even ‘reductionistic’ hypothesis-testing is holistic in the truest possible sense, since it is done in bundles of hypotheses…

Scientifically functional hypotheses should go further than the predictions already made by the theory they’re meant to support. A hypothesis that doesn’t is what’s known as an ad hoc hypothesis; ad hoc (because my evil self is gonna go all Latin on you) means ‘for this case only,’ and a grunchload of ad hoc hypotheses in a theory is a really good indicator that it is pseudoscience.

Hypotheses let us predict things by telling us what we should observe under what set of conditions, in order to provisionally confirm or confute them. Ad hoc hypotheses, on the other hand, don’t improve our understanding, because they tell us nothing we don’t already know. A given hypothesis is of no scientific value if it cannot be tested against that most heinous of taskmasters, reality. If a hypothesis makes predictions about what we can and should observe that its own base theory doesn’t, or can’t, then it’s testable.

Let’s look at a sample hypothesis, the pixie hypothesis of home computer networks, which states that when one boots up the network, tiny pixies living in the computer, DSL modem, and router flit around at near-light speeds inside the machinery to carry signals between the different circuits and make the computer work, and fly around at light speed outside the machinery to carry Wi-Fi signals to all the laptops on the network.

As mentioned earlier, there can be any number of hypotheses in a theory, such as the blue LED pixie hypothesis, the green LED pixie hypothesis, the LCD screen pixie hypothesis, and so on, but the pixie hypothesis’s usefulness for scientific purposes depends on what it tells us about pixies: what it predicts we should observe.

Referring back to the base theory and trying to prove or disprove the existence of the pixies by booting up the network does us no good, for this is a tautology (circular reasoning): a successful boot is the very thing the pixie hypothesis is meant to explain. It’s obvious that we have to go beyond the basic theory.

Now if this hypothesis tells us that the pixies are visible, tangible, or audible, we can just look inside the casing of the computer and network hardware to see, feel, or listen for signs of the pixies. If the hypothesis tells us that they are normally intangible, invisible, and silent, but can be seen, touched, or heard when the computer’s custom-built, souped-up liquid coolant system is in overdrive, we can crank up the coolant system to make them show themselves. But the hypothesis does us no good, and is not testable, if it says that the pixies are always invisible, intangible, and silent, producing no sound at all, not even the chattering of little pixie teeth induced by the chill of the supercooled computer.
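For the programmers in the audience, here’s one way to frame what just happened, a minimal sketch with all the pixie details obviously invented, treating a hypothesis as a mapping from test conditions to predicted observations:

```python
# Hedged sketch (pixie details invented): a hypothesis maps test conditions
# to predicted observations. If every condition yields no predicted
# observation, nothing can ever test it.
from typing import Optional

def pixie_v1(coolant_overdrive: bool) -> Optional[str]:
    # Testable version: pixies become visible when the coolant is cranked up.
    return "pixies visible inside the case" if coolant_overdrive else None

def pixie_v2(coolant_overdrive: bool) -> Optional[str]:
    # Untestable version: always invisible, intangible, and silent,
    # under every condition we could ever arrange.
    return None

def is_testable(hypothesis) -> bool:
    # Testable only if some condition yields a prediction that could,
    # in principle, fail to pan out when we actually look.
    return any(hypothesis(cond) is not None for cond in (True, False))

print(is_testable(pixie_v1), is_testable(pixie_v2))  # True False
```

The second version offers nothing we could ever look for, which is exactly why it fails the criterion.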

Yes, I know: that was silly. But it points to a general rule: to qualify as scientifically interesting (and valid), a hypothesis must observably predict more than the theory it belongs to does, all other properties of the hypothesis being of equal worth.

But testability isn’t the only important factor, since we impart more worth to some hypotheses than to others. We need to take more than testability alone into account, and the next criterion to consider is…

…Fruitfulness…

…which is the property of a worthwhile hypothesis that may suffice to rescue it even in the face of contrary evidence: its ability to successfully predict new observational data and to open up, often without any initial foresight, entirely unexpected lines of research. If a hypothesis predicts more new and unexpected findings than its rivals, all other factors being roughly equal, then it is the best of the lot.

Oddly enough, this is true even if a hypothesis is tested and found to be false. Even then, an incorrect but interesting and useful, and therefore fruitful, hypothesis can serendipitously lead to new discoveries, thanks to factors such as the researcher’s imagination, observational skill, and ability to take advantage of opportunities thrown their way by the winds of random happenstance.

But there are also fields of study that qualify as degenerating research programs, built on theories and hypotheses that are obviously not fruitful: highly unproductive where pioneering research is concerned, predicting little in the way of new findings even when they aren’t severely limited in the phenomena they study, and largely unsuccessful in the predictions they do make. And no, post hoc rationalizations and shoehorned postdictions don’t count.

Parapsychology is a good example of such a field. It has never succeeded in predicting and actually revealing new and unexpected observations; it has produced no practical applications for either ESP or PK, and no new facts, excepting ingeniously contrived excuses as to why even its most cutting-edge research protocols fail to replicate independently when non-believers in psi are involved in the experiment.

Even to this day, after over 130 years of research, it is riddled with ad hoc hypotheses, such as the decline effect, the observer effect, psi-missing, and even bizarre claims that the retroactive skepticism of readers of parapsychology journals reaches back through time to affect previously successful experiments.

In fact, despite largely unsuccessful attempts to co-opt quantum mechanics and other poorly understood ideas from bleeding-edge physics for the purpose, such as zero-point energy fields and string theory, parapsychology still lacks any sound consensus on coherent theoretical underpinnings.

Most of the claims of parapsychology violate much of what we can honestly say is currently known in biology, physics, and psychology, three fields it would have revolutionized had it been as successful as some of its advocates sometimes claim, and as successful as its pioneers would have wished, given the time it’s had.

This is not to say that psi violates laws of nature in any absolute sense, but it does appear to violate those laws as we presently understand them. Our understanding of those laws may indeed be incorrect, or incomplete, but unless parapsychologists can identify which ones are, and demonstrate new laws with observational data that explain the universe better than the current ones do, we have no good cause to suspect that the currently known laws are wrong.

Scope…

This is a crucial virtue of any theory with wide application: how much of what it describes it can organize and unify in the same convenient package. Scope also has the bennies of reducing the probability of the theory being wrong. The superior theory is the one that predicts and explains the widest range of phenomena, all other factors being equal in importance.

In my Gods of Terra science fiction setting, the discovery of Kurtz-Dunar Hypermatrix Theory (KDHT for short) finally unified the older theoretical paradigms of quantum mechanics and Einsteinian relativity into a fully integrated, coherent whole. It incorporated a more precise and deeper understanding of the four forces they dealt with (gravity, electromagnetism, and the strong and weak nuclear forces), plus the cosmological constant, or dark energy, along with all of the phenomena they governed, while resolving the conflicts that had arisen between its predecessor theories.

KDHT was a distant descendant of string theory, but one that arose when the technology of the day was up to the task of testing its predictions. It finally gave humanity, and any similarly developed technological species, access to the Superforce and its technological applications, allowing more precise manipulation of its component forces and the phenomena under their purview. It also had, in addition to the virtues of Testability, Fruitfulness, and Scope, that of…

…Simplicity…

…which deals with a theory’s elegance and logical consistency. Generally, all else being about the same, the theory with the greatest logical coherence and the fewest unnecessary assumptions is the better one.

Going back to our last example, Raoul Kurtz and Ranan Dunar’s highly successful Hypermatrix Theory was especially liked by its co-founders for its parsimony and elegance. Not only was their idea tested and provisionally confirmed shortly after its conception; not only did it lead to new and surprising avenues of research; not only did it give humanity relatively easy access to interstellar travel, cheap surface-to-orbit transit, and biologically friendly long-duration space voyages through its applicability to a tremendous range of phenomena; its simplicity also meant fewer assumptions that could turn out to be false, making it more likely to be true when first formulated.

Simplicity allowed this “theory of almost everything” to stand apart from its more cumbersome competitors, and this criterion has been justly esteemed in the real world since the days of the Ionian Awakening in classical Greece, starting historically with Thales of Miletus.

You’ve likely noticed how hypotheses explain what they do by postulating the existence of certain things, and simplicity tells us that it’s a good idea to apply the rule of thumb called Occam’s razor, which states that ‘entities should not be multiplied without necessity.’
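Real-world statistics has a rough, semi-quantitative cousin of the razor in the Akaike Information Criterion (AIC), which charges a model two points for every free parameter it posits. This is my own illustrative aside, not something from this post’s sources, but it shows what ‘don’t multiply entities’ can look like in practice:

```python
# The Akaike Information Criterion: AIC = 2k - 2*ln(L), lower is better.
# Each free parameter (each "multiplied entity") adds 2 to the score, so a
# more complex model must fit noticeably better to earn its keep.
# The fit numbers below are made up purely for illustration.

def aic(num_params: int, log_likelihood: float) -> float:
    return 2 * num_params - 2 * log_likelihood

simple_model = aic(num_params=2, log_likelihood=-100.0)   # 204.0
complex_model = aic(num_params=10, log_likelihood=-98.0)  # 216.0
print("razor favors the simple model:", simple_model < complex_model)  # True
```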

Put another way: assuming the existence of something without a really good reason is not a logical thing to do.

But even the revolutionary impact of Hypermatrix Theory, new as it was, also had to abide by one more criterion, the final one in this post, that of…

…Conservatism…

…which concerns the consistency of new ideas with prior, well-established knowledge: a hallmark of sound scientific hypotheses.

This matters for what we can honestly say we know, and a ginormous red flag should pop up in one’s head at any claim that conflicts with much of what we have good reason to think we know, especially when what we know at present yields technologies and techniques that actually work, like the computer server that hosts this blog.

Unthinking acceptance of inconsistent ideas erodes what we know and forces us to reject it without sound reason. The plausibility of ideas that violate Conservatism is probably not very high when they go against applications of established knowledge that have real practical benefits.

Overall, a more conservative hypothesis is more plausible, more useful, and fits more closely with previous valid claims to knowledge, provided the other criteria are of equal standing.

Even though KDHT led to a new and more powerful understanding of the universe, allowing mankind to tap the Superforce and spread across interstellar space, the properties of the Superforce, while some were specific to it, did not contravene the properties of its sub-forces, nor violate the new, deeper understanding of the older quantum and relativity theories. For example: Superforce radiation does not exceed the speed of light, traveling at roughly 300,000 kilometers per second in a vacuum; it falls off in strength over distance in accordance with the inverse-square law; when sublimating into any of its component forces, it obeys all of their physical properties; and it obeys Einstein’s E = mc² and all of the laws of thermodynamics. It does not allow one to violate physical laws that still enjoy empirical support in the science of the Gods of Terra setting, but instead allows one to make use of those not previously known, or otherwise poorly understood at best.
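The Superforce is fiction, but the constraints just listed are the standard real-world ones. Here’s a minimal sketch of two of them in numbers:

```python
# The Superforce is fiction, but these are the standard physical relations
# the text says it obeys. SI units throughout.
import math

C = 299_792_458.0  # speed of light in a vacuum, m/s (roughly 300,000 km/s)

def inverse_square_intensity(power_watts: float, distance_m: float) -> float:
    # Radiated intensity spread over a sphere: I = P / (4 * pi * r^2), in W/m^2.
    return power_watts / (4 * math.pi * distance_m ** 2)

def rest_energy_joules(mass_kg: float) -> float:
    # Einstein's mass-energy equivalence: E = m * c^2.
    return mass_kg * C ** 2

# Doubling the distance quarters the intensity, per the inverse-square law:
print(inverse_square_intensity(100.0, 2.0) / inverse_square_intensity(100.0, 1.0))  # 0.25
print(f"{rest_energy_joules(1.0):.3e} J in one kilogram of mass")  # ~8.988e+16
```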

However, not all hypotheses are of equal worth, and it can be rational to accept an idea that fails one criterion as long as it satisfies the others.

Much to my Troythuluness’s regret, there is no completely ironclad way to tell when any one criterion should be outranked by the others, and no formal methodology for applying them. There is no known way to precisely measure the various elements of a hypothesis, and no known means of applying a formal ranking system to any of them.

We might, for example, conclude that Conservatism should outrank, say, Fruitfulness if the idea under consideration has a relatively narrow scope. Or Conservatism may be outranked by Simplicity and Scope, particularly if the hypothesis has a great deal of the latter, though Testability is always a must.

Hypothesis selection is not a strict, mechanistic process of rigid logic. Like any process of decision-making, from the proceedings of a court of law to the court of science, it requires the exercise of sound judgment, employing methods that are themselves not very amenable to formal conventions. This process isn’t completely subjective either: there are distinctions we can’t easily gauge that are nonetheless objective.

For example, it is not possible to strictly delineate the exact cut-off point at which light becomes dark, or at which the wavelength and frequency of red light become those of orange light. Yet it would be absurd to claim that these things cannot be distinguished from each other; the difference between the extreme ends of these spectra, whether wavelengths of light or light and dark, is objective as far as it goes.

Since most distinctions range along a continuum rather than falling on either side of a strict split, with a fuzzy but real difference between them, it would not be rational to argue that because there is no sharp, non-arbitrary demarcation between light and dark, the difference between them does not exist and they are therefore the same. To suppose this is highly specious reasoning: the False Continuum fallacy.
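To put numbers on the light example: by convention, orange runs from roughly 590 to 620 nanometers and red from roughly 620 to 750, and the 620 nm cut-off is itself somewhat arbitrary, which is exactly the point. A small sketch:

```python
# The ~620 nm orange/red boundary is a convention, and sources differ on the
# exact figure. But the arbitrariness of the cut-off doesn't make 600 nm
# light and 700 nm light the same color.

def rough_color(wavelength_nm: float) -> str:
    if 590 <= wavelength_nm < 620:
        return "orange"
    if 620 <= wavelength_nm <= 750:
        return "red"
    return "something else"

print(rough_color(600), rough_color(619.9), rough_color(620.1), rough_color(700))
# orange orange red red -- fuzzy at the boundary, unmistakable at the extremes
```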

It is likewise wrong to believe, for example, that spontaneous generation, alchemy, phrenology, vitalism, or luminiferous ether theory are still valid scientific ideas, even if they once were. And I know of no diplomatic way to say this: to steadfastly adhere to a claim of fact, belief system, or doctrine that isn’t supported by any of the criteria discussed in this post is to hold irrational views. Fnord.

References:

Schick, Theodore, Jr., & Lewis Vaughn. How To Think About Weird Things: Critical Thinking for a New Age, 4th ed., pp. 187–197.

Beveridge, W. I. B. The Art of Scientific Investigation, 1st printing, pp. 56–71.

Baloney Detection 101 – Scientific Theory


A scientific theory, as opposed to a “theory” in the everyday sense of the word, is more than just a guess, and it isn’t, as Isaac Asimov once quipped, something you came up with while drunk.

It’s a set of ideas that weaves facts together into a single overall description and detailed explanation for a given set of phenomena.

All scientific theories are provisional and never proven with complete metaphysical certainty, though some are demonstrably factual for all practical purposes; either way, it’s important to distinguish a theory from the facts it describes.

The scientific use of the word “theory” gives no a priori indication of an idea’s actual level of certainty, and any given set of ideas might be so well established by repeated testing as to be confirmed beyond all rational doubt. Consider the theories of genetic inheritance, general and special relativity, quantum mechanics, number theory in mathematics, music theory in music, stress theory in engineering, the germ theory of disease, atomic theory, heliocentric theory, the global Earth theory, plate tectonics, and of course, that boogieman of creationists, evolution.

Booga-Booga! Eeevilution!

And not everyone’s doubt is rational, with various sorts of science denialists given to labeling any theory they have a bug up their posteriors about as “just a theory, not fact,” playing on the everyday uses of both words.

A theory is not a hypothesis; the latter is just a part of a theory, a proposed explanation with a given set of predictions within the theory’s framework: what you’d expect to see, or not see, if that part of the theory were tested.

Theories aren’t facts; they explain and describe facts. And facts are not certainties, outside of formal logic and maths. Never confuse those.

Facts in science are never absolute, due to science’s provisional nature. It can never be known with certainty that data disconfirming any particular fact won’t rear its ugly head at some arbitrary point in the future.

Theories are not ‘promoted’ to laws; the two are different sorts of beasts. Laws define things, giving a phenomenon a mathematical structure we can use in applying it, while a theory describes and explains how and why it works.
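A concrete real-world instance of that division of labor: Newton’s law of universal gravitation supplies the usable mathematical structure, while a theory (Newtonian mechanics, and more deeply, general relativity) does the describing and explaining. A minimal sketch of the law as structure:

```python
# A law as usable mathematical structure: Newton's law of universal
# gravitation, F = G * m1 * m2 / r^2. It lets us compute the force; it does
# not explain WHY masses attract. That explanatory job falls to a theory.

G = 6.674e-11  # gravitational constant, N * m^2 / kg^2

def gravitational_force(m1_kg: float, m2_kg: float, r_m: float) -> float:
    return G * m1_kg * m2_kg / r_m ** 2

# A 70 kg person at the Earth's surface (Earth ~5.97e24 kg, r ~6.371e6 m):
# the result, roughly 687 N, is just that person's weight.
print(f"{gravitational_force(5.97e24, 70.0, 6.371e6):.0f} N")
```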

Theories usually start as models, which offer testable hypotheses for experiment or other observation; in the mostly historical sciences (geology, cliodynamics, cosmology, astronomy, paleontology, and archaeology, to name a few), there is also the comparative method.

No, you don’t have to do experiments in a lab to do science. Otherwise, no crime could ever be solved using evidence left at the scene, and detectives would be permanently out of work as a profession.

Science isn’t just for the nerds in lab coats and pocket-protectors.

Most science today is done as a community effort, evolving over time, with everyone involved in a study contributing to the overall theory being investigated and the hypotheses tested; the idea of the lone researcher working in his basement lab, the sole author of his ideas, is a quaint notion, however popular it may be.

Even broader than a theory is an overarching concept called a paradigm, often composed of many theories. M-theory in physics might be a good example of a paradigm: a candidate “theory of everything” composed of many subsets that individually describe and explain some aspect of reality.

The term “paradigm” was coined, or at least popularized, by philosopher of science Thomas Kuhn, who used it in an early attempt to describe the internal process by which science changes, though it is now more often used in the sciences to refer to a conceptual tool: a mode of thinking or general working approach to theories and frameworks of theories.

Good theories are never supported by only one piece of evidence, but through multiple, often thousands or millions of, independent lines of data spread throughout many fields of research. That’s why the demand science denialists make, “show me just one piece of evidence that proves the theory true,” is nothing more than an empty rhetorical stunt and an illegitimate shifting of the burden of proof.
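One way to see why convergence of many independent lines matters is Bayesian, sketched below with numbers invented purely for illustration: independent pieces of evidence multiply their likelihood ratios, so many individually modest findings compound into overwhelming support:

```python
# Toy Bayesian illustration with made-up numbers: for independent evidence,
# posterior odds = prior odds * product of the likelihood ratios. Many
# modest lines of support compound multiplicatively.

def posterior_odds(prior_odds: float, likelihood_ratios: list) -> float:
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr  # each independent line of evidence multiplies in
    return odds

# Twenty independent findings, each merely 3x likelier if the theory is true,
# starting from even (1:1) prior odds:
print(f"{posterior_odds(1.0, [3.0] * 20):,.0f} to 1")  # 3,486,784,401 to 1
```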

The burden of proof rests with those making unsupported claims and asserting questionable facts, not with the advocates of well-established and previously demonstrated findings.

Ultimately, no theory can be proven to be timelessly, absolutely true by finite data.

That’s because it sometimes takes only one reliable and properly documented observation to falsify one or more of a theory’s hypotheses.

In science there are no absolute truths, so sometimes that one reliable observation is all it takes to bring down a previously, but erroneously, accepted idea, or to subsume it into a new overall theory with a narrower but still valid domain of application.

That’s why such ideas as phlogiston, the luminiferous ether, and phrenology are no longer accepted as viable theories in science.

And with the junking of ideas that don’t work, and science’s ability to correct its course to an ever more accurate view of the world, who needs absolute truth?
