Sunday, June 26, 2011

The Arrested Development of Political Science

In academia today there are two branches of political science: the quantitative and the philosophical. The former approaches political science as an extension of sociology and economics, and spends its time analyzing polling data and other modern, objective aspects of politics. The latter approaches its discipline as an extension of philosophy and history, and spends its time reading and discussing the “Great Books of the Western World,” and classical literature in general.

In college I became very familiar with the latter camp. I took many classes with them, and spoke independently with their professors and their best students, many of whom are now in PhD programs or in DC. I had many great conversations with them, but they left me unsatisfied. I couldn't shake the feeling that their work was a collection of post hoc rationalizations of historical events and over-reverent over-readings of ancient texts. It not only lacked rigor: it seemed blissfully unaware of the concept of critical self-analysis. They presumed that every classical work had something deep and important to say, if only metaphorically, and they accepted the authors' points uncritically.

They didn't read Machiavelli to analyze his arguments and decide if he was right or wrong; they read him like a monk reading the Bible: devotionally, to meditate on the true and deep meaning of the text they were certain was there. When challenged they had a cult-like instinct to defend the text, which often boiled down to “you'll understand when you're older.” When students did things like point out that Plato's response to Thrasymachus doesn't really answer the spirit of the objection (which is that there is no justice, only strength), and instead only refutes a peculiar interpretation of Thrasymachus' argument (which is that citizens have a duty to seek the interests of their government), they were met with sarcastic comments about how smart they must be to refute Plato.

Although they taught and emphasized reading comprehension, they seemed entirely innocent of all epistemology. They valued open-ended discussions about vague terms like virtue and manliness, but had little patience for truly critical questions (“Plato's argument for reincarnation is circular, as it assumes that life and death are analogous to consciousness and sleep”). They argued by speaking so vaguely and incoherently that students just assumed they were missing something, rather than that their professors had nothing substantial to say. When their arguments were clear they were usually ad hominem fallacies. Those who saw the vapidity of Nicomachean Ethics were said to lack sophistication or thumos (manliness).

To be fair, the actual ideas of these political scientists were not so much incoherent as they were just entirely speculative and lacking in any kind of rigor or proof. Like the ancient authors they idolized, they wrote clear, articulate manifestos full of baseless assumptions that they never defended or even acknowledged the need to defend. To use Aristotle as an example (for he is nearly as popular as Machiavelli among them), his Nicomachean Ethics assumes without argument (or, at best, with glib appeals to authority) that all actions aim at some end outside themselves, and that there is a single ultimate end, or a single group of ends. Even if we forgive him for assuming that all actions aim at some external end, rather than being ends in themselves, his argument for one final goal for all actions is a simple appeal to vanity (“otherwise our desires would be empty and vain”).

He commits other crimes against reason, such as arguing that virtue is not a natural inclination in man, because natural forces cannot be trained: you can't teach a rock to fly. Note that this is the very example he gives as an argument. This is both a weak inductive argument and an equivocation on the meaning of nature, or at least an attempt to smuggle in external metaphysical assumptions. The most grievous error in Nicomachean Ethics is in 1.7, when Aristotle attempts to define the function of man. He wants to discover the “proper life” for a man, and thus determine a basis, or ultimate end, for ethics, but he dismisses certain kinds of lifestyles simply because they resemble non-human things.

For example, he says that a life of simple sustenance and health cannot be the function of man, because a plant is capable of doing the same, and man's function must be peculiar to him. This is just a non sequitur: no argument at all is offered for the premise that man's function must be unique to man, and so the most obvious function of human life is dismissed simply because plants share it. This is the sort of argument that is very popular in political science: the dressed-up non sequitur so lofty in its proclamations that few dare risk the appearance of unsophistication required to call it what it is. It's the Jackson Pollock of arguments.

The entire discipline of non-quantitative political science is stuck in the college freshman stage of intellectual development, waxing eloquent in bull sessions, reaching conclusions far outside the scope of their arguments, and blissfully unaware of the distinction between internal and external validity. They attempt to draw grandiose conclusions about modern politics on the basis of historical events, often on the basis of the most superficial (if not outright imaginary) similarities. I've heard them discuss the divergent outcomes of the French and American Revolutions as though they had never once heard the term “confounding factor.” It's bad enough to make glib comparisons between American military interventions and ancient Roman conquests, but half of their discussions were based on their own imaginations: the character of Athens versus the spirit of the Peloponnesian League, the virtues of Lincoln against the principles of the South, and so on.

I found their pontificating all the more amusing when it was combined with their characteristic delusions of grandeur. They stood atop all of history, the preservers and defenders of western civilization, students and teachers of human nature and virtue, the true learned men of the polis. Science, mathematics, and even social science done with rigor were all beneath them. The modern world had forsaken the true wisdom of political science for the mere practical facts of science. As Harvey Mansfield, the modern star of the field, said, “[Our] crucial work, which is necessary to science and, may I add, more difficult and more important than science, is hardly even addressed in our universities.”

To combine the criticisms of two of my fellow students, political science is “a secular priesthood of mediocre minds, attracting those students to it who are most unable to see through its sophistries,” and “its multifarious terms form the basis of a pseudo-philosophical discourse, as much dependent on its reified abstractions as the most absurd system of superstitions.”

Monday, June 13, 2011

The Elegance of Mathematics

In high school and college I learned how to calculate, but I never learned anything about mathematics until recently. Manipulating symbols to get the same answer as the teacher and understanding a language built on logical proofs are two very different things. For example, I learned in grade school that you divide fractions by “reversing and multiplying,” but it wasn't until much later that I realized that this was because (a/b)/(c/d) = (ad/bc), which follows from eliminating the complex fraction (see the short derivation below). Perhaps my math education was unusually deficient, or unusually “plug and chug” in its approach, but I can't help but feel that I was never really solving those fraction problems until I realized this, no matter how often my symbols matched the answer key's symbols.
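
To spell out the missing step: multiply the numerator and denominator of the complex fraction by b*d, and both inner denominators cancel:

$$\frac{a/b}{c/d} = \frac{(a/b) \cdot bd}{(c/d) \cdot bd} = \frac{ad}{bc}$$

For example, (1/2)/(3/4) = (1*4)/(2*3) = 4/6 = 2/3, which is exactly what “reverse and multiply” gives.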

Recently a particularly elegant and simple proof concerning irrational numbers came to my attention as exactly the sort of thing I wish I had learned in school. I was always under the impression that irrational numbers, like pi, or the square roots of 2 and 3, were irrational simply because no one had yet discovered a fraction that produced them as a quotient. This, it turns out, is false. There actually exists a simple, algebraic proof that the square root of two (hereafter sqrt(2)) cannot be written as a ratio of whole numbers.

1. If sqrt(2) were rational, then sqrt(2) = x / y for some whole numbers x and y (with y not zero)
2. Square both sides: 2 = x^2 / y^2
3. Multiply both sides by the denominator: 2*y^2 = x^2
4. A number is even if it has 2 as one of its factors. Because x squared equals two times another whole number (y squared), x squared must be even: anything equal to twice a whole number is even.
5. If x squared is even, then x itself must also be even. When x is multiplied by x both terms must be either odd or even, and two odd terms cannot create an even product, so the fact that x squared is even proves x itself is even. This is intuitive if you understand that multiplying a number by itself will never add any new prime factors. If 9 is 3*3, then 9^2 is just 3*3*3*3.
6. If x is even, then x / 2 is itself a whole number; call it z, so that x / 2 = z
7. Multiply both sides by the denominator: x = 2*z
8. Substitute (2*z) in place of x in the equation from step 3: 2*y^2 = (2*z)^2
9. Perform the square operation: 2*y^2 = 4*z^2
10. Divide both sides by 2: y^2 = 2*z^2
11. The above equation demonstrates that y must be even via the same reasoning found in steps 4 and 5. Note the parallelism between the equation in step 3 and the equation in step 10.
12. Therefore both x and y must be even.
13. But no fraction has a numerator and denominator that are both even by necessity: you can always divide both by 2, again and again, until one of them is odd. In a fraction like 2/8 the numerator is even only incidentally: reduce the fraction to 1/4 and the numerator becomes odd. Only the denominator of 2/8 is even by necessity: it can be reduced, but it can never be made odd.
14. Therefore no such fraction x/y can exist, because no fraction must be expressed as an even number over an even number, and sqrt(2) must be irrational.
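
The whole argument compresses into one chain of implications (same variables as in the steps above):

$$\sqrt{2} = \frac{x}{y} \implies 2y^2 = x^2 \implies x = 2z \implies 2y^2 = 4z^2 \implies y^2 = 2z^2 \implies y \text{ is even}$$

Since x = 2z also makes x even, the supposed fraction x/y would have an even numerator and an even denominator no matter how far it was reduced, which is impossible.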

It's amazing to me that something like this could be proven a priori: out of all the quotients in the world, none of them equal 2 when squared. We don't even have to test a single fraction to know this.

Monday, May 30, 2011

Thoughts on Harvey Mansfield

Harvey Mansfield is a political science professor at Harvard who is very influential among certain groups in his field. He wrote the definitive modern translation of Machiavelli's The Prince and is popular in some academic conservative circles. Most people who know of him know him as the one really conservative (and notoriously difficult) professor at Harvard.

I'm relatively familiar with his work, but I never found him compelling. He, like too many political scientists of his bent, writes in grandiose proclamations rather than arguments, and finds profundity in over-defining vague terms like justice and manliness. After I graduated from college I never expected to think of him again, but two articles he wrote (one recent, one older) came to my attention a few days ago, and I feel compelled to comment on them. The first was a short speech in acceptance of a Bradley prize (1), and the second was a book review from 2009 (2). I never had a huge amount of respect for Dr. Mansfield, but the intellectual caliber of these two articles caught me off guard. If I had read them without a byline, I would never have guessed they were written by a Harvard professor.

Mansfield's remarks on accepting the Bradley prize begin with a history of Harvard as an institution, which leads him to remark on other Boston schools, such as MIT, which in turn leads him into a discussion of science. He says that MIT is devoted to the advance of science without regard to whether or not this advance is good for humanity, and he bemoans the fact that “The failure to ask this question is characteristic of our universities today.” Of course, this question is actually asked all over the country, every fall, in freshman philosophy classes and dorm room bull sessions everywhere. I find it curious that this doesn't satisfy Dr. Mansfield, since I know of no other way the question could be discussed: it belongs to that peculiar category of questions that are fun to ask but impossible to answer.

Mansfield goes on, “Scientists easily forget that science cannot prove science is good, that their whole project is founded upon what is at best unscientific common sense.” No one in the history of human thought has ever proved that anything is good, or answered the most difficult questions of epistemology, so all disciplines have yet to prove they are good, and all human thoughts are ultimately founded on a kind of common sense. The term “good” is too vague to prove anything with, and every school of thought has a different conception of it. Hume's arguments against induction, and the ancient skeptics' arguments (such as the problem of the criterion, also known as the diallelus), have yet to be satisfactorily refuted to this day, but they are equal-opportunity destroyers. Any attack against science made on their behalf applies just as readily to any other discipline.

Thus Mansfield's next point, that science's unscientific epistemological foundations leave it “far short of wisdom,” is disingenuous. If a failure to refute radical skeptical arguments leaves a discipline short of wisdom, then wisdom doesn't exist. Up to this point Mansfield has been vapid, but has avoided demonstrable error. He falls off that cliff when he says, “Science has no idea why human beings resist science at least as strongly as they embrace it.” This is like saying that modern geographers have no idea how far apart London and New York are. It's pure ignorance. Neuroscientists and experimental psychologists have done an enormous amount of work on the way the brain processes information, and the reasons people reject new information are well understood. There's always room for improvement, of course, but saying science “has no idea” is just wrong. It's not like neuroscientists are hiding this information; there was a recent popular press article about it (3).

The book review is even worse. I wouldn't expect a political science professor to write expertly on a book about evolutionary anthropology, but I would expect him to not make basic logical errors. He states that biological anthropologists, as opposed to cultural anthropologists, believe that biology supersedes culture in shaping human nature. This is wrong, but forgivable. The difference between a cultural anthropologist and a biological one is the aspect of anthropology that they study, not their personal belief as to which is more important in shaping “human nature.” What is not forgivable is this lunacy: “Like many scientists, Bribiescas [the author] lives under the yoke of a crude positivism which denies that scientific fact has any ethical implications. 'Darwinian evolutionary theory does not support any moral stance.' But of course it does. The trouble is not that Darwinian theory has no implications, but that it contradicts itself with two opposing implications.”

Could a Harvard professor in a humanities department be unaware of the is-ought distinction? Mansfield apparently is. That the statement “the car is red” is logically distinct from, and in no way implies, “the car ought to be red,” is taught to freshmen everywhere as a matter of elementary logic. Even if some additional premises bring about the conclusion “the car ought to be red,” this conclusion remains distinct from the original statement. As incredible as it seems, Mansfield actually seems not to realize this, as he goes on to assume that the “is” statements of evolutionary theory imply certain contradictory “ought” statements, leading him to the triumphant declaration that he has “brought out the inner contradiction of Darwinism.”

I honestly wonder how this review would be graded if someone handed it in as a freshman essay. The prose is clear enough, but any philosophy professor would drown the pages in red ink over this catastrophic failure of logic. This is, as the kids today say, “epic.” I would have bet hundreds of dollars that a Harvard professor could not be this stupid.

(Even the prescriptive statements that Mansfield draws from evolutionary theory are spurious. For example, he claims that evolution means that men ought to imprison their wives in harems to ensure their fidelity. He doesn't consider that this is completely impractical, and that evolution therefore improvised a different strategy: be extremely promiscuous. If your wife might cheat on you, your neighbor's wife might just as easily cheat on him. Dare I say that evolution has been extremely effective at drilling this into the male psyche?)


Saturday, May 14, 2011

Popular Innumeracy and Autism Statistics

Suppose you hear someone say that they are not going to vaccinate their children because they believe vaccines cause autism. You explain to them that all of the scientific literature demonstrates that vaccines do not increase a child's risk of developing autism. Population studies show no increase in autism among vaccinated populations compared to unvaccinated ones, and, further, there is no physiological mechanism for a vaccine to cause a disease like autism. Saying that vaccines cause autism is like saying that alligators cause earthquakes. It's not merely untrue: it's hard to imagine how it could possibly be true.

Unpersuaded, your friend tells you that his little brother started displaying autistic behavior the very same day he received the MMR vaccine. He was vaccinated that morning, and that evening they noticed strange behavior. What are the odds of that being due to chance? This experience has given him an unshakeable belief in the evils of vaccination.

The obvious rebuttal to this argument, that his family's memories are distorted, is probably correct, but is unlikely to be persuasive to him. Modern scientific psychology has demonstrated that our memories are far less reliable than we want to believe, but pointing this out to people rarely helps, as everyone believes they are the exception. Perhaps it would be better to take him up on his rhetorical question. What are the odds of a child developing symptoms of autism and being vaccinated on the same day?

There are about 4 million births in the US each year, and the rate of autism spectrum disorders is estimated at 1 in 166. This means that about 24,000 children who will develop autism are born each year. In any given year the odds of a child displaying his or her first autistic symptoms on the same day they receive a vaccination are 1 in 365.

(This should be intuitive: the child will display autistic symptoms one day out of the year, and the odds of this random day falling on the same day as the doctor's appointment are 1 in the number of days in the year, assuming both days are picked at random. Don't make the mistake of thinking that the odds are 1 in 133,000 (365*365). That would only be true if both events had to happen on a third, predetermined date. In this case the date of the first event (the doctor's appointment) is irrelevant, and the only question is whether the second event will happen on the same date. The odds of two coin flips yielding the same face of the coin are 50/50, not 1/4, because the first flip is irrelevant.)

24,000 divided by 365 is about 66, and this means that in a given year an average of 66 children will display new signs of autism on the very day they are vaccinated – by chance alone.
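
For anyone who wants to check the arithmetic, the whole estimate fits in a few lines of code (the inputs are the rough figures used above, not exact statistics):

```python
# Back-of-the-envelope estimate of same-day vaccine/autism coincidences.
# The inputs are the rough figures quoted above, not exact numbers.

births_per_year = 4_000_000   # approximate annual US births
autism_rate = 1 / 166         # estimated rate of autism spectrum disorders
p_same_day = 1 / 365          # chance two independent dates coincide

children_with_autism = births_per_year * autism_rate
expected_coincidences = children_with_autism * p_same_day

print(f"children with autism per year:  {children_with_autism:,.0f}")
print(f"expected same-day coincidences: {expected_coincidences:.0f}")
# prints roughly 24,096 and 66: dozens of families every year will see
# first symptoms on vaccination day by chance alone.
```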

These are undoubtedly much higher odds than your friend anticipated, and, in fact, they reveal that situations like the one experienced by his brother are not only likely, but statistically certain to happen to someone every year.

Our brains' intuitive statistical calculations, our “common sense,” our “what are the odds of that?” feelings, are not properly calibrated in modern society. We think in terms of a small town or small tribe, not in terms of hundreds of millions of people. We tend to think of an event with a probability of 1 in a thousand as being very unlikely, without realizing that, even in our little burg, we have over ten thousand people.

Long story short: this is why I think we should teach statistics in high school.

Monday, April 25, 2011

Aphorisms on Science (2)

Fungi aren't weird plants. They're weird animals.

Viruses are not merely alive: they are the purest example of life on the planet.

People often think of wolves as great survivors, far superior to the weak and dumb lap dogs that old ladies pamper. Nothing could be further from the truth: the pug has adapted very well to a land overrun by humans, and the wolf has adapted poorly. The very fact that the wolf has been hunted to extinction in many parts of North America, where thousands of chihuahuas thrive, is proof in itself of the lap dog's superior survival skills. The fittest don't always look like what we expect.

The idea that cancer is caused by errors in the regulation of cell division is not exactly right. It is true that mutations in certain regulatory genes cause cancer due to increased cell growth and division, but the labeling of these changes as “errors” is misleadingly anthropocentric. Normally, if a cell lives in an environment that restricts its reproduction, and its genes mutate in such a way that the cell can now grow and divide freely, we call those mutations adaptations. Cancer, then, is caused by the individual cell's adaptations to a hostile environment.

[Yes, I know that fungi are distinct from both plants and animals. My point was that they are more closely related to animals than to plants. Also, I may have unconsciously copied that statement about viruses from a Dawkins lecture, but I can't find the exact quote anywhere.]

Wednesday, April 20, 2011

Thoughts on Evolution and Education

On vacation in a remote part of the world you come across a tribe of natives who see very few outsiders. They are interested in learning more about where you come from, but when you explain to them that you come from a large city, with millions of inhabitants and buildings with a hundred stories, they laugh at you. Nothing you say to them seems to convince them that so many people could ever exist, or that buildings could be built of more than a story or two. The more you say to them, the less they believe. Your stories about winter and snow falling from the sky are particularly amusing to this tropical people.

What can you say to these people to make them realize the truth? You want to educate them, and they would surely believe you if they had access to the same evidence that you do. How do you break the spell of their ignorance?

The answer, of course, is that you don't. There's absolutely nothing you can say, no abstract argument that you can present, that would convince people of the existence of physical phenomena so outside their realm of experience. In fact, it would almost be more problematic if they did believe you. How could you respect the intellect of someone who believes a stranger's crazy stories without any proof? This is an interesting observation: the natives disbelieve you, even though you speak the truth, and they are rational to do so.

Scientists, having been highly educated in one specific field, often have trouble convincing laymen of their theories, especially theories that are either counter-intuitive or religiously and politically inconvenient. The evidence for evolution, for example, is so far beyond the grasp of the average person that talking to them about it is like discussing derivatives with someone who can't multiply. To even begin to understand evolution one must first understand the basics of chemical reactivity, organic molecule structure, genetics, and cell biology. In some cases understanding the basics is not enough, and can even cause more harm than good.

For example, many people think of DNA as a program, which is OK as a “training wheels” metaphor for beginners, but such metaphors often provide people with a false sense of understanding, and can lead them to incorrect conclusions. “Evolution can't produce new genetic information” is a common argument among those who have failed to grasp the limitations of the program metaphor. Someone who understands how DNA really works knows that this is absurd on its face: there is no “information” in DNA, only nucleic acids (ssDNA) that preferentially attract certain other nucleic acids (mRNA) which in turn preferentially attract certain other nucleic acids (tRNA) that have specific amino acids attached to them, which creates a long chain of amino acids. We call these amino acid chains “proteins,” and they do most of the functional work in the cell. The chemistry here is no more complex, in principle, than watching salt dissolve in water while rubber does not: the electronic properties of molecules cause them to interact with certain other molecules in specific ways.

New genetic “information” is often created when one gene is copied twice in replication, resulting in a new, redundant gene that can be mutated freely. Normally mutations in an important gene would hurt an organism, but with an extra copy of a gene random mutations can pile up until something useful appears. It usually only takes a few changes in amino acid sequence to change a protein's function dramatically, and evolutionary biologists have discovered that the majority of our bodies' proteins are clearly re-purposed from a handful of conserved core processes. Sickle cell anemia, along with the resistance to malaria it provides, is caused by a single amino acid substitution. The organism gains a completely new survival skill (malaria resistance) thanks to a single random mutation. This is like turning Moby Dick into The Great Gatsby by changing a single word.
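
To make the single-substitution point concrete, here is a toy sketch of translation as a plain lookup (a deliberate simplification: real translation is tRNA chemistry, not a table, and the tiny translate helper is mine, for illustration only). The sequences are the standard textbook beta-globin example:

```python
# Toy model of translation: each mRNA codon maps to one amino acid.
# Real translation is chemistry (tRNA base-pairing), not table lookup,
# but the mapping itself is exactly this deterministic.

CODON_TABLE = {  # only the codons needed for this example
    "GUG": "Val", "CAC": "His", "CUG": "Leu",
    "ACU": "Thr", "CCU": "Pro", "GAG": "Glu",
}

def translate(mrna):
    """Read an mRNA string three bases at a time; return the peptide."""
    return [CODON_TABLE[mrna[i:i + 3]] for i in range(0, len(mrna), 3)]

# First seven codons of human beta-globin (after the initiator Met).
normal = "GUGCACCUGACUCCUGAGGAG"  # codon 6 is GAG (glutamate)
sickle = "GUGCACCUGACUCCUGUGGAG"  # one base changed: GAG -> GUG

print(translate(normal))  # [..., 'Pro', 'Glu', 'Glu']
print(translate(sickle))  # [..., 'Pro', 'Val', 'Glu']  <- sickle variant
```

A single A-to-U change in one codon swaps glutamate for valine, and that one substitution is the entire difference between normal hemoglobin and the sickle variant, with its malaria resistance.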

I don't really see any solution to the problem of scientific illiteracy in America, in regard to evolution or to any other specific issue, except to educate people about the core ideas behind the theories, so that they can understand their plausibility. Some scientists point to the fossil record when talking to laymen about evolution, but seeing indirect evidence that one type of animal can change into another is less convincing than understanding the deeper truth: that all animals are colonies of eukaryotic cells, no different in concept from a colony of yeast on a petri dish. Eukaryotic cells form social colonies, and each member specializes in a certain type of task. Over time these social networks become permanent, producing cells that could not live independently and thereby creating a truly multicellular organism. The animals we see today are merely the result of radical cell specialization, but every single cell within them contains the unmistakable signs of its free-living ancestors. (Sometimes these ancestral traits for independence even come bubbling back to the surface. We call this “cancer.”)

The single biggest hole in the American layman's scientific understanding is the knowledge that everything in biology is a variation on a theme. At the cellular level a relative handful of conserved regulatory mechanisms and core processes support all life on earth, and at the macroscopic level all of the diversity of mammalian life is the result of a series of nips and tucks to the original tetrapod body plan, which itself is merely a coelacanth body haphazardly optimized for life on dry land. It is difficult for any honest person to see the skeletons of bat wings, dolphin flippers, dog feet, and human hands without noticing that they are like four movies based on one novel. Even the organelles of the eukaryotic cell are merely inward foldings (invaginations) of the cell's plasma membrane that have divided the labor of the cell into distinct sections. The endoplasmic reticulum, both rough and smooth, is specialized to perform functions that the normal cell membrane performed in the ancestral organism.

The most essential functions of life are MacGyvered from whatever the cell found lying around in the environment. The mitochondria, the cell's energy factories that make life on a large scale possible, are just bacteria that formed a symbiotic relationship with a larger cell. The nucleus, despite its modern specialization, was originally just another cell membrane: the cell protected its DNA from its own internal environment the same way it protected itself from the external environment. Real innovation is nowhere to be found in biology.

The biologist's story of life cannot be understood without an excellent grasp of all the underlying science. In that story, self-replicating RNA molecules, competing with one another to become the most efficient replicators by virtue of their basic physical properties, began to inhabit naturally forming phospholipid bubbles; in an arms race with other replicators they augmented their sequence storage with DNA and their metabolic activities with proteins, until the RNA had reduced itself to an intermediary between the two; and the cells formed by this process went on to form the mutually beneficial social colonies that now comprise all organisms, so that the entire blue whale is created because of, and in order to continue, the self-replicating chemical reactions of the nucleic acids in its cells. This science, however, is so difficult that every fall many students who want to learn it, who study it full time, whose future careers will depend upon knowing it, and who represent the more intelligent members of society, fail to learn it adequately, much less completely.

What hope is there, then, for the average American layman? Can his skepticism, in the face of such ignorance, be forgiven? In the shadows of ignorance many of history's greatest minds have come to conclusions far less plausible than those of the average creationist.

Friday, March 11, 2011

The Problems of Knowledge

The single most pressing problem with all truth claims, whether they are moral, scientific, or personal in nature, is the inability of any system to validate itself. Any system of beliefs, formal or informal, may be perfectly internally consistent, and able to answer all positive arguments against it, but it cannot prove itself. This is because some of the questions at the core of epistemology (the study of how we know what we think we know) are unanswerable in principle.

All attempts to prove a given belief or set of beliefs to be correct, or at least superior to other beliefs, are vulnerable to two types of epistemological arguments: regression arguments and criterion arguments. For example, if a scientist were arguing for a particular theory, the skeptic, armed with a regression argument, might ask him why he supports that theory above others. When the scientist explained the available evidence and data, the skeptic would ask why evidence proves anything. If the scientist tried to explain why he thought it was reasonable to use physical evidence to understand the physical world, the skeptic would ask why reasonability should be any guide.

The criterion argument arrives at a similar conclusion a slightly different way. If confronted by a philosopher arguing for a particular ethical position, the criterion skeptic would ask the philosopher how he knew that his conclusions were correct. When the philosopher explained his principles and his thought process the skeptic would then ask why he believed in those particular principles, which had led him to his original conclusion. When the philosopher defended his principles, the skeptic would then ask him how he knew that these defenses made sense (in other words, what criterion was the philosopher using to validate the claims made in his defense). The cycle of how and why would continue indefinitely, because every truth claim requires a criterion to establish its validity and every criterion seems to require truth claims to establish itself. For example, the claim, “I went to the store yesterday” requires the “we can trust our memories” criterion to be true, but this criterion requires other specific truth claims to establish itself.

Eventually, a man confronted with the criterion argument is bound to suggest reason, or logic itself, as a criterion, which brings us back to the same place as the regression argument. Most arguments reduce to an appeal to reason: they tell us to believe in the truth of something on the basis of its reasonability. But how do we know that logic is a path to truth? This is the key unexamined assumption behind all claims. It is, obviously, an impossible question to answer, or even to analyze, because every claim made in the course of discussing it is vulnerable to the same question, so progress is impossible. For example, one common response to this issue is to say that all claims, if we know nothing about them, have a fifty percent chance of being true, so reason has at least even odds of leading to truth.

Ignoring all of the other problems with this Bayesian line of thinking (such as the fact that we don't know this to be a binary situation, as there may be many supposed routes to truth), it should be clear that the counter-argument itself is open to the original criterion or regression argument: how do we know that unexamined claims have an even chance of being true? Why do we think this? And so on.

Some philosophers attempt to avoid this issue by saying that the question itself (“why do we trust reason to lead us to truth?”) is invalid because it relies on reason to make itself understood. In other words, the ideas of validity and fallacy, coming from reason itself, cannot be used to attack reason. They respond to the original question by saying, “We believe in reason because it is reasonable.” When confronted with the obvious circularity of this thinking they point out that the fallacy of circular reasoning is known to be fallacious only due to reason, and therefore cannot be employed by someone attacking reason. This is understandable in the case of those who are actually arguing for irrational ideas, and claiming that reason itself is invalid, but against the skeptic, who is merely asking the question, their response is nonsensical (not to mention an ad hominem).

Reason demands that ideas prove themselves, and that everything be rationally analyzed and tested, and it is this quality of reason, this refusal to accept arbitrary ideas, that demands an answer to the question of whether logic leads to truth. Reason is like a snake eating its own tail. If reason does not lead to truth, then truth is unknowable, as reason is the only tool we have, yet reason is unable to give an account of itself, or to validate its own ideas at the base level.

This is the problem: if we accept reason, we must also accept the question of whether reason can lead to truth, which seems unanswerable, and if we reject reason we are left with nothing but arbitrary beliefs. Yet, our acceptance of reason as a path to truth is itself arbitrary, given that we can produce no argument for it, and therefore everything we believe as a result of reason (which is essentially everything we believe) is arbitrary. And, if everything is arbitrary, no one belief can be said to be better than another.

Where does this leave ethics, philosophy, science, or any other academic subject that claims to seek truth, not to mention morality itself?