Times are changing. We are not.

I saw the subject line of this post on a billboard north of Houston some time ago, advertising a local church, and I’ve continued to think about it since, in terms of the differences between the labels “conservative” and “liberal.” These terms are incredibly loaded, of course. But I want to discuss them first before remarking on the sign further.

I tend to think of “conservative” as a philosophy that rejects change for change’s sake, a sort of “if it ain’t broke, don’t fix it” variant. Certain ideals, government structures, cultural values, etc. are seen as needing preservation in their present state. This philosophy is not opposed to ANY change – just change seen as unnecessary, destructive, or rushed. Change is recognized as inevitable, but in need of careful management and review. This philosophy naturally leads to a brand of individualism that ironically undercuts itself; you are free to be an individual as long as you toe the value line.

Likewise, I tend to think of “liberal” as a philosophy that sees certain kinds of change as being artificially slowed by conservative stances when they need to be sped up, even in the face of majority rule that disagrees. It’s not a ‘change for change’s sake’ philosophy, either, but change is viewed as less of a threat and more of an opportunity. This philosophy naturally leads to egalitarianism, which favors the whole over the individual.

I should note that neither of these philosophies is inherently democratic (and both are vulnerable to charges of utilitarianism). Democracy is yet another philosophy, one that insists issues should be resolved through some kind of majority vote conducted through representatives of the people, whether chosen by election, lot, or some other method. Democracy, in other words, is built to subsume conservatism and liberalism.

Back to the sign – “Times are changing. We are not.” This is beyond conservatism, which recognizes that change is ok if controlled, and into the realm of fundamentalism. As Karen Armstrong and others have noted, fundamentalism is an inherently modern philosophy; it does not try to preserve the past as much as demand a return to a past that may or may not have actually existed. As such, fundamentalism requires the presence or perception of liberal-style change or it has no casus belli. Fundamentalism sees itself as an island preserve trying to hold the line against a jungle of chaos. It even professes a special, timeless immunity to change. Change may even be characterized as cyclical and passing, something to be weathered until a future time.

The reason I like the sign so much is that it’s so vague and yet simultaneously quite specific. It’s a church advertisement, complete with a black and white photo of a white-haired man I assume is a leader in the church; therefore, the change referenced is almost certainly religious/theological in nature, even though it is not identified. As such, it is ALL religious/theological change of any kind. Furthermore, the verb in the second sentence, ‘are not,’ is also curious. It is not ‘will not’ or ‘can not’ or ‘have not’ or even ‘are not changing’ or ‘are not going to’ – it is the non-specific present ‘are not’. The verb in the first sentence is progressive, describing a process still occurring; if the elided ‘changing’ is restored to the second sentence, we get the irony of a progressive verb describing the absence of change.

Someone decided – a minister, a group of deacons, a church support organization, etc. – to put up this sign and pay for it. Its message is therefore not trivial to them. And yet it is independent of history. Assuming it’s a Protestant organization of some sort, by definition the congregation’s values stem from the Reformation, which certainly qualifies as a pretty big religious change. So the sign’s claims must be more historically short-term, namely “Times are changing. We are currently advocating a religious worldview dating to year X that holds Y and has no current plans to deviate from its beliefs, unlike everyone else.” The colorful history of Christianity does not allow the sign’s claim; churches can seem to be bastions of non-change, but the plethora of versions of the religion that have exploded in the last few hundred years undercuts this claim. Religion is not immune to change, whether its temporary waystations want it to be or not.

However, the appeal of the sign remains to those who want to believe change can and should be held off, which is where fundamentalism and conservatism start to blend together. I could critique liberalism in the same way, as it can be pushed into a ‘change for change’s sake’ mode that is equally illogical.

Essay on ethics and agnosticism

I found this short essay of mine, written a year or two ago, while rummaging through some old files. I’d forgotten about it entirely. I may have abandoned it because it was getting too close to Kant’s categorical imperative. I don’t have time to revise it properly, but it’s sat in the proverbial trunk long enough.

Agnostic Morality

One of the most important questions that every human being must address in his or her lifetime is the question of morality. In other words, how do we know what is good and what is bad?

There are many positions on morality, of course, held by various religions, cultures, families, and individuals all over the world, but which one is the most advisable? And should we even try to pick one – granting the assumption that some positions are better than others, or that any given one is better than most – or should we instead create our own? There are several common ways of resolving the question of morality, and I’d like to discuss these before arguing for an alternative that I believe is the most attractive.

The first method of resolving the question of morality is a popular one: subscription to a religion with a fixed moral code. The question of morality is resolved with an appeal to a higher power or powers that professes absolute moral standards and condemns, or forgives, those who disobey. Even a religion that professes a more “gray” moral code is functionally the same in terms of responsibility; either way, reliance on religion for moral authority shifts the onus for decision-making to a deity/deities or scripture instead of the individual or even a community. Adherence, of course, to this divine code may be spotty, but it at least serves as a guide for all but the worst hypocrites.

A second method of resolution is a stance of moral relativism. Relativism, simply put, holds that all meaning, including morality, is relative, man-made, and governed chiefly by context, culture, and language. The extreme case of cannibalism is a good illustration. In some places cannibalism is morally acceptable, and in others, it is not; regardless, according to relativism, no locale or group can claim ultimate moral authority over the action. There is no absolute “good” or “bad,” only individuals and communities that agree on what is acceptable, and may very well clash if groups with different positions come into extended contact. It is not just man that is the measure of all things, as Protagoras put it, but acculturated humans in combination and dialogue with other acculturated humans that are the measure of all things. I should note that moral relativism should never be confused with nihilism, which discards morality entirely; relativism recognizes that the meaning agreed upon by communities, in particular, has value and authority within that community.

A third method of resolving the question of morality is the nihilism that I just mentioned, which takes relativism further, and too far for most. The nihilist throws out all moral standards, recognizing the authority of none, and claims that anything is allowable in a world where meaning is cheap. It is the individual, therefore, who imposes their morality onto the world. It is important to note that this may result in a philosophy of total unashamed selfishness, or a pattern of behavior indistinguishable from a saint’s. It is the refuge of both the insane and the fiercely independent.

A fourth method of resolution I will call, for lack of a better term, physiological empiricism. It is similar to moral relativism, but its beliefs are grounded less in an understanding of the complicated and messy role of language and more in empirical observation. The physiological empiricist holds that morality is not quite entirely relative because it does have a traceable wellspring – the human body. Human morality, therefore, is an evolutionary byproduct of sentient beings that have two sexes, two arms, two legs, the limited perception offered by five senses, an omnivore’s diet, and a cognitively advanced brain and accompanying nervous system. In other words, evolution (a process following logically from the known processes of physics and chemistry, with game theory explaining how genes best each other in the reproductive cycle) produced the notion of morality. This position can be further extended by noting that religion and culture, as well as the other possible solutions to morality, are human inventions designed to allay nettlesome complications arising from the evolution of the body, including the very notion of morality itself.
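
To make the game-theoretic mechanism concrete, here is a minimal sketch of the repeated Prisoner’s Dilemma, the standard toy model for how reciprocity can out-earn pure selfishness. The sketch is my own illustration with the conventional textbook payoffs; nothing in it comes from the essay itself.

```python
# A minimal sketch: two strategies compete in a repeated Prisoner's Dilemma,
# the standard toy model for reciprocity out-earning pure selfishness.

PAYOFF = {  # (my move, their move) -> my score
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def always_defect(history):
    return "D"

def tit_for_tat(history):
    return history[-1] if history else "C"  # cooperate first, then mirror

def play(a, b, rounds=200):
    score_a = score_b = 0
    hist_a, hist_b = [], []  # each side decides from the *other's* past moves
    for _ in range(rounds):
        move_a, move_b = a(hist_b), b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (600, 600)
print(play(always_defect, tit_for_tat))  # defection gains little: (204, 199)
```

The specific numbers don’t matter; the mechanism does. A strategy that looks like proto-morality (cooperate first, reciprocate afterward) needs no moral authority to sustain itself, only repetition.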

There are serious problems with all four of these solutions.

For the first solution, that of religion, the sheer number of different religions, sects, and cults on this planet makes a choice among them a game of Russian roulette with a revolver that may or may not have a bullet in any chamber. You may be lucky enough to have been born in the “correct” religion, or not; in any case there is no way of determining validity besides faith. Untold numbers of people have no problem with this, but I do.

The second solution, moral relativism, has an uncanny ability to describe human interaction. On the other hand, it lacks the moral core that many people crave and runs into a significant problem when communities with different standards interact; it provides no method of resolution beyond, perhaps, a system of civic rhetoric (which I’ll get to later). It can also be inconsistent with itself, declaring a universal rule that there aren’t any universal rules.

Likewise, the third solution, nihilism, is hardly a balm for a feeling of spiritual emptiness, and the nihilist has no need of internal or external community resolution; it is generally not a philosophy for people who wish to live with and respect others.

The fourth solution, that of the physiological empiricist, can likewise offer little solace or practical day-to-day resolution of conflict to the moral questioner, as it posits that human beings are little more than fantastically complex machines evolved to promote the distribution and replication of genes. The situation it describes may be quite accurate, but it is almost useless for resolving the question of morality.

Thus the question remains for the philosophically inclined. We are torn between a strong desire for certain, or at least reasonably certain, moral knowledge that can guide our actions and the actions of communities, and a recognition that the world is filled with innumerable moral claims that conflict, whether or not those claims are declared by deities or formed by humans through consensus. No obvious solution exists beyond a crude balancing of the urge for the divine and our flawed observations of the world.

Most people are not ethical philosophers, and yet most people, I would suggest, come to similar compromises, allowing as much faith-based spirituality or hard-nosed secularism into their lives as they can deal with at any given moment. Few of us have the luxury of being morally consistent in a world that constantly presents moral dilemmas that challenge black-white morality and the gray alternatives alike. Ethics are expensive.

However, I would argue that the agnostic individual occupies a unique position in regards to the question of morality. Agnostics are skeptics, waiting for more information, and in that sense they are empirical and open-minded, natural candidates for the fourth solution of physiological empiricism. But at the same time, a consistent agnostic does not dismiss the possibility of some form of religion being an appropriate path.

Agnostics are not atheists, who by definition must hold onto a relatively firm position on a deity or deities; the agnostic reserves judgment and thus floats, unattached, between many viewpoints. This reservation of judgment comes from an interesting notion of responsibility to truths that may or may not exist, but at the very least are currently inaccessible. This agnostic conception of responsibility to the truth, I suggest, offers a possible fifth option for resolution of the question of morality.

The stereotypical center of Western morality is some form of the Golden or Platinum Rule, stated one way in Matthew 7:12: “In everything, do to others what you would have them do to you.” The ethic of reciprocity or fair play contained in this statement can be taken to be of divine origin, but it can also be construed in a relativistic or even physiological sense; it is a rule derived from human behavior and the demands of living in communities filled with humans who are predisposed to react violently when their self-interests are not taken into account by others. It can also, however, be interpreted in terms of responsibility, in that those agreeing to the rule agree to take on a certain responsibility for their behavior, as well as for the welfare of others.

This maxim falls apart, of course, in many specific cases, as it has no notion of truth or how to manage what Greek stasis theory calls krinomenon, the point of judgment; the details of judicial enforcement of the rule, in other words, are left to the reader. But the maxim becomes significantly stronger when its conception of responsibility is stressed. Each individual acceding to the rule has not only a right to be treated like all others given similar circumstances, but a responsibility as an agent to personally practice a policy of equal treatment. All actions in the Golden Rule’s bare-bones moral system are owned by their agent; there is no collective responsibility. Its moral system fails or succeeds based solely on individual moral decisions. In that sense, it is directly analogous to the success or failure of individual organisms in evolution; successful moral systems participate in an arena filled with many other reproductive strategies, and are replicated, marginalized, or rendered extinct through their direct contests with alternative strategies.
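
The population-level version of that analogy can be sketched too. Below is a toy replicator-dynamics loop of my own devising; the strategy names and payoff numbers are invented for illustration and claim nothing about actual moral history. Each strategy’s share of the population grows or shrinks with its average payoff against the current mix.

```python
# Toy replicator dynamics. fitness[i][j] is the (invented) payoff to
# strategy i when it meets strategy j.
strategies = ["reciprocal", "exploitative", "altruistic"]
fitness = [
    [3, 1, 3],  # reciprocal: cooperates, but punishes exploiters
    [2, 1, 5],  # exploitative: preys on altruists, stymied by reciprocators
    [3, 0, 3],  # altruistic: cooperates unconditionally
]
shares = [1 / 3, 1 / 3, 1 / 3]  # everyone starts with an equal share

for generation in range(500):
    # average payoff of each strategy against the current population mix
    payoff = [sum(f * s for f, s in zip(row, shares)) for row in fitness]
    mean = sum(p * s for p, s in zip(payoff, shares))
    # replicate in proportion to relative success
    shares = [s * p / mean for s, p in zip(shares, payoff)]

for name, share in zip(strategies, shares):
    print(f"{name}: {share:.3f}")
# exploiters flourish only while altruists last; once the altruists are
# marginalized, the reciprocal strategy comes to dominate the arena
```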

The Golden Rule might then seem to be a good choice for a moral core. The problem, however, is that it does not directly address altruism, another notion of responsibility that many are fiercely unwilling to abandon. There is no role in the rule for altruistic actions, only a careful measurement of the predicted response to the agent’s actions. Everyone must be treated fairly, but only according to a social cost. There is no moral charge to care for the poor, the sick, or the hungry; there is no responsibility not to kill if the agent is willing to be killed in turn (the Golden Rule technically allows wars without casus belli and even suicide bombing, as long as the agent doesn’t mind reciprocation, which they could hardly do in the case of suicide). In practice, then, as the agent is allowed to construct a moral system that pleases his or her standards and then willy-nilly inflict it on others, the Golden Rule (if isolated) can be attacked on grounds that it is fascist – or even nihilist.

How, then, can the strengths of the Golden Rule be combined with altruism in order to cover this loophole? I would suggest agnosticism is a pretty good answer. Agnosticism has a unique role of responsibility toward the question of the existence/non-existence of God, in that all options must be considered and all the evidence must be weighed before anything is resolved. However, agnostics must be moral agents in the world despite their doubts; pragmatic day-to-day decisions must still be made. These decisions, however, are made on the same moral basis of responsibility to the truth as the question of God’s existence, only quicker, through the means of a civic rhetoric that allows probabilistic decisions about questions that cannot be resolved unequivocally.

And thus, I would argue, agnosticism merges with altruism; the agnostic quest to find evidence for the existence/non-existence of God is best promoted through the successful spread of knowledge and ideas, which can only happen in a peaceful, productive society. In other words, agnosticism injects a peaceful morality into the Golden Rule through the concept of responsibility to the larger community. Indeed, there is no point to withholding judgment and operating on probability if there is no environment in which to further seek the truth. Therefore, the agnostic has a charge to create, maintain, and defend communities in which knowledge and ideas are celebrated and discussed. The disadvantaged, who under the more primitive and fascist Golden Rule could be disregarded, cannot be ignored; any and all of them may be part of the grand puzzle of existence, and the agnostic must, if he or she is to be logically consistent, acknowledge their potential to contribute to humanity and treat them accordingly.

Of course, this solution does not address the need for a divine presence, and a reasonable counterargument could be made that most religious people inject plenty of altruism into the Golden Rule all the time. What’s so different about agnostics? The difference is that in the place of a god or gods, agnosticism promotes a concern for the community and humanity as a whole. The agnostic fears not a supernatural power, nor the community itself, but a personal dereliction of his or her responsibilities to the community and the quest to answer the existence/non-existence of God. The purpose of the agnostic’s life, therefore, is adherence to the responsibilities of moral agency that he or she practices as a member of a community, in the sense that increasing knowledge and ideas are promoted by supporting the efforts of all the members of the community. Currently, I think this is the best way to promote a resolution of moral questions, if not the existence or non-existence of God. Your mileage may vary.

Affect

I don’t like to comment on moral matters too much, especially in this blog. So let’s put aside for the moment that I continue to think Michael Vick is a deplorable human being regardless of how many touchdowns he throws. My opinion is irrelevant, of course. The number of touchdowns he throws, though, is NOT irrelevant; each one shifts a section of public opinion toward ‘forgiveness’, or whatever you prefer to call it, because each one is a highly public act and is charged with all sorts of positive meanings. How can a touchdown be bad? How can a superior athletic performance be bad? The implicit connection between virtue and athletic prowess is one of the most ironclad American (and not just American, mind you) values in existence. Even knowing about it and thinking about it doesn’t diminish its power that much. Even now, the world waits with bated breath for Tiger Woods to redeem himself with a major.

This failure to diminish worries me. One of the central assumptions of rhetorical study is that awareness of rhetoric constitutes a defense against it; that is to say, if we are aware that rhetoric is being employed, it will not affect us as strongly as it will affect someone who is not aware that a series of strategies and tactics is acting upon them. And yet even as I am more aware of the pull, the pull is just as strong. I can feel a little tug on the value of redemption and the redeeming characteristics of the sublime in sports. It is a persistent little tug, that one.

My working explanation for this centers on narrative. I like stories that end in a certain way. Vick’s story has several appropriate endings for me, usually involving a cold, dank cell. Woods’ story also has several preferred endings, most of which involve him winning 30 majors. This is probably directly linked to my view of Woods’ sexual shenanigans as sexual shenanigans rather than moral outrages, and Vick’s dog-torturing as a heinous and unforgivable crime rather than the cost of doing business. My values demand certain narrative outcomes, and if those values are strong enough, they can cut through any rhetoric. When they are malleable or uncertain, rhetoric can get a foot in the door.

So this is a crossroads of values: in regards to Vick, my intolerance for any cruelty toward dogs, which I consider a morally superior form of life to humanity (I should write an essay one day on that), agrees with my minor hostility toward the cultist behavior of sports fandom. In regards to Woods, however, my indifference to the sexual escapades of public figures somehow overshadows that minor hostility and negates it. This may be because I never saw Woods as a role model – I think I’ve said before that I was convinced for several years that he was some kind of experimental golf-playing robot that had escaped a secret government facility and gone on to win the Masters. I don’t admire the guy… but at the same time I have to admit I’d like for him to win, and this feeling – this emotion – is remarkably similar to my desire to see Vick in jail for life. Strange.

What is Rhetorical Criticism, Anyway?

I get asked this question a lot, and as it pertains to some manuscript revisions I’m making, I thought I’d take an informal stab at it here first.

I need to revise the question a little, though, and change it to “What makes a good rhetorical critic?” or even, “What do I think, personally, makes a good rhetorical critic?” Just talking about the criticism itself as some objective, free-floating entity seems a bit of a cop-out to me – reasons forthcoming shortly.

A good rhetorical critic starts with several bedrock epistemological assumptions. Ignore or sidestep them at your peril.

The first assumption is that all meaning worth talking about is an artifact of human perception, and thus limited by the boundaries of our particular physiology, evolutionary processes, personal experiences, sociocultural forces, etc., etc. Meaning outside of human perception is not worth talking about because, quite honestly – and quite ironically – we can’t talk about it in any meaningful way. We can, however, analyze our perceptions and the perceptions of others to our heart’s content.

The second assumption builds directly upon the first. If all we have is human perception to play with, and our perception is limited, flawed, and problematic, as Hume astutely put it, then the grand bulk of human communication will necessarily have to be a series of arguments about the nature of the world. We will constantly be trying to communicate our perceptions – or at least what we want others to think are our perceptions – to others, who, limited by their own perceptual filters, will try to communicate back to us, and will be forced to deal with exactly the same problem in reverse. Imagine the human race as a giant room filled with brains in vats, who can do little more than send each other a constant barrage of garbled text messages and then argue over the contents of these messages using precisely the same medium. The simplified medium in this metaphor stands for the whole human sensory suite – sight, smell, touch, taste, hearing, the vestibular system, etc. In other words, all communication, all our efforts to communicate in this perpetually confused state, is rhetorical and epistemic by nature. As such, rhetoric is a kind of applied philosophy and vice versa.

The third assumption builds on the second. The observation that all communication is rhetorical and epistemic is not terribly useful by itself. Our order-seeking, category-hungry brains prefer simpler fare in order to avoid overload, confusion, and general insanity. And so we are drawn inexorably to classify the communication that we use and encounter by genre, by tone, by purpose, by anything, really – the taxonomic urge, the pleasure of stereotyping, is quite powerful. With this comes the realization that while all communication might be rhetorical, some of it seems really, really rhetorical, whereas other texts are far less so. This is a byproduct of our preference for simpler fare; effective rhetoric is almost always hidden in some way, for if it is noticeable, then it becomes suspicious and challenges our worldview. Remember, we don’t want to fully acknowledge the extent to which all communication is rhetorical and epistemic – it’s just not possible to live with such a fundamentally bleak assumption second-by-second. So we simplify. Naked persuasion becomes undesirable as it exposes the constantly refreshed epistemological white lie that allows us to get through a simple conversation without going nutters. The good rhetorical critic, therefore, knows that much of what seems at first to be bereft of persuasion will turn out, with close attention, to be rhetorical, though there is no telling in many cases until some careful reading of the text in question is performed. But there is a necessary limit to this, where insanity lurks and we start theorizing about the rhetoric of bowling. Good rhetorical critics lurk near the edge, but they don’t go over.

The fourth assumption might be the most important: rhetorical critics cannot escape from this strange communication system with anything like objectivity. The good rhetorical critic knows he or she is embedded and complicit in whatever medium and text he or she chooses to study. There is no magical scholarly impartiality; those who pursue it like the Holy Grail, interestingly enough, tend to end up the most compromised, trapped within their own methodology. This corruption is everywhere, and everyone knows about it; a good rhetorical critic, however, embraces it like an old friend, shines a light on it, and reminds everyone about it, all the while noting and admitting their own complicity. This is why talking about ‘rhetorical criticism’ absent its agent feels a little dishonest to me; there are as many flavors of this activity as there are practitioners. The term is useful shorthand, but it has limits.

The fifth assumption is a bit more mundane than the rest; this is where “methodology” finally creeps in (you might have been wondering when it was going to make an appearance). Holding the previous assumptions, the good rhetorical critic realizes that genre and its ilk, playing off of the brain’s propensity for order in an inherently chaotic world, are the key to understanding how texts persuade. The reason for this is that it is impossible to do good rhetorical criticism without knowing what kind of text you are examining. If the initial classification is poor, then the resulting analysis is near useless. This means, fortunately or unfortunately, that rhetorical criticism is an art, not a science; that initial classification is made more by gut instinct and experience than by evidence, especially if evidence is hard to come by. Furthermore, that initial classification cannot be fixed in stone. It has to have some serious give. If you kick it, it should shift an appreciable amount. Otherwise, all your analysis can ever do is prove your initial assumption, and you are reduced to pronouncements, not arguments, when you choose to tell others about texts. The sciences know this, generally, but not always the arts.

The sixth and last assumption is more obviously a special topic or a value than the others: namely, a good rhetorical critic thinks rhetorical criticism is worth doing, much like Ebert thinks talking about films does wonders for humanity. Calling attention to how the previous assumptions apply to certain texts – namely, that persuasion is going on – is a good idea. And it’s a good idea because rhetoric tends to be hidden, misunderstood, and used for nefarious purposes as much as for good ones; understanding how it is used, how it works, and what the ethical dimensions are contributes to the general human enterprise. It also makes it far easier to teach speaking and writing if the teacher knows how to deal with rhetoric on an abstract level that is not wedded to any specific genre or context. And it’s certainly a good idea to promote more effective communication between human beings.

So that’s it, really: all meaning is limited by human perception, all communication is rhetorical and epistemic, some texts are more rhetorical than others and rhetoric tends to be hidden for effectiveness as well as general sanity, subjectivity needs interrogation, genre identification is key, and examinations of rhetorical texts promote better understanding of human communication. That’s rhetorical criticism in a nutshell. I suppose I could go on to talk about specific things to look for in texts, reading strategies, terminology, etc, but these assumptions, at least to me, are far, far more important.

Pragma-Dialectics and Wrenches

While in the midst of boning up on argumentation theory, I recently read Fallacies and Judgments of Reasonableness: Empirical Research Concerning the Pragma-Dialectical Discussion Rules by van Eemeren, Garssen, and Meuffels, a welcome empirical investigation of pragma-dialectical theory that contains, among other things, a restatement of the commandments of the pragma-dialectical method.

In short, P-D theory is a set of rules, or, rather, a machine or heuristic, for detecting fallacies, which are defined under P-D as errors or mistakes in argumentation. The theory could also be viewed as a form of ideal argument or dialectic to be aspired to. It has all the obvious connections to speech-act theory. But it has its problems, and I was reminded of them while reading.

I have always been struck by how Commandment 4, “Standpoints may not be defended by non-argumentation or argumentation that is not relevant to the standpoint,” is hopelessly, hopelessly idealistic, even by the ideal standards of pragma-dialectics, and furthermore betrays an unnecessarily narrow and non-epistemic conception of rhetoric.

Behind a lot of P-D’s commandments is the questionable assumption that anything resembling a wrench in the gears of an argument is bad. I have found rhetoricians in general to be rather comfortable with the idea of such wrenches, as well as their continuing and often random presence, as they are understood to be necessary accidents in the long, messy process of making knowledge; rhetoric is epistemic. And this is a good thing, because without it we would be bound to syllogistic logic and unable to decide or accomplish almost anything. It would be impossible to do even the simplest of tasks – say, brushing my teeth – without the option of arbitrarily choosing from competing claims on my time that have no obvious ranking. A scene from Tom Clancy’s Red Storm Rising comes to mind, where the anti-missile system on a carrier fails to fire at two incoming missiles because it cannot decide which one to target first; the carrier is hit by both missiles.
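
The carrier anecdote is at bottom a tie-breaking problem. Here is a tiny sketch of my own (it has nothing to do with the novel’s actual systems, or with P-D’s formalism) contrasting a decision rule that demands a uniquely best-justified option with one that tolerates an arbitrary choice:

```python
# Two decision rules for picking a target from equally scored options.

def strictly_best(options, score):
    """Refuses to decide unless one option is uniquely best."""
    best = max(options, key=score)
    ties = [o for o in options if score(o) == score(best)]
    return best if len(ties) == 1 else None  # deadlock on a tie

def with_tiebreak(options, score):
    """Breaks ties arbitrarily: max() keeps the first maximal option."""
    return max(options, key=score)

threats = ["missile_A", "missile_B"]
danger = lambda threat: 1.0  # both threats are equally dangerous

print(strictly_best(threats, danger))  # None -> no shot is ever fired
print(with_tiebreak(threats, danger))  # missile_A -> at least one is engaged
```

The arbitrary choice is not “relevant argumentation” in C4’s sense, but it is the only thing that gets the job done.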

This is why I enjoy walking through the P-D rules with such minor, yet non-trivial, questions as “Is this the best time for Mike to brush his teeth?” or “Should Mike walk his dog in the next 15 minutes?” or “Should we buy the 12 oz or 18 oz box of Cheerios?” It’s very hard not to break, say, Commandment 2, “Discussants who advance a standpoint may not refuse to defend this standpoint when requested to do so,” almost immediately, because the standard defense of most reasonable positions on these pressing issues is, “Well, I think this is about right, so…” C4 falls as well; C7 follows quickly, as do C8, C9, and C10, like dominoes. The qualitative guesswork of daily life just doesn’t cut it in this system.

That said, I’m a big fan of C1: “Discussants may not prevent each other from advancing standpoints or from calling standpoints into question.” It’s not like anyone actually follows this rule with any consistency, but it’s pretty to think so.

Then again, I don’t like C2: “Discussants who advance a standpoint may not refuse to defend this standpoint when requested to do so,” because it is impractical to assume the burden of proof for all statements or arguments one might make; this leads directly to one of the more diabolical debating maneuvers, that of demanding that your opponent explain every single claim they make and calling them out when they fail to do so. Unless the maneuver is immediately pointed out and countered, the result is usually a waste of time for all involved. In other words, C2 can be a nasty weapon that avoids, rather than promotes, productive dialogue, one of the key points of P-D.

However, let me reserve my deepest concerns for C10: “Inconclusive defenses of standpoints may not lead to maintaining these standpoints and conclusive defense of standpoints may not lead to maintaining expressions of doubt concerning these standpoints.” Well, the second part is ok, I suppose, but that first clause is a doozy. I can’t hold a position that I can’t conclusively defend? That throws out every religion in existence. It also keeps me from brushing my teeth at midnight. The authors do allow a “zero standpoint” of “pure skepticism” (194 – why am I suddenly citing pages? I never cite pages here), but only after a long set of chapters where it escaped mention. My agnostic brain likes that concept, but why can’t I lean in one direction or the other without some sort of syllogistic reasoning? It would seem to me that most important questions are under debate because the answers are non-obvious, and this situation is brought into being by a lack of applicable evidence; the natural result of any debate, then, is a series of very small shifts of opinion after the initial judgment, far too small to be described by merely three positions: “Yes,” “No,” and “Zero.” P-D’s empirical measurements of its rules on real people allow for a very fine range of opinions about the rules themselves, but once you start applying the rules, they are far more rigid toward actual content and arguers.
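
If it helps, the finer-grained alternative I have in mind is easy to sketch. Treat a standpoint as a degree of belief rather than as “Yes,” “No,” or “Zero,” and let each inconclusive argument nudge it a little. The Bayesian framing and the numbers below are mine, not P-D’s:

```python
# Each weak, inconclusive argument shifts a probabilistic standpoint slightly,
# using odds-form Bayes. The likelihood ratios are invented for illustration.

def update(belief, likelihood_ratio):
    odds = belief / (1 - belief)   # probability -> odds
    odds *= likelihood_ratio       # posterior odds = prior odds * LR
    return odds / (1 + odds)       # odds -> probability

belief = 0.5  # initial judgment on a genuinely non-obvious question
for lr in [1.2, 0.9, 1.1, 1.05]:  # weak, partly conflicting arguments
    belief = update(belief, lr)
    print(round(belief, 3))  # drifts gently in the neighborhood of 0.5
# far more positions than "Yes," "No," and "Zero" can capture
```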

These aren’t the shades of blue you’re looking for

Rereading Hume’s Enquiry has brought the so-called “missing shade of blue” problem to my attention again. I have never accepted that it is a problem, and while I was driving yesterday, I thought of a few ways to demonstrate this.

The problem is as follows. Hume’s theory of perception classifies all perceptions as either ideas or impressions. Impressions come from sense experience; ideas come from impressions. This theory holds as long as no ideas can be generated without the use of an impression. However, Hume lists an apparent exception: imagine a man who has lived his entire life having seen all the different shades of blue save one. If shown a palette of all the shades of blue that he is familiar with, placed in order, will he be able to detect the absence of a shade? The common-sense answer is yes – and yet Hume dismisses it as a minor if singular exception. Several camps exist on this issue – one holds it really is an exception, and another does not, but it’s not easy to reconcile either position with Hume’s line of argument.

I can think of several reasons that Hume was right to dismiss this objection, though he probably should not have been as mysteriously cavalier about the matter, especially given the rhetorical aims of the Enquiry.

Some of the following suppositions match preexisting arguments. I have placed them in order from weakest to strongest.

1.) The situation as given is impossible to replicate. Color is not made of separate shades, but is rather a continuum. How can the man be sure he has not seen that shade before? Did a team of scientists keep him in a bubble his entire life, one from which that particular shade had been filtered out? They would have to make sure he had never seen a prism or a rainbow. It’s like saying the man has used numbers all his life without ever encountering 42. Could Hume’s man perceive a missing shade without the presentation of all the shades of blue? Probably not. The example is loaded – it assumes, in fact, that there is a missing shade, a problem I’ll address a bit later.

2.) If I accept the situation, the idea of the missing shade is still not independent of impression – it requires extensive knowledge of color, which is dependent on simple sense perception. One individual color on the entire spectrum does not constitute an idea independent of sense perception, especially if defined as a blend of two colors. Furthermore, the mere notice of a gap in the sequence is built on a foundation of years of experience with color, and the concept of a gap itself is not, so far as I know, a necessity of mathematics. This argument is a little too ordinary-language philosophy for me, but it’s important nonetheless.

3.) The perception of a gap in a series, or in any pattern, does not require that gap to actually exist. This, I feel, was Hume’s plan all along – his coming evisceration of causality would render the blue-shade example moot.

Much of the argumentative strength of the blue-shade example comes from our knowledge that there IS a missing shade, but the man in the example does not have that certainty. He only suspects there is one – he cannot prove it, for he has no sensory experience of it. Rather, he can only suggest there is a very high probability there is a missing shade, much like I can only posit there is a very high probability that the Indian Ocean exists (I’ve never seen it) until I have seen it, and even then I may be misled, for our senses are rather untrustworthy.
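
To put the probabilistic character of the “gap” in concrete terms, here is a small sketch of my own (the hue numbers are made up, and nothing like this appears in Hume): the missing shade is nothing but a prediction extrapolated from the spacing of shades the man has already experienced.

```python
# Hypothetical hue values for the shades of blue the man has seen, in order.
hues = [200, 205, 210, 220, 225, 230]

# The expected step size is itself learned from prior impressions...
steps = [b - a for a, b in zip(hues, hues[1:])]
expected = min(steps)

# ...and the "missing shade" is just an unusually large step in the series.
gaps = [(a, b) for a, b in zip(hues, hues[1:]) if b - a > expected]
print(gaps)  # [(210, 220)] -> a shade is predicted near 215, never perceived
```

Nothing here guarantees that the predicted shade exists; the inference is parasitic on prior impressions and may simply be wrong, which is exactly the possibility the next example illustrates.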

In the 1985 film The Goonies, the Mikey character finds the skeleton of a long-dead pirate called One-Eyed Willy, who is still wearing an eyepatch. Curious, Mikey pulls back the eyepatch… and there is no eye socket, only bone. The expectation was that the patch covered something absent, but Willy never had an eye to begin with. The anticipated missing eye is revealed as non-existent. This line of thinking leads rather quickly to Schrödinger’s cat; it is not until the moment of sense perception that questions of existence or non-existence can be partially enlightened.

Imagine this scenario – Hume’s blue-deprived man perceives there is a missing shade, and goes looking for it… and never finds it. He experiments with dyes, travels the world, gives talks to breathless audiences, writes furious monographs. He dies without ever seeing it and without any other human being ever finding it. Some scholars, in fact, suggest his perceived “gap” in the color spectrum is actually a fundamental principle of the color spectrum, evidence of a limitation of the human eye, or a mere symptom of the man’s madness-tinged brilliance.

Hume’s notion of causality allows such a scenario, as it allows ALL scenarios. The mere notice of a possible missing shade demands nothing. Take John Couch Adams’s predictions of the existence of Neptune. What if they had come to nothing? Newton’s laws would have had to be reexamined. What if, rather, the measurements made by Bouvard had been incorrect, and there were no discrepancies in the data upon remeasurement? The perception of a gap or discrepancy in a pattern is a sense perception that requires absolutely nothing to follow it. This, of course, does not mean that nothing does follow – only that deductive logic is useless for such questions.

But, you might ask, can’t Hume’s colorist have an IDEA of a missing shade that is independent of sense perception, without it having to exist? Well, no. I once worked in an eyeglass lab with a color-blind fellow, who oddly enough was very good at color-dyeing lenses; he went solely by darkness of tint and the labels on the dye vats. He knew there was an entire world of color that he did not have access to, and I’m positive he thought about what it might be like on many occasions, but he had nothing save dark/light patterns – anyone who has experienced color knows it is far more than that – and word labels to go on. They could give him an ‘idea’ of what he could not experience, but he had no independent way of confirming whether his ‘idea’ matched up, save the unreliable testimony of six billion people or so. His ‘idea’ cannot approach a color-sighted person’s ‘idea’ of that shade (which is itself imperfect in proportion to experience with that shade); it is at best an approximation made up of similar sensory perceptions that he does have access to.