The Link Between Competence and Character

Tiger Woods’s recent “comeback kid” storyline and the ongoing accusations against Judge Kavanaugh remind me that America has an obsession with linking competence to character.

Americans understand competence in two ways. The first is as a minimum. Competent means you meet the minimum requirements for your job, role, or sport. You can use it as a pejorative – “He’s just competent” – or as a compliment – “I think you’re competent” – signaling that we ourselves don’t quite know what to make of the concept.

The second way, which is much more insidious and worthy of analysis, is that competence signals good character; a competent person is a good person. When Woods was struggling on the links, it was far easier to link that struggle to personal failings of will, talent, or ethics. But when he’s winning, those concerns are forgotten and replaced by their opposites. He is “mentally tough” and “brilliant” and “disciplined” now, an object of celebration and adoration, a victim of his injuries rather than ruled by them, as he would be seen if he were still losing.

Kavanaugh, too, is a litmus test for how competence is viewed. On one hand, Republicans tend to point to his long career as evidence of competence, and this is extended, by the second definition, to his character. He could not possibly be an attempted rapist because he is professionally competent, the reasoning goes. On the other hand, Democrats reverse this – believing the accusations, they reason that his sterling résumé is just the mask of a sexual offender. Either way, it’s a logical mistake. Kavanaugh’s competence as a judge does not cause better personal behavior, nor does an ethical life lead to competence on the bench.

Think over your life, over the many people you’ve known, and you’ll recognize many other examples: the selfless saint who can’t hold down a job, the crack businessman who made his fortune cheating customers, the immature star athlete, the idealistic employee passed over for promotion yet again. And yet we insist to ourselves that there must be a link between behavior and competence. There must be. But there isn’t.

All Trump voters in 2016 knew this very well, even if they might not admit as much in public. Trump was rich and famous, with all the trappings of success and a reputation, at least, for business acumen, but no one is seriously going to point to him as a paragon of moral character. And yet, even with this glaring, obvious example, how we view Woods or Kavanaugh doesn’t change in the slightest.

Disengagement from this kind of thinking is difficult. Among the professoriate of which I am a member, the professors who publish often are seen as hardworking and industrious, and many sins are forgiven. The ones who don’t get as much in print are viewed as lazy, goldbricking deadwood. This happens despite the inherent randomness of the academic publishing process and despite all the other things professors do, like teaching and administrative work. We’re supposed to be the smart ones, but we can’t easily escape the fallacy either.

Curiously, when it comes time to fire someone, the two concepts of competence and character separate a little. Either can be used to fire you without recourse to the other, but there is always an implication that you failed in both areas. Many positions apparently require you to be better than the average Joe, character-wise, given the employment clauses detailing the requirements of proper behavior. Behind this is the assumption that you can’t really do your job competently if your behavior leads people to view you as incompetent… even though your behavior has no necessary logical connection to your job performance. It is the appearance, or performance, of competence, then, that matters.

With Woods and Kavanaugh, we can see one figure ascendant, his competence and character simultaneously restored; for the other man, both concepts are crashing rapidly, because they are so closely linked. I am not suggesting that we do away with linking competence to character – I doubt we could, given how hardwired it seemingly is in the American mindset – but we might want to start applying it more carefully and questioning whether the claims it makes are really warranted.

On Facial Hair

The NYT has an amusing article on Bolton’s mustache and the political history of the particular trim level.

I have a full beard and long hair past my shoulders, so I feel qualified to hold forth somewhat on this topic.

I hate shaving. Absolutely hate it. I started in middle school, and it remained a painful, tedious process for over twenty-five years. I started with an electric razor but moved to a blade as I got older. It never got comfortable; I always cut myself and irritated my face, and my neck was typically a series of open wounds. Stubble for me appears in 12 hours or so, so shaving every day was mandatory.

As I got older, I went longer and longer between shaves, and often stopped for a week or more during vacations. If I had a job interview, I would shave close, but typically I went three days in between. Stubble became acceptable.

Then, around the time H was pregnant with L – when I was 39 or so – I grew a full beard during a vacation and simply neglected to shave it off. I had tried a beard in college once, but it itched so badly I gave up after a month. This time I persisted, and it paid off. After that first month, the itching stopped, and it was a revelation.

It helps, of course, that I have an occupation – professor – with no special expectations about facial hair, or head hair, for that matter. Most men in my department maintain some facial hair (typically with some gray), like the chin beards fashionable lately or advanced stubble, but I have the only full beard.

So I come to the question: what does it mean to have facial hair or not? For me, my long hair signifies that I’m not a suit, and the beard is simply further evidence of this. Frankly, if a man has the genetics to grow facial hair, why fight it? Social expectations? Trends come and go. I wish I had not spent decades listening to every conforming male who told me to trim that stubble.

Now I’m comfortable with my face. It expresses some of my personality. H says I look ‘blank’ without at least some facial hair, so there is that to consider as well. I have also noticed that people tend to take me more seriously now, even at work. I think it is more the gray than the beard, but I don’t think it hurts.

Ultimately, though, it is a personal decision. I couldn’t care less about fashion. Beards are a little more ‘in’ now, but so what?

If you are judging a person on appearance you are literally and figuratively engaging them at the shallowest level possible. Rather, examine if they are fair and compassionate. Little else matters, and I say that as an academic. Intelligence is common and easily bent to evil, and looks always deceive. A sense of justice and a care for decency, though, are rare and hard to replace.

So, moral: grow your hair any way you want that pleases you.

Try Again

So I was reading this piece in the NYT: “We Aren’t Built to Live in the Moment.”

I have some problems with its theory of mind:

Your brain engages in the same sort of prospection to provide its own instant answers, which come in the form of emotions. The main purpose of emotions is to guide future behavior and moral judgments, according to researchers in a new field called prospective psychology. Emotions enable you to empathize with others by predicting their reactions. Once you imagine how both you and your colleague will feel if you turn down his invitation, you intuitively know you’d better reply, “Sure, thanks.”

Ugh. I can see why this guy is all excited about this, but he’s missing some crucial ingredients. We may be planning creatures, true, and that is important, but our ‘plans’ are made up of present judgments that come very, very quickly, and the past is constantly bubbling up to influence those present judgments. To say we are prospective creatures is to oversimplify – rather, we are present creatures with complex pasts AND futures. Ask a victim of abuse or trauma whether they spend all their time thinking about the future, or ask whether someone from a poor background and low education (that pesky past) thinks about “the future” the same way as someone from a middle-class background and a decent education.

Emotions are not even remotely understood, but they’re a good starting point.

This article reminds me of one of my pet peeves, which is that we are in the dark ages of understanding the brain. Let me give an example.

At one point in my research in grad school I was very interested in how people read. Not how to teach people how to read, but how reading worked in the brain. An analogy might be wanting to know how an internal combustion engine worked instead of wanting to know how to drive a car.

So I read a lot of reading psychology. I was massively disappointed. I discovered that no one in the field had more than a vague idea of how reading worked. The brain was effectively a black box to them – the inputs and outputs were known, but as for what happened between people’s ears, they didn’t have the foggiest. Lots of theories, no evidence. We are sentient beings with brains, and we have little idea how our brains work at all.

I have come to find that pretty much all research into the brain is in this state. We are not much beyond poking physical regions of the brain with fingers and electricity to discover what does what. So I look upon supposed new avenues as total shots in the dark. Again, this is the dark ages.

Ultimately I don’t see any major innovations in this area until we do the very-thinkable work of building an artificial brain. THEN we will know how one works – or we won’t, because too much of what a brain does is emergent. Just welding “emotions” onto a computer isn’t going to do it.

To leap into my field for a moment, rhetoric is largely the study of how decisions are made based not on ironclad logic, but on emotions. When Mr. Spock says on Star Trek that “it is not logical,” he is mistaken – if he were really telling the truth, as Vulcans are supposed to, he would say, “it is not emotionally satisfying to me at the present moment.” That’s not nearly as quotable, of course, and pointing out that EVERY time Spock says he is being “logical,” he isn’t, would take me all day.

Suffice it to say, it’s true we are emotional creatures, but our past influences those emotions, and we also make WRONG decisions very often. Frankly, the emotions are not very good at making decisions, especially when the RIGHT answer is not blindingly obvious. Self-persuasion helps, but it often turns into rationalization, like Spock and his supposed “logic” when all he really has is a certain set of values.

Specious reasoning

There’s an interesting piece on William Lane Craig here at the Chronicle; it reminds me strongly of a piece that the NYT did on Rush Limbaugh years ago.

Both men are of interest to me as a rhetorician because of the power of their speciousness. Craig is a master of the Gish Gallop and other debating maneuvers, which I first noted after listening to a debate between him and Richard Carrier. His modus operandi is both predictable and devastating. I have to wonder why anyone accepts a debate with him when the odds are so heavily weighted in his favor; Craig is an apex predator of sorts, almost perfectly adapted to his statement/rebuttal/rejoinder environment.

The only effective defense against his tactics would seem to be either disengagement or incredulity (either of which he can dismiss as intellectual bluster!).

Another person Craig reminds me of, other than Limbaugh, is Ayn Rand, who still has followers. Both are dangerous entities to encounter as an undergraduate, who may lack (though some have it) the philosophical depth to recognize specious reasoning and to see how it can be persuasive despite its nature.

Measuring acceptability of arguments

Here’s a question worthy of some thought. At what point does an opinion become unacceptable? I’m talking about Santorum, not Fish, mind you, from the link.

Fish points out that Santorum’s position on church vs. state matters is not an outlier or crazy because “a number of Supreme Court justices and A-list legal academics” hold similar views. Fish ends his initial defense with the following summation:

This of course does not mean that Rick Santorum is right; only that he is not a total outlier or a nutcase. His views, although perhaps less well expressed than they might have been, are well within the boundaries of a legal and political debate that has been going on for more than a century.

Note the language – outlier, nutcase, within the boundaries. Fish acknowledges there are boundaries, and that it is possible to be an outlier or a nutcase (my favored term for this is ‘spewing horseshit’), but because a number of ‘acceptable’ authorities hold similar positions, Santorum is not, to use my term, spewing horseshit, but working within a larger intellectual debate.

This reasoning presents a problem. Apparently, in order for an opinion to become acceptable, it need only be vetted by the presence of a sufficient number of similar opinions that have already become acceptable, by dint of qualifications, charisma, etc. By this bandwagon-style reasoning, if Santorum espoused support for cannibalism, as long as a few Supreme Court justices and A-list academics held the same position, his position would be just dandy. That sounds a lot like the moral relativism everyone likes to bash, and reminds me of Hume’s discussion of the standard of taste, which my rhetoric class took up earlier this week.

What would be preferable? Judgment on the merits of the argument, not its popularity or whether high-placed individuals happen to think the same way. Of course this is not always possible. Such a value is more an ideal target than a daily standard. The standards of acceptability slip on a regular basis, even among the most objective and impartial – I recall that many of my graduate school professors had blind spots you could drive a bus through, some of which they knew about, and I make no claim to not having a few myself. But it seems to me that making acceptability arguments based on popularity is a particularly dangerous habit, one that breeds complacency and retards actually thinking about the argument itself.

Another standard I particularly dislike is dismissing acceptable arguments because they are old, sometimes even only twenty years or so. Sometimes this is warranted – for pre-WWII history monographs, say, or discussions of technology – but it shouldn’t be automatic and reflexive. I run into this kind of thinking in biblical studies constantly, but that field doesn’t have a lock on it by any means. The reverse is even more insidious – dismissing arguments because they are new and don’t match existing acceptable thought.

Sam Harris’s The Moral Landscape

Another vacation book review.

There has to be a reasonable middle ground between the cultural relativism that Harris dislikes and the “New Atheist” hostility to religion that he champions. I can understand the attacks on the NAs because the lot of them, especially Dawkins, are often crass. Then again, they’re sort of the professional poker players of the intelligentsia – a certain degree of crassness comes with the position almost automatically.

Now, I think criticism of religion is more than fair game. As Harris says in the book, he comes off as arrogant only because he takes the claims of religion, particularly Christianity, seriously, and he does have a point that the usual faith/reason attempts at synergy end up being pretty ridiculous, as his lengthy example of Francis Collins shows.

I can’t buy his total dismissal of relativism and religion, though. Relativism has its flaws, but it at least pushes us toward a default position of tolerance rather than an automatic, imperialistic judgment of superiority. And religion certainly has its flaws, but good can come of it – I’m not yet prepared to throw it out with the bathwater. It may be a ‘flawed science’, but that can easily be flipped around – Harris’s science is at times an unpersuasive religion, largely powerless against the straightforward power of family upbringing. A lot of dice have to fall the right way for someone to drop their upbringing, family, and core beliefs for the cold – if best currently available – embrace of scientific humanism.

Penn State

Ah, Penn State. You’ll riot over your football coach getting fired for not calling the police when his former assistant coach was molesting boys nine years ago – and before, and after – but where’s the riot over an assistant coach molesting boys on the job, with the administration’s tacit approval through its lack of action?

I could turn the knife a little more and say this is merely the sign of the bizarre moral system inculcated by a college with a large money-hungry football program. But I don’t entirely believe that. This kind of public anger is reactive, reflexive – madness for madness’s sake – and ultimately, inwardly directed. Who wants to be played the fool for so long, looking up to someone who ultimately falls far short of sainthood? That’s enough, I think, to flip over a few cars.

I am starting to get a little old, and I have yet to meet a saint. Everyone I’ve known mixes some good with some bad. A good reputation seems to me but a carefully manicured lawn – a sign that maintenance is being done regularly, but not much more. We are creatures fond of summation – that person is good, and that person is bad – and I don’t think that quite covers the usual features of human nature.

Cell phone companies don’t know where you are. Really.

I dislike alarmist stories like this. Of course cell phone companies know where your phone is pretty much all the time. The phone wouldn’t work otherwise.

This is not the same, though, as knowing where YOU are. We are not our phones.

Amazingly enough, it is possible to set your phone down and go somewhere else without it. People used to do it all the time, I’ve heard. As such, the records held by cell phone companies are of extremely limited usefulness in tracking anyone with a brain, especially if you consider how easily pay-as-you-go phones and the internet can be used for anonymous communications, to name two easily accessible alternatives.

It is also possible, believe it or not, to turn your phone off from time to time, or, even, to place it in a so-called ‘flight mode’ or ‘airplane mode’ that disables cellular transmission.

The incommensurability of iphones and staplers

Interesting piece on Thomas Kuhn chucking an ashtray at a graduate student. It’s witty, but not really funny, as I have a big issue with the author’s seeming bewilderment at the term “incommensurability.” The endnote in the article ignores (or is possibly completely ignorant of) the term’s long history predating Kuhn, in favor of extending a poor joke – and ignores, too, that Kuhn’s use of the term is pretty straightforward given minor context clues.

For an example of incommensurability as Kuhn uses it, there’s an iphone and a stapler on my desk.

These are incompatible technologies. You can’t staple an iphone (at least not with a Swingline – your local Home Depot has some that would, though) successfully, and you can’t place a call to a stapler or connect to it via Bluetooth. You could bang them against each other, but I’d be hard pressed to call that compatibility; the stapler is meant to staple pages, and the iphone’s many functions have nothing to do with staples.

However, an iphone and a stapler are not incommensurable. You can TRY to make them interact – staple the iphone (not recommended), call the stapler. The failure is highly probable, but you are not precluded from trying.

Let’s imagine, however, an iphone forever separated from a stapler by a heavy, thick stone wall. The stapler can’t physically get to the phone to try and staple it, and the iphone can’t get a signal to the stapler. They can be aware of each other’s existence, but that knowledge is trivial, as interaction is literally impossible and this makes mere incompatibility also trivial. In this situation, the iphone and the stapler are not only incompatible, but incommensurable.

That Kuhn uses this word to define the relationships between scientific paradigms therefore says quite a bit. He is also leaning on the older definition, which suggests a weighing or measuring between theories that is somehow rendered impossible.
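The distinction can even be sketched in code – a loose illustration of the desk analogy only, with classes invented for the purpose, not anything Kuhn wrote. Incompatibility means the interaction can be attempted but fails; incommensurability would mean there is no shared channel in which to attempt the interaction at all.

```python
class Stapler:
    """Staples pages; knows nothing about phones."""
    def staple(self, pages):
        if not isinstance(pages, list):
            raise TypeError("can only staple a list of pages")
        return f"stapled {len(pages)} pages"


class IPhone:
    """Places calls; knows nothing about staplers."""
    def call(self, number):
        if not isinstance(number, str):
            raise TypeError("can only call a phone-number string")
        return f"calling {number}"


stapler, phone = Stapler(), IPhone()

# Incompatible: the interaction can be *attempted*; it simply fails.
try:
    stapler.staple(phone)  # trying to staple the phone
except TypeError as err:
    print("incompatible:", err)

# Incommensurable (by analogy): no shared channel exists at all, so the
# attempt itself cannot even be expressed -- as if the two objects lived
# in separate processes with no connection between them, like the phone
# and stapler on opposite sides of the stone wall.
```

The point of the sketch is that incompatibility is a property of a failed interaction, while incommensurability removes the very arena in which interaction, and hence comparison, could occur.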

Merit

I’ve been wanting to write something on merit for a while. I think this has a lot to do with it, PR for her book aside. I don’t accept that parenting has only two extreme sides. Much of her claim rests on a ridiculously small sample size – well, her children excelled after her brand of parenting, so ALL children will, and all children who don’t have this kind of parenting will in turn not excel at anything. Or rather, that is what the excerpt implies.

I have problems with the whole idea of a meritocracy, though I’ve not gotten to the point that I can articulate them quite yet. It has something to do with self-worth and external validation, citizens vs. non-citizens, percentage of the population with “talent” or “competence” at specific activities, the reliability of education, knowing something vs. using that knowledge, celebrities as role models, and the American ideal of everyone going to college. Something seems deeply wrong to me. It’s not an issue of the world being “fair,” because, frankly, it is not. The perpetuation of the American dream is a civilization-level lie, but again, that’s not quite what I’m disturbed by.

Perhaps it is that some people accept America as flawed, but hold that it is the best system available. This passive judgment sacrifices a half-ton of ideals. Flaws are rendered permanent. There is no need to look for better systems or progress. There is only the struggle between parties, between who is right and who is wrong. The rules don’t change.

This line of thinking reminds me of one of my favorite movies, Dirty Harry. When it came out, one of the main criticisms was that it was a fascist vigilante fantasy, which has always struck me as a classic example of a bad reading. The film isn’t about how Harry is some kind of ideal vigilante wantonly pissing on the law as he executes criminals without due process. It’s about frustration.

I’m hard pressed to think of a film where the protagonist is more frustrated. Harry hates his superiors for being ineffectual and cowardly. He hates mundane bank robbers who won’t let him finish his hot dog. He hates suicidal idiots that waste his time. He hates rapists. He hates serial killers. He hates getting a confession thrown out because he tortured a man to make him reveal the location of a woman who was buried alive, and didn’t have a warrant to search his lodgings. He hates that his wife was senselessly killed by a drunk driver. He hates that his partners regularly get shot and then drop out of the game. He hates society for being cheap and tawdry, and for letting innocents be terrorized and killed. And he really, really hates having to do something about it, because it corrupts him and turns what would be meritorious – a strong desire for justice – into a disgrace. He has the same line-crossing problem that Batman has, though Harry has a far easier solution due to his willingness to shoot people dead. Magnum Force explores the same idea, but not nearly as well, though the talk between Harry and Briggs at the end is interesting because it clarifies how he has compromised with what he calls “the system.”

I don’t like the system that much either. Human life is valued semi-randomly. It’s not universally cheap (the middle class is still quite large), but neither is it uniformly expensive (no shortage of homeless people). Class matters, race matters, gender matters, money matters, fame matters, beauty matters, ambition matters, intelligence matters, and education matters. Of these, only the last three are viewed as completely neutral and dependent on individual free will. The rest you are either born into, or acquire through luck, misadventure, and/or application of the last three.