New name?

For now I have set up the ‘rhetoricalcritic’ and ‘badrhetoric’ domains so they both point to the blog. But this kind of avoids the issue – namely, should I change the name of the blog?

‘badrhetoric’ was kind of a lowest-common-denominator choice back in the day (2006), but now I am slightly less snarky, and I have carved out a small niche where I publish. I am tempted to just make the switch entirely. Any thoughts?

Bandwagon arguments in academia

One of my pet peeves when reading academic arguments is the persistent and lazy use of the bandwagon fallacy – i.e. “many people think X, so X is right.” In academia, though, it usually takes a more specific form: “The vast majority of qualified scholars in this subfield think X, so X is right.”

Where should I begin my critique, I wonder? That popularity is no guarantee of validity? That popular ideas deserve to be interrogated just as much as unpopular ones? That the unprofessional arrogance displayed by using this fallacy is only trumped by its stupidity? That taking such a position attempts to cut off future productive scholarship at the knees? And, perhaps finally, that using it is a sure sign of the weakness of one’s position?

Yes, this is a target-rich environment, to be sure. Let’s try some examples.

Exhibit A: “Best Practices”

If I had a nickel for every time someone appealed to “best practices” in my semi-home field of rhetoric and composition and its sister field, technical communication, I would be able to take my family out to a series of nice dinners.

Behind the concept of “best practices,” it turns out, is a crude bandwagon argument. To follow “best practices” in teaching in tech comm, for example, is to use the techniques that are well attested in the scholarship, supported by “name” academics whose “names” can be dropped liberally in conversation, and that are ultimately safe and uncontroversial.

Screw that.

I don’t care if 99.9% of the members of NCTE (National Council of Teachers of English, BTW) support a given mode of instruction. I only care about whether or not it works. Show me whether or not it works – not how popular it is, or what academics happen to endorse it. Give me evidence, not sponsorship.

I have known very few real top-flight scholars in my career thus far. If they have something in common, though, it would be that none of them follow trends or take a poll before they plant a flag. The pursuit of knowledge eschews such petty and empty considerations – and so does logic. Someone dedicated to such an ideal would never use popularity as evidence of anything except popularity. Academic arguments are to be evaluated on their own merits, not on whether or not they are in season.

So, in short, while “best practices” might have once had a more innocent connotation, now it just makes me irritable. It represents the worst of academia, when it is at its pettiest – when it is political.

Exhibit B: A Historical Jesus

I’m gearing up to teach the Synoptic Problem in “Studies in Religious Texts” again, so this has been on my mind of late. One of the subtopics that naturally comes up with the SP is how much of the gospel material is based on any historical Jesus – which then leads to whether there was a historical Jesus, and if so, what can we say about him?

“Mythicist” arguments, which hold that Jesus has no historical basis and is instead a kind of assembled myth, are as old as the hills, dating back to the first pagan critics of Christianity. I’m agnostic on the issue, due to what I see as a failure of everyone writing or speaking on the matter to make a decisive case (given the paucity of evidence in any direction), but I am frankly peeved at the standard position – that mythicism is nonsense because no mainstream biblical studies or religious studies academic doubts there was a historical Jesus.

Now, I hardly need to point out at this point in my post that such an “argument” is one big bandwagon fallacy (as well as an argument from authority, but I’ll leave that one for some other day). It is telling a questioning undergraduate to sit down and shut up, pulling rank, asserting the primacy of one’s subdiscipline, and being an arrogant twerp, all at once. These are all things I despise and oppose.

So I have a certain sympathy for the mythicists as underdogs. That doesn’t mean they are right – they still have to make a case, and so far no smoking gun has appeared – but they have a decent case that is just as strong as the default one.

So why do they get such a hostile reception? Why the flippant and repeated use of the bandwagon fallacy in response (occasionally laced with a choice insult about one’s employment prospects, educational background, and sanity)?

Well, let’s return to rhetcomp for a moment. The most telling and long-lived idea in rhetcomp is process pedagogy – the belief that writing is a “process” rather than a “product” and should be taught accordingly, as a series of repeating and mutually informing steps, instead of emphasizing the text that results. Now, feel free to correct me if I’m wrong, but I can’t think of a single instance of a “process” compositionist slapping down anyone who challenged or questioned process by saying, “The vast majority of composition academics support process theory. Therefore, your argument is a fringe belief and not worthy of a full treatment.” If such a pretentious mandarin exists, please send me a citation, but I don’t think one does.

Now, at the same time, there is that old chestnut mentioned before – “best practices” – that is used instead to enforce consistency. But as it turns out, “best practices” is mostly political cover, because it can mean whatever the instructor wants it to. Composition is a field full of rugged individualists. Some are old-school grammar mavens, some are process fanatics, some are post-process theorists, others are expressivists, and others (really most) defy easy categorization. We know how to cite selectively. Some of us resist this, of course, but not all – not even most.

Back to the historical Jesus. There is a great Wikiquote page that has collected countless putdowns of mythicists:

https://en.m.wikiquote.org/wiki/Christ_myth_theory

(They are all down near the bottom.)

Perusing them will reveal that they are basically all variants of the same technique: bandwagon fallacy + insult to education, occupation, or sanity + optional ridiculous comparison to Holocaust denial.

Why are they all the same? Why so prevalent?

First, there is no downside. Picking on mythicists is risk-free power projection. It’s functionally no different from a bunch of jocks stuffing a nerdy kid into a locker. I have more power than you, so in the locker you go. There is no penalty.

Second, and more fundamentally, the nerdy kid is an existential threat. He represents a counterargument to the jocks’ primacy – a reminder that logic and curiosity might trump brawn outside the artificial world of the school, where the jocks are comparatively powerless. Similarly, the biblical studies folks know their authority is severely limited outside of academia. Outside of it, free thought reigns. Can’t have that. The existing pecking order must be maintained, at least temporarily. In the locker you go.

In a perfect world, biblical studies academics would lay open the question of a historical Jesus. But in order to do that they would have to open their minds. And if you think the average person has trouble with that little task… well. It’s not a question of a threat to the existence of the discipline. Opening up the question would doubtless lead to an explosion of relevant literature. It would be good for the field, showcasing at last a bit of historical respectability.

But the possibility is a clear threat to individual egos – which is why I think the jock-bully comparison is apt. There is nothing more fragile than a bully’s ego. It has to be constantly fluffed and pampered like Donald Trump’s pseudo-hair; otherwise it falls apart. Why? Because, ultimately, there isn’t much under the combover. There is no defense for a historical Jesus that doesn’t special-plead Christian sources – which brings me to my favorite example.

Exhibit C: The Book of Mormon

The non-Mormon academic consensus is that Joseph Smith, the founder of Mormonism, was a fraud. The Book of Mormon was not written from golden plates handed over by the angel Moroni, but cobbled together from 19th-century myth-making and the KJV. The jocks are very clear about this.

However, there is another body of academics who call themselves experts on the Book of Mormon – and they are all Mormons. They have all kinds of arguments supporting the authentic nature of the text, including sworn eyewitness statements – the famous “Three” and “Eight” – to the existence of the golden plates, and literary analysis purporting to show its originality (check out Orson Scott Card’s defense sometime – it’s fascinatingly doltish).

So there is a problem here, namely that there is more historical evidence for the inspired composition of the Book of Mormon than there is for Jesus – despite the fact that the form of the offered evidence – multiple “eyewitnesses” – is basically the same. And yet the mainstream historians make quick sport of Smith, and defend Jesus’s historicity to the death.

How, you might wonder, can they so easily expose as fraudulent the recent founding of a religion, yet secure certain historicity for someone supposedly dead for nearly two thousand years, for whom we have no reliable non-Christian attestation?

The reason the dice keep coming up seven and eleven is not the incredible luck of biblical studies. It’s because the dice are loaded. And if you point this out? Well, the majority of academics support X. Back in the locker, you.

One more thing.

Another quality I have noticed in top-flight scholars, as opposed to average academics, is that they almost never defend anything. Instead, they attack. It might be an unexplored area, a position or subject that has been neglected, or a trend that has spiraled out of control – but they are always aggressive, constantly stalking and pouncing like half-starved tigers, relentlessly seeking improved understanding.

Playing defense is, after all, the slow death of anything resembling intellectualism. You trade a life of seeking new ideas and understanding for the apologetic goal of preserving the beliefs of the past, usually in exchange for minor power of some sort – employment, tenure, social respectability, money – the usual earthly rewards. Maybe you get paid in spiritual coin instead, but either way, it sounds like a devil’s bargain to me.

But what do I know? I’m just an English professor, of questionable sanity, who probably denies the Holocaust in his spare time. My arguments couldn’t possibly have any merit. I’m a member of the lunatic fringe – a crackpot, a veritable crank, a babbling child talking of adult things he couldn’t possibly comprehend.

And that is how the bandwagon fallacy is essentially the ad hominem fallacy in another guise: by elevating the group, it savages the individual. This is why it deserves the fiercest opposition we can muster.

Brief Rant

I have been feeling depressed lately about my research and publishing prospects. I’ve accomplished a fair amount since I finished my Ph.D., but I don’t feel professionally or emotionally fulfilled by any of it.

I haven’t published anything since early 2016. Much of my time in between has been taken up by two articles, one of which has been rejected three times by good journals despite interest, and the other of which is promising but slow to develop. I’m not sure what I’m doing wrong, but whatever it is, I’ve slowed down.

It would be easy to attribute this decline in production to my son Luke, who turned 2 last April. But I don’t. I generally gain strength from him. He makes me laugh.

It would also be easy to attribute this decline to the fact that my recent articles are edgier than my previous pieces.

My dissertation (aside from the first chapter, which appeared in Rhetorica) remains unpublished, I have found, because its conclusions don’t align with contemporary Christianity or conservative biblical criticism. I have shopped it everywhere and found no takers. I consider this a massive failure on my part, even though I know it isn’t. It’s a people problem.

To sum it up, my diss argues that pretty much the entire ‘life of Jesus’ part of the Gospel of Mark (everything besides the Passion narrative – the arrest and the crucifixion) is a work of rhetorical fiction. This means Judas is a fictional character inserted for drama, John the Baptist (while a real person!) never had anything to do with Jesus, and all of the post-resurrection appearances are late additions. Those three observations are chapters. Ultimately, I hold that the gospels are not four buttressed eyewitness accounts, but competing fictional narratives that openly plagiarize each other in a quest to control the Jesus narrative – which was created by the author of Mark in the first place!

In retrospect I should have seen the problem, though – it threatens too many people. Even if I point to all the form criticism that basically spells it all out, it doesn’t matter. It’s too edgy, even though I find it remarkably commonsensical. I wonder, though, if I should try to build up to it through a series of smaller articles. I have only toyed with sending out the individual chapters. Chapter 1 found a home, but only after many years.

Anyway.

There is also my half-secret hobby as a novelist. I have written three larger works of fiction. The first was about 60,000 words and what I would today call fan fiction. Practice. The second was 190,000 words and much better; it had an agent, but nothing happened, so I self-published it back in 2003, which was a mistake. Very few readers. That bummed me out for over a decade. I’ve read far worse in print, so that’s another disappointment.

Two years ago, though, I wrote another, about 80,000 words. Thought I had a winner. Sent query letters to over 100 agents. No bites. Abandoned the project. Then I started writing a sequel, which was odd behavior, even for me. I felt like the characters could have another go. This has made me think that I should approach publishers directly. But I feel frozen by the likely outcome.

I think I’ve been burned too much. There is only so much negativity that I can bear, and it’s starting to wear on me. I need a win occasionally to justify continued effort. I just don’t know right now where I’m going to get one. I have a lot of germinal article ideas, but there are so many that it’s hard to pick just one and bang it out.

This feeling will probably pass. I just have to find a way around it.

Try Again

So I was reading this piece in the NYT: “We Aren’t Built to Live in the Moment.”

I have some problems with its theory of mind:

Your brain engages in the same sort of prospection to provide its own instant answers, which come in the form of emotions. The main purpose of emotions is to guide future behavior and moral judgments, according to researchers in a new field called prospective psychology. Emotions enable you to empathize with others by predicting their reactions. Once you imagine how both you and your colleague will feel if you turn down his invitation, you intuitively know you’d better reply, “Sure, thanks.”

Ugh. I can see why this guy is all excited about this, but he’s missing some crucial ingredients. We may be planning creatures, true, and that is important, but our ‘plans’ are made up of present judgments that come very, very quickly, and the past is constantly bubbling up to influence those present judgments. To say we are prospective creatures is to oversimplify – rather, we are present creatures with complex pasts AND futures. Ask a victim of abuse or trauma whether they spend all their time thinking about the future, or whether someone from a poor background and little education (that pesky past) thinks about “the future” the same way as someone from a middle-class background and a decent education.

Emotions are not even remotely understood, though the article’s framing is at least a starting point.

This article reminds me of one of my pet peeves, which is that we are in the dark ages of understanding the brain. Let me give an example.

At one point in my research in grad school I was very interested in how people read. Not how to teach people how to read, but how reading worked in the brain. An analogy might be wanting to know how an internal combustion engine worked instead of wanting to know how to drive a car.

So I read a lot of reading psychology. I was massively disappointed. I discovered that no one in the field had more than a vague idea of how reading worked. The brain was effectively a black box to them – the input and output were known, but as for what happened between people’s ears, they didn’t have the foggiest. Lots of theories, no evidence. We are literally sentient beings with brains, and we have little idea how our brains work, at all.

I have come to find that pretty much all research into the brain is in this state. We are not much beyond poking physical regions of the brain with fingers and electricity to discover what does what. So I look upon supposed new avenues as total shots in the dark. Again, this is the dark ages.

Ultimately I don’t see any major innovations in this area until we attempt the very thinkable project of building an artificial brain. THEN we will know how one works – or we won’t, because too much of what a brain does is emergent. Just welding “emotions” onto a computer isn’t going to do it.

To leap into my field for a moment, rhetoric is largely a study of how decisions are made based not on ironclad logic, but on emotions. When Mr. Spock says on Star Trek that “it is not logical,” he is mistaken – if he were really telling the truth, as Vulcans are supposed to do, he would say, “it is not emotionally satisfying to me at the present moment.” That’s not nearly as quotable, of course, and pointing out that EVERY time Spock says he is being “logical,” he isn’t, would take me all day.

Suffice it to say, it’s true we are emotional creatures, but our past influences those emotions, and we also make WRONG decisions very often. Frankly, our emotions are not very good at making decisions, especially when the RIGHT answer is not blindingly obvious. Self-persuasion helps, but it often turns into rationalization, like Spock and his supposed “logic” when all he really has is certain values.

Randomness and Teaching

Well, Trump won, and I suppose I will comment on that at length at some point. But I want to discuss something else.

I was thinking this morning about the randomness inherent in making decisions. Think of a path that forks left or right with no clues as to what follows – what makes you choose left or right? SOMETHING does. Back when I knew something about programming – 1990? – we would use random number seeds based on the system clock if we needed a semi-random number. I have to wonder if the circadian system offers the brain a similar out.
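For the curious, the old trick looked something like this in C (a minimal sketch from memory, not any particular program I still have):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void)
    {
        /* Seed the pseudo-random generator from the system clock,
           so each run of the program starts in a different state. */
        srand((unsigned) time(NULL));

        /* The fork in the path: no information either way, so flip a coin. */
        puts(rand() % 2 == 0 ? "left" : "right");
        return 0;
    }

The “choice” is only as unpredictable as the seed – change the clock, change the answer – which is part of why the circadian analogy tempts me.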

That (random?) thought said, I am a general fan of randomness when teaching. I don’t have a lot of formal structure, usually, other than a vague ‘we are going to discuss X,’ or ‘we are going to do this exercise together to master Y,’ or ‘we are going to play a game in order to learn Z’.  I leave the creation of teachable moments to chance; I figure the friction created by me, the students, and the material rubbing together is going to create sparks that I can then turn into a fire. Once I have a fire going, then the class takes on a life of its own and all I have to do is enjoy the heat.

I do prepare graduate courses differently than undergraduate ones, though. I waltz into undergraduate ones and lecture extemporaneously, as I know the material really well. For graduate classes, even though I still know the material, I usually prepare a page or two of bullet points and questions that I want to hit. It’s more of an emergency blanket; if the class discussion slows or meanders, I have my page to lean on to restart things.

So I guess what I’m trying to get at is that I rely on a certain degree of unpredictability when teaching. I make a lot of teaching decisions on the fly and instinctively rather than planning them out. Planning is valuable, and I sometimes do a fair amount of it, but it’s become necessary for me over the years to react quickly to conditions in a class.

FYC is different, though (I mostly teach upper-division PW and rhetoric). FYC students need structure; my more freewheelin’ style doesn’t mesh well with freshmen who come to the course feeling lost technically, socially, and materially. They don’t do well with abstract thought or ethical dilemmas. They don’t necessarily know how to answer, or ask, good questions. They are not as comfortable with ambiguity as I am. So I have to adjust and break the course into discrete, predictable units. It doesn’t suit my personality, but adaptability – even random adaptability – is the essence, I think, of decent teaching.

The impossible task

One of the more interesting things that I noticed when I first started studying rhetorical theory is that some rhetorical situations are impossible tasks. Everyone, I think, at one time or another, has encountered an audience that cannot, or rather, would not, be moved.

The distinction between ‘cannot’ and ‘would not’ is important. If an audience cannot be moved – if there is some gulf of values that cannot be crossed by any conceivable method – then that is one thing: it means that rhetorical power has limits. But if an audience refuses motion – if it chooses not to move when it could have – that implies something else: namely, that the audience has all the real power, and we should speak less of rhetorical power and more of audience power. Rhetoric becomes more of a curious byproduct – a residue of an interaction – than a means to an end.

So if audiences can choose not to be moved, all rhetorical situations are impossible tasks. People cannot be persuaded – rather they choose to persuade themselves in the light of certain situations or stimuli.

Where does this place the so-called persuasive speaker, the charismatic, the leader? Obviously some people can move others and are demonstrably better at it than others, right? So I think the power to refuse movement is present but not always exercised. It would require a mechanism that is the reverse of cognitive dissonance; that is to say, instead of rationalization in the face of dissonant input, there is a resistance to information that does make sense to the listener – an unwillingness to move, to listen, to process. (I may be equivocating between “dissonant input” and “makes sense,” though.)

Ferguson speech

The prosecutor in the Ferguson case, Robert McCulloch, gave a very interesting speech last night while announcing the grand jury’s decision. I am particularly interested in it because of its extensive use of moderating language, given that I recently published a piece on moderation.

Over and over again, McCulloch stressed that the grand jury had worked extremely hard, that every piece of possible evidence had been extensively weighed and considered, and that the process was fair and impartial and had considered every angle. This must have been 90% of his prepared remarks, and much of it preceded the actual announcement of the grand jury’s decision. The other 10% was criticism of the media. The announcement of the decision was almost anticlimactic given the amount of apology that preceded it.

Needless to say, all this moderating language, offered as an apology for the decision, could not possibly have succeeded. Ultimately the speech could do little more than reinforce the beliefs of those who believed the shooting was justified, and anger those who thought the incident was some form of murder. In short, McCulloch was in a no-win situation, rhetorically – there is literally nothing he could have said that would have changed anyone’s reaction to the news. About the only way he could have done worse would have been not to give the speech at all.

Review of The Centrality of Style

There is a new review out of my co-edited (with Star Vanguri) book, The Centrality of Style, in the journal Pedagogy. Written by Gretchen Dietz, it is very flattering about the contents and the authors.

I can’t link directly to it as it requires a subscription, but I can link to the journal, and suggest accessing it through a library.

New publication

Well, the title here is misleading. I have a new article forthcoming on moderation (see the About page) but I co-wrote it four years ago.

It has been quite the journey to get it published. For a long time I considered it an example of how peer review occasionally doesn’t work, because my co-author and I are at the point in our careers where we can smell whether something is publishable or not. This piece has always had that distinctive smell, but no one was biting. I’m glad that it will have an audience now.