The good and the bad and perhaps some ugly

My losing streak for journal submissions has finally dried up. Two strong R&Rs that look promising have arrived, along with another crushing and disappointing rejection where I apparently managed to erase scholars of color, queerness, and other diverse groups. Or at least that’s what I’m told.

I remember the first time I got a competent rejection. It was in 2008, while I was finishing my doctorate. I sent a piece to Rhetoric Review and the longtime editor, Theresa Enos, sent back a short letter that more or less said, “Sorry, but there’s not enough there there.”

I thought about this, and I realized she was right. The thesis was of the “gee, this is interesting” variety and ultimately not useful to anyone. After a lengthy revision, the next journal started with what seemed to be a solid R&R but turned into a bait and switch with a third reviewer who thought they had found the critical flaw in my argument (they hadn’t; they just hadn’t read closely the two-page rebuttal in the middle of the piece). The piece finally hit at the third journal. This added up to a three-year delay in publication, of course. I don’t know how folks who don’t write essentially evergreen arguments manage to sustain careers.

Speaking of Rhetoric Review, until recently I didn’t know it was the first rhetcomp journal to require peer review, in the early ’80s. Which makes me think – most of the classic scholarship of rhetcomp is from the ’70s. I wonder.

Some professional updates.

Another rejection came in, this one for a co-authored piece.

I can’t figure out what it was rejected for, though; the editor gives no hint, and the reviewers didn’t find anything serious, just trivia. One of them even praises it in their opening (misstating the thesis, of course) like it’s an acceptance or at least an R&R, and doesn’t undercut it afterward.

So my overall reaction is… huh. Like the other two rejections I’ve gotten this year, there is little evidence they understood the argument at all. I have communicated to my co-author, a grad student, that this is not surprising, and the long process continues.

In brighter news, I just completed a very lengthy, complicated R&R that I have been sweating for months, so the same day one comes back, another goes out! There is no other productive response to rejection, especially silly ones, but persistence.

Split

Another rejection. This time it was a split; one reviewer accepted with no changes, and the other rejected outright.

It was a tough journal, but it was a good article, and I’d hoped for more. In particular, I’d hoped the reviewers would actually engage my argument in a way that might suggest that they had read it. Neither did. The quest continues.

My son, seeing I was annoyed, gave me a cheese cracker. That made it better.

Rejection

Got a rejection on an article today. Both reviewers rejected it – no R&R.

I have two firm rules about what I do when a paper of mine is rejected by a journal.

  1. Do not send that journal another piece until the editor changes.
  2. Reflect on the positives rather than the negatives.

The first rule is based on an old lesson that took me forever to learn. I don’t think I fully learned it until I was about 30. Maybe even later. Namely, do not try to win the favor of someone who doesn’t like what you are doing. Not only is it demeaning, but it’s a total waste of time.

In this case, the review took five months, and anyone who can’t find anything good in my ideas in five months is not worth trying to please. (I take one week to do a peer review. Maybe a week and a half. Tops.) The bit about the editorship is mostly wishful thinking on my part, based on a belief – erroneous and idealistic, of course, but I cling to it – that the editor bears the responsibility for accepting or rejecting, not the reviewers. There are a few journals that have the same editor for decades; I have learned to avoid those.

The second rule is also a practical one. The negatives are considerable – no acceptance or R&R from a journal I had specifically written the piece for, and two ‘peers’ who couldn’t find anything redeemable in ideas that had generated some excitement when I shared them with several colleagues. That’s a professional blow to anyone.

But the positives are also considerable. One reviewer dwells on 1) my article not being ‘rigorous’ and 2) their inability to find a reason for it to exist, and cheap shots like that tell me I hit a nerve – which is very interesting, given that I wrote a rather harmless theory piece that shouldn’t have pissed off anyone. The other reviewer said much the same thing, and both finger-wagged about how I had not cited enough literature, yet mentioned only two additional citations, both of which I could have run circles around. It is entirely possible (actually, it’s quite a common occurrence) that one or both reviewers wrote the citations in question…

Lit reviews are trivial. They can be easily added or omitted. That’s a dumb reason to reject a paper. Ideas are far rarer. The refusal to engage the thesis meaningfully is more telling.

My conclusion, therefore, is that the journal I chose and designed the piece around was a mistake. The piece contradicted the reviewers’ core assumptions about how such an idea should be handled with ‘rigor’, and they pushed back hard with numerous technicalities that could have been easily resolved in an R&R. Instead, a hard reject.

So I made a mistake. Wrong venue, and possibly wrong subfield. That’s a positive – I learned something. I won’t make that mistake again when I revise. Or I will, in which case I will adapt again. It remains a good piece, and it will find a home. I will sift through their comments and use some of them (one reviewer was much more helpful in this regard than the other), but I will also disregard the spurious.


Funded Leave & Other News

I got an email tonight saying that I have been awarded a Funded Faculty Leave (FFL). Other universities call this a sabbatical. In this case, it means that I will get paid to do little but research for one semester (likely Spring 2019) – no teaching and no service (though I suspect some will sneak in). This is good news. I will use it to draft a new book on the gospels and rhetoric.

In other news, I sent out another article today. That makes four journal submissions in four months. M is due May 7; I should be able to do one more, a chapter in a collection, before he arrives. Actually, I already have a draft; it’s mostly editing at this point.

I could grade, but it’s late, almost 9:15. I think some time with Stellaris is in order…

On Facial Hair

The NYT has an amusing article on Bolton’s mustache and the political history of the particular trim level.

I have a full beard and long hair past my shoulders, so I feel qualified to hold forth somewhat on this topic.

I hate shaving. Absolutely hate it. I started in middle school and it remained a painful, tedious process for over twenty-five years. I started with an electric razor, but moved to a blade as I got older. It never got comfortable, and I always cut myself and irritated my face. My neck was typically a series of open wounds. Stubble for me appears in 12 hours or so, so shaving every day was mandatory.

As I got older, I went longer and longer between shaves, and often stopped for a week or more during vacations. If I needed to go on a job interview, I would shave close, but typically went 3 days in between. Stubble became acceptable.

Then, around the time H was pregnant with L – when I was 39 or so – I grew a full beard during a vacation and simply neglected to shave it off. I had tried a beard in college once, but it itched so badly I gave up after a month. This time I persisted, and it paid off. After that first month, the itching stopped, and it was a revelation.

It helps, of course, that I have an occupation – professor – that has no special expectations about facial hair, or head hair, for that matter. Most men in my department maintain some facial hair (typically with some gray), like the chin beards that are fashionable lately, or advanced stubble, but I have the only full beard.

So I come to the question – what does it mean to have facial hair or not? Well, my long hair signifies – to me – that I’m not a suit. The beard is simply further evidence of this. Frankly, if a man has the genetics to grow facial hair, why fight it? Social expectations? Trends come and go. I wish I had not spent decades listening to every conforming male who told me to trim that stubble.

Now I’m comfortable with my face. It expresses some of my personality. H says I look ‘blank’ without at least some facial hair, so there is that to consider as well. I have also noticed that people tend to take me more seriously now, even at work. I think it is more the gray than the beard, but I don’t think it hurts.

Ultimately, though, it is a personal decision. I couldn’t care less about fashion. Beards are a little more ‘in’ now, but so what?

If you are judging a person on appearance you are literally and figuratively engaging them at the shallowest level possible. Rather, examine if they are fair and compassionate. Little else matters, and I say that as an academic. Intelligence is common and easily bent to evil, and looks always deceive. A sense of justice and a care for decency, though, are rare and hard to replace.

So, moral: grow your hair any way you want that pleases you.

New name?

For now I have pointed both the ‘rhetoricalcritic’ and ‘badrhetoric’ domains at the blog. But this kind of avoids the issue – namely, should I change the name of the blog?

‘badrhetoric’ was kind of a lowest common denominator choice back in the day (2006) but now I am slightly less snarky, and I have carved out a small niche where I publish. I am tempted to just make the big switch in total. Any thoughts?

Bandwagon arguments in academia

One of my pet peeves when reading academic arguments is the persistent and lazy use of the bandwagon fallacy – i.e., “many people think X, so X is right.” In this particular version, though, it is more along the lines of “The vast majority of qualified scholars in this subfield think X, so X is right.”

Where should I begin my critique, I wonder? That popularity is no guarantee of validity? That popular ideas deserve to be interrogated just as much as unpopular ones? That the unprofessional arrogance displayed by using this fallacy is only trumped by its stupidity? That taking such a position attempts to cut off future productive scholarship at the knees? And, perhaps finally, that using it is a sure sign of the weakness of one’s position?

Yes, this is a target-rich environment, to be sure. Let’s try some examples.

Exhibit A: “Best Practices”

If I had a nickel for every time someone appealed to “best practices” in my semi-home field of rhetoric and composition and its sister field, technical communication, I would be able to take my family out to a series of nice dinners.

Behind the concept of “best practices,” it turns out, is a crude bandwagon argument. To follow “best practices” in teaching in tech comm, for example, is to use the techniques that are well attested in the scholarship, supported by “name” academics whose “names” can be dropped liberally in conversation, and that are ultimately safe and uncontroversial.

Screw that.

I don’t care if 99.9% of the members of NCTE (National Council of Teachers of English, BTW) support a given mode of instruction. I only care about whether or not it works. Show me whether or not it works – not how popular it is, or what academics happen to endorse it. Give me evidence, not sponsorship.

I have known very few real top-flight scholars in my career thus far. If they have something in common, though, it would be that none of them follow trends or take a poll before they plant a flag. The pursuit of knowledge eschews such petty and empty considerations – and so does logic. Someone dedicated to such an ideal would never use popularity as evidence of anything except popularity. Academic arguments are to be evaluated on their own merits, not on whether or not they are in season.

So, in short, while “best practices” might have once had a more innocent connotation, now it just makes me irritable. It represents the worst of academia, when it is at its pettiest – when it is political.

Exhibit B – A Historical Jesus

I’m gearing up to teach the Synoptic Problem in “Studies in Religious Texts” again, so this has been on my mind of late. One of the subtopics that naturally comes up with the SP is how much of the gospel materials are based on any historical Jesus – which then leads to whether there was a historical Jesus, and if so, what can we say about him?

“Mythicist” arguments, which hold that Jesus has no historical basis and is instead a kind of assembled myth, are as old as the hills, dating back to the first pagan critics of Christianity. I’m agnostic on the issue due to what I see as a failure of everyone writing or speaking on the matter to make a decisive case (due to the paucity of evidence in any direction), but I am frankly peeved at the standard position – that mythicism is nonsense because no mainstream biblical studies or religious studies academic doubts there was a historical Jesus.

Now, I hardly need to point out at this point in my post that such an “argument” is one big bandwagon fallacy (as well as an argument to authority, but I’ll leave that one for some other day). It is telling a questioning undergraduate to sit down and shut up, pulling rank, asserting the primacy of one’s subdiscipline, and being an arrogant twerp, all at once. These are all things I despise and oppose.

So I have a certain sympathy for the mythicists as underdogs. That doesn’t mean they are right – they still have to make a case, and so far no smoking gun has appeared – but they have a decent case that is just as strong as the default one.

So why do they get such a hostile reception? Why the flippant and repeated use of the bandwagon fallacy in response (occasionally laced with a choice insult about one’s employment prospects, educational background, and sanity)?

Well, let’s return to rhetcomp for a moment. The most telling and long-lived idea in rhetcomp is process pedagogy – the belief that writing is a “process” rather than a “product” and should be taught accordingly as a series of repeating and mutually informing steps instead of emphasizing the text that results. Now, feel free to correct me if I’m wrong, but I can’t think of a single instance of a “process” compositionist slapping down anyone who challenged or questioned process by saying, “The vast majority of composition academics support process theory. Therefore, your argument is a fringe belief and not worthy of a full treatment.” If such a pretentious mandarin exists, please send me a citation, but I don’t think one does.

Now, at the same time, there is that old chestnut mentioned before – “best practices” – that is used instead to enforce consistency. But as it turns out, “best practices” is mostly political cover, because it can mean whatever the instructor wants it to. Composition is a field full of rugged individualists. Some are old-school grammar mavens, some are process fanatics, some are post-process theorists, others are expressivists, and still others (really most) defy easy categorization. We know how to selectively cite. Some of us resist this, of course, but not all – not even most.

Back to the historical Jesus. There is a great wiki page that has collected countless putdowns of mythicists:

https://en.m.wikiquote.org/wiki/Christ_myth_theory

(They are all down near the bottom.)

Perusing them will reveal that they are basically all variants of the same technique: bandwagon fallacy + insult to education, occupation, or sanity + optional ridiculous comparison to Holocaust denial.

Why are they all the same? Why so prevalent?

First, there is no downside. Picking on mythicists is a risk-free power projection. It’s functionally no different than a bunch of jocks stuffing a nerdy kid into a locker. I have more power than you, so in the locker you go. There is no penalty.

Second, and more fundamentally, the nerdy kid is an existential threat. He represents a counterargument to the jocks’ primacy – a reminder that, outside the artificial world of the school, logic and curiosity might trump muscle. Similarly, the biblical studies folks know their authority is severely limited outside of academia. Outside of it, free thought reigns. Can’t have that. The existing pecking order must be maintained, at least temporarily. In the locker you go.

In a perfect world, biblical studies academics would lay open the question of a historical Jesus. But in order to do that they would have to open their minds. And if you think the average person has trouble with that little task… well. It’s not a question of a threat to the existence of the discipline. Opening up the question would doubtless lead to an explosion of relevant literature. It would be good for the field, showcasing at last a bit of historical respectability.

But the possibility is a clear threat to individual egos – which is why I think the jock-bully comparison is apt. There is nothing more fragile than a bully’s ego. It has to be constantly fluffed and pampered like Donald Trump’s pseudo-hair, otherwise it falls apart. Why? Because, ultimately, there isn’t much under the combover. There is no defense for a historical Jesus that doesn’t special plead Christian sources – which brings me to my favorite example.

Exhibit C – The Book of Mormon

The non-Mormon academic consensus is that Joseph Smith, the founder of Mormonism, was a fraud. The Book of Mormon was not written from golden plates handed over by the angel Moroni, but cobbled together from 19th-century mythicism and the KJV. The jocks are very clear about this.

However, there is another body of academics who call themselves experts on the Book of Mormon – and they are all Mormons. They have all kinds of arguments supporting the authentic nature of the text, including sworn eyewitness statements – the famous “Three” and “Eight” – to the existence of the golden plates, and literary analysis showing its originality (check out Orson Scott Card’s defense sometime – it’s fascinatingly doltish).

So there is a problem here, namely that there is more historical evidence for the inspired composition of the Book of Mormon than there is for Jesus – despite the fact that the form of the offered evidence – multiple “eyewitnesses” – is basically the same. And yet the mainstream historians make quick sport of Smith, and defend Jesus’s historicity to the death.

How, you might wonder, can they so easily expose the recent founding of a religion as a fraud, yet secure certain historicity for someone supposedly dead for nearly two thousand years, for whom we have no reliable non-Christian attestation?

The reason the dice keep coming up seven and eleven is not the incredible luck of biblical studies. It’s because the dice are loaded. And if you point this out? Well, the majority of academics support X. Back in the locker, you.

One more thing.

Another trait I have noticed in quality scholars, as opposed to average academics, is that they almost never defend anything. Instead, they assault. It might be an unexplored area, an old position or subject that has been neglected, or a trend that has spiraled out of control – but they are always aggressive, constantly stalking and pouncing like half-starved tigers, relentlessly seeking improved understanding.

Playing defense is, after all, the slow death of anything resembling intellectualism. You trade a life of seeking new ideas and understanding for the apologetic goal of preserving the beliefs of the past, usually in exchange for minor power of some sort – employment, tenure, social respectability, money – the usual earthly rewards. Maybe you get paid in spiritual coin, but either way, it sounds like a devil’s bargain to me.

But what do I know? I’m just an English professor, of questionable sanity, and probably deny the Holocaust in my spare time. My arguments couldn’t possibly have any merit. I’m a member of the lunatic fringe – a crackpot, a veritable crank, a babbling child talking of adult things he couldn’t possibly comprehend.

And that is how the bandwagon fallacy is essentially the ad hominem fallacy in another guise; by elevating the group, it savages the individual. This is why it deserves the fiercest opposition we can muster.

On Fire

It is March 6 and I have sent out two articles for review this semester already. I have another draft nearly ready, due to send out April 2, and another April 15. With December’s piece still out, I will likely have five articles under review by April. That is a new record. My plan to front-load the writing this year in anticipation of Baby #2 is working very well.

Trying to get in as much Kingdom Come: Deliverance as possible after L goes to bed – it’s a great game.