Are We To Blame For Trump?

As I write in October 2018, criticism of Donald Trump’s competence and respect for the law as President of the United States has ceased to be a partisan affair and has become a duty of the citizenry. But Trump, I’d argue, is only a symptom of a larger problem. From my perspective as a university professor, colleges haven’t been successful enough at liberal arts education in the last 40-50 years to prevent a Trump-like political event.

Consider these numbers.

First, 50% of voters in the 2016 exit polls claimed a college degree or higher, with another 32% claiming “some college.” Pew has corrected this to 37% of voters holding degrees. Either figure is higher than the national share of college degree holders, which was 33.4% as of 2016. Overall, 39% of registered Democrats have degrees and 31% have some college experience; 28% of registered Republicans have degrees, with another 35% having some college experience. Thus, I submit that less than one-third of the electorate had no college experience, one-third had some, and one-third graduated. I further suggest, then, that the majority of voters had encountered the basic required curriculum of almost any college, including a composition/writing course like the ones I teach.

Second, according to Pew, among white voters with a college degree, Clinton took 55% to Trump’s 38%, while initial exit polls claimed the reverse, with Trump winning 49%-45%. Overall, among all college graduates, Clinton took 52% and Trump 42%, with a gender split among whites: white women with degrees went Clinton 51%, Trump 41%, and white men with degrees went Trump 53%, Clinton 39%. I cannot find numbers on non-white degree holders. I find these numbers incredible, whether you favor Pew or the exit polls.

Third, the default explanation that Trump voters were left behind economically is partially mistaken; rather, “growing domestic racial diversity and globalization contributed to a sense that white Americans are under siege by these engines of change” – a polite way of saying those same voters tended to be (but were not necessarily) racist, anti-immigrant, and isolationist.

Fourth, there were about 18 million college-degree-holding Trump voters, by my estimate, based on 36% of degreed voters being affiliated with the GOP. Consistent with the third point, they tended to view diversity as threatening, immigrants with fear, and their culture – predominantly white – as under siege.
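
To spell out the arithmetic (the roughly 137 million total ballots cast in 2016 is my own added figure, not from the sources above): 137 million votes × 37% with degrees ≈ 51 million degreed voters, and 36% of those ≈ 18 million.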

These four points form prima facie evidence that college, as the supposed champion of critical thinking and citizenship, has been a crapshoot at fostering either. If those core courses, like composition, had reliably done the citizen-building job they claimed to do, the number of degree holders voting for Trump would be much closer to zero. This failure is even more apparent when factoring in the millions of graduates who did not vote at all; turnout for college-educated citizens was about 70%, and for post-graduates about 80%.

Writing classrooms in 2016 were not the lone culprit, of course; this failure to vote against an authoritarian candidate has its deep origins in previous decades, as most degree recipients got their degrees many years ago. Still, past Republican candidates – Romney, McCain, Dole, the Bushes, Reagan – were all moderates, worthy of some democratic consideration, compared to Trump’s odious strongman act.

I could blame history or philosophy or political science – how can one get a post-WWII college degree without knowing that electing an authoritarian demagogue is undesirable? But no. Few undergraduates take many courses in these disciplines, while exposure to composition is almost guaranteed. My discipline must share some blame, too. We could have done more.

I used to think my teaching was formative of critical thinking and ethics and built at least a motte and bailey defense against the worst excesses. Writing needed to be taught to all comers as a communicative civil right. All that seems dangerously stupid now. Increased writing skill does not magically lead to responsible citizenship. If you knew 42% of your composition class was going to sit through your citizen-building pedagogy and vote for Donald Trump anyway, would you not change your strategy? Or would you “do your job” and “teach writing” like thousands of others, especially as an adjunct or lecturer without a reasonably secure job or control over your curriculum?

Repeatedly, we have thrown the difficult and lengthy task of teaching skilled writing to instructors who were underprepared, underpaid, and overworked. When we surrendered collectively and unconditionally to the conclusion that the task was not important enough for the best-trained, best-paid, and best-motivated instructors – who got to become “scholars” with minor teaching responsibilities – that was when the seeds were planted. Now the entire country pays for our neglect: a constitutional crisis that makes Nixon look like a paragon of integrity. If we could have taught just 1% more responsibility – just 1% – Trump would not be president.

Facing our miserable 58% showing (and I refuse to count those college degree holders who didn’t vote for Trump but didn’t vote at all; that’s sin by omission), we could salvage our idealistic faith in citizenship-building with a dose of realpolitik. Yes, I have the glimmerings of a solution. I am still working it out, but I think writing classrooms, at least, need to explore and learn the techniques of the direct opposites of “ethical” citizenship – falsehood, obfuscation, and emotion. Our link between the teaching of writing and the promotion of citizenship has clearly failed to prevent the development of “anti-citizens” who willfully voted a demagogue into the highest office of the land without critical reflection, undercutting the purpose of the system.

We could teach writing as a neutral tool used for good, evil, and all the gray points in between, as much as we teach the practice of democracy and the performance of citizenship – a marriage of realpolitik and idealism. We could study how to compose “unethical” communication through not just the increasingly prevalent examples but also practice, and thus stress the real-world consequences of rhetoric and writing used for nefarious purposes, particularly in civic/political contexts, using the lessons of history – and starting with Trump as Bad Example #1. We have to stress the consequences of dishonest communication and condemn them when we see them.

Or is it too late? Have we bled out from a self-inflicted wound, and are my musings here part of the last flickers of a dying brain? Certainly, waiting passively for Robert Mueller to save America is a losing bet. The poison has settled in, and the problem is now long-term. Behind Trump is Pence, and behind Pence are other emboldened strongmen, many overseas on parallel tracks. Times are dire.

A college education, on the front lines of voting, may be the best hope for holding the democratic line, but blind idealism, our old pedagogical strategy, is not enough in the face of an evil that conceals its true nature all too well. There are many “anti-citizens” out there who think Trump is the second coming. Lower taxes, reduced immigration, tough trade talk, white male Supreme Court justices, racism and sexism carefully enshrined – all the little things they want, at what they think is a great price: their souls, bundled with the future.

You may note that I used the word evil. I did so purposefully. This is a path of evil we’re on. The election of Trump in 2016 was not a blip. It was a game-changer, a culmination of decades of poor education and careful politicking. Whatever happens in the midterms next month, even a Democratic takeover of both the House and the Senate, will not reverse it. It takes decades to make this kind of mess, and it will take decades to change it. I wonder, though, if we have decades left.

In Defense of Cheap Rhetoric

I agree with Meghan McCain’s recent eulogy for her father that Donald Trump partakes of “cheap rhetoric.” I would go further, though, and say his style is the cheapest kind of cheap. But I am also compelled, as an academic who studies rhetoric, to defend the word ‘rhetoric’ and even ‘cheap,’ when used to describe rhetoric.

Rhetoric as a word comes from ancient Athens, where philosophers such as Plato and his student Aristotle, among many others, were deeply interested in how Athens’s democracy functioned through public persuasion, which they called ‘rhetoric.’ It had a bad name then, too, as empty and deceptive discourse, but some, such as the philosopher Isocrates, thought skill at rhetoric was at the very core of being a citizen. After all, it is hard to govern, especially in a citizen-state like Athens that chose its public officials by lot (essentially at random), if you cannot get people to accept your positions and ideas.

Aristotle recognized rhetoric as happening only in specific venues such as the assembly (Athens’s thousands-strong forerunner of our Congress), jury trials, and the eve of battle. Most rhetoric and communication scholars today have expanded upon these categories, though, and subscribe to some version of a “big rhetoric” concept, which holds that all communication is rhetorical – in other words, persuasive – down to the simplest “hello” or “how are you doing?” asked in public. Every instance of communication, according to this model, is trying to get its audience to do something, even if just to pay attention and accept that what the speaker or writer is saying is important. Modern advertising is probably the easiest manifestation of this idea to understand. Rhetoric is always a curator, selecting and deciding what to present.

But we do not need the “big rhetoric” perspective to see that Meghan McCain is a wielder of rhetoric herself, and her powerful eulogy is a great example of what rhetoricians like myself describe with a ten-dollar Greek word, epideictic: a ceremonial rhetoric that “praises or blames” at occasions like funerals or church services. An epideictic speech like a eulogy celebrates the values we hold and assaults the ones we detest. Mrs. McCain does both with her carefully chosen words. The subject of her blame is obvious.

Watch the video of her speech online. Her rhetoric is not cheap. Like her father, who had earned a massive amount of ethos – a Greek word that blends character and reputation – through his biography and statesmanship, Mrs. McCain has her own powerful ethos simply as a daughter who has lost a father.

I could mention the rhetorical techniques she used in her speech – her use of repetition to stress McCain’s fatherhood, her impassioned delivery, her stinging rebuke of Trump’s lame fundamentalist slogan – but I will not. I will just mention paralipsis, the technique I just used, where I said I would not say something, but did anyway. It is one of Trump’s favorite devices – an inherently dishonest maneuver. In other words, cheap rhetoric.

Unlike Donald Trump, who has acquired everything he has with money, John McCain, while far from poor himself, or perfect, had many qualities that cannot be purchased. These qualities were learned through painful experience, won through tough conflict, and expressed through measured deeds. But the one I want to mention here, which ties most closely to the Athenian idea of rhetoric, is his renowned ability to be bipartisan in Congress. Like Isocrates, who, as I mentioned earlier, tied skill in rhetoric directly to ideal citizenship, McCain understood better than perhaps any currently serving member of Congress that government cannot function effectively without eloquence that aims to help all (or at least most) and not just some.

In this political age, ideas are rarely viewed on their merits. They are automatically checked for approval against one’s party line. Whether the idea is helpful is irrelevant. Good rhetoric, in the Isocratic view, is to be employed in support of the citizenry, not in the maintenance of power. And that is the difference between John McCain and Donald Trump – one sought to help Americans and embody virtue; the other seeks to defraud Americans and embodies vice. That’s another Greek device, antithesis, of course.

Perhaps I am reading too much into Meghan McCain’s remarks. Perhaps she subscribes to the usual view of ‘rhetoric’ as always being deceptive and cheap. But her father stood for decades in Congress as an example of how rhetoric is supposed to work. So, I hope she does not think only that. And, by writing this, I hope you do not think only that either.

The good and the bad and perhaps some ugly

My losing streak for journal submissions has finally ended. Two strong R&Rs that look promising have arrived, along with another crushing and disappointing rejection in which I apparently managed to erase scholars of color, queerness, and other diverse groups. Or at least that’s what I’m told.

I remember the first time I got a competent rejection. It was in 2008, while I was finishing my doctorate. I sent a piece to Rhetoric Review and the longtime editor, Theresa Enos, sent back a short letter that more or less said, “Sorry, but there’s not enough there there.”

I thought about this, and I realized she was right. The thesis was of the “gee, this is interesting” variety and ultimately not useful to anyone. After a lengthy revision, the next journal started with what seemed to be a solid R&R but turned into a bait and switch when a third reviewer thought they had found the critical flaw in my argument (they hadn’t; they just hadn’t closely read the two-page rebuttal in the middle of the piece). The piece finally hit at the third journal. This added up to a three-year delay in publication, of course. I don’t know how folks who don’t write essentially evergreen arguments manage to sustain careers.

Speaking of Rhetoric Review, until recently I didn’t know it was the first rhetcomp journal to require peer review, in the early 80s. Which makes me think – most of the classic scholarship of rhetcomp is from the 70s. I wonder.

Some professional updates.

Another rejection came in, this one for a co-authored piece.

I can’t figure out what it was rejected for, though; the editor gives no hint, and the reviewers didn’t find anything serious, just trivia. One of them even praises it in their opening (misstating the thesis, of course) like it’s an acceptance or at least an R&R, and doesn’t undercut it afterward.

So my overall reaction is… huh. Like the other two rejections I’ve gotten this year, there is little evidence they understood the argument at all. I have communicated to my co-author, a grad student, that this is not surprising, and the long process continues.

In brighter news, I just completed a very lengthy, complicated R&R that I have been sweating over for months, so the same day one comes back, another goes out! There is no other productive response to rejection, especially silly ones, but persistence.

New name?

For now I have pointed both the ‘rhetoricalcritic’ and ‘badrhetoric’ domains at the blog. But this kind of avoids the issue – namely, should I change the name of the blog?

‘badrhetoric’ was kind of a lowest-common-denominator choice back in the day (2006), but now I am slightly less snarky, and I have carved out a small niche where I publish. I am tempted to just make the switch outright. Any thoughts?

Bandwagon arguments in academia

One of my pet peeves when reading academic arguments is the persistent and lazy use of the bandwagon fallacy – i.e., “many people think X, so X is right.” The academic version, though, is more along the lines of “The vast majority of qualified scholars in this subfield think X, so X is right.”

Where should I begin my critique, I wonder? That popularity is no guarantee of validity? That popular ideas deserve to be interrogated just as much as unpopular ones? That the unprofessional arrogance displayed by using this fallacy is only trumped by its stupidity? That taking such a position attempts to cut off future productive scholarship at the knees? And, perhaps finally, that using it is a sure sign of the weakness of one’s position?

Yes, this is a target-rich environment, to be sure. Let’s try some examples.

Exhibit A: “Best Practices”

If I had a nickel for every time someone appealed to “best practices” in my semi-home field of rhetoric and composition and its sister field, technical communication, I would be able to take my family out to a series of nice dinners.

Behind the concept of “best practices,” it turns out, is a crude bandwagon argument. To follow “best practices” in teaching in tech comm, for example, is to use the techniques that are well attested in the scholarship, supported by “name” academics whose “names” can be dropped liberally in conversation, and that are ultimately safe and uncontroversial.

Screw that.

I don’t care if 99.9% of the members of NCTE (National Council of Teachers of English, BTW) support a given mode of instruction. I only care about whether or not it works. Show me whether or not it works – not how popular it is, or what academics happen to endorse it. Give me evidence, not sponsorship.

I have known very few real top-flight scholars in my career thus far. If they have something in common, though, it would be that none of them follow trends or take a poll before they plant a flag. The pursuit of knowledge eschews such petty and empty considerations – and so does logic. Someone dedicated to such an ideal would never use popularity as evidence of anything except popularity. Academic arguments are to be evaluated on their own merits, not on whether or not they are in season.

So, in short, while “best practices” might have once had a more innocent connotation, now it just makes me irritable. It represents the worst of academia, when it is at its pettiest – when it is political.

Exhibit B: A Historical Jesus

I’m gearing up to teach the Synoptic Problem in “Studies in Religious Texts” again, so this has been on my mind of late. One of the subtopics that naturally comes up with the SP is how much of the gospel materials are based on any historical Jesus – which then leads to whether there was a historical Jesus, and if so, what we can say about him.

“Mythicist” arguments, which hold that Jesus has no historical basis and is instead a kind of assembled myth, are as old as the hills, dating back to the first pagan critics of Christianity. I’m agnostic on the issue because of what I see as a failure of everyone writing or speaking on the matter to make a decisive case (given the paucity of evidence in any direction), but I am frankly peeved at the standard position – that mythicism is nonsense because no mainstream biblical studies or religious studies academic thinks there wasn’t a historical Jesus.

Now, I hardly need to point out at this point in my post that such an “argument” is one big bandwagon fallacy (as well as an appeal to authority, but I’ll leave that one for some other day). It is telling a questioning undergraduate to sit down and shut up, pulling rank, asserting the primacy of one’s subdiscipline, and being an arrogant twerp, all at once. These are all things I despise and oppose.

So I have a certain sympathy for the mythicists as underdogs. That doesn’t mean they are right – they still have to make a case, and so far no smoking gun has appeared – but they have a decent case that is just as strong as the default one.

So why do they get such a hostile reception? Why the flippant and repeated use of the bandwagon fallacy in response (occasionally laced with a choice insult about one’s employment prospects, educational background, and sanity)?

Well, let’s return to rhetcomp for a moment. The most telling and long-lived idea in rhetcomp is process pedagogy – the belief that writing is a “process” rather than a “product” and should be taught accordingly as a series of repeating and mutually informing steps instead of emphasizing the text that results. Now, feel free to correct me if I’m wrong, but I can’t think of a single instance of a “process” compositionist slapping down anyone who challenged or questioned process by saying, “The vast majority of composition academics support process theory. Therefore, your argument is a fringe belief and not worthy of a full treatment.” If such a pretentious mandarin exists, please send me a citation, but I don’t think one does.

Now, at the same time, there is that old chestnut mentioned before – “best practices” – that is used instead to enforce consistency. But as it turns out, “best practices” is mostly political cover, because it can mean whatever the instructor wants it to. Composition is a field full of rugged individualists. Some are old-school grammar mavens, some are process fanatics, some are post-process theorists, some are expressivists, and others (really most) defy easy categorization. We know how to cite selectively. Some of us resist this, of course, but not all – not even most.

Back to the historical Jesus. There is a great wiki page that has collected countless putdowns of mythicists:

https://en.m.wikiquote.org/wiki/Christ_myth_theory

(they are all down near the bottom).

Perusing them will reveal that they are basically all variants of the same technique: bandwagon fallacy + insult to education, occupation, or sanity + optional ridiculous comparison to Holocaust denial.

Why are they all the same? Why so prevalent?

First, there is no downside. Picking on mythicists is a risk-free power projection. It’s functionally no different than a bunch of jocks stuffing a nerdy kid into a locker. I have more power than you, so in the locker you go. There is no penalty.

Second, and more fundamentally, the nerdy kid is an existential threat. He represents a counterargument to the jocks’ primacy – a reminder that outside the artificial world of the school, logic and curiosity might trump brute status, and the jocks are relatively powerless. Similarly, the biblical studies folks know their authority is severely limited outside of academia. Outside of it, free thought reigns. Can’t have that. The existing pecking order must be maintained, at least temporarily. In the locker you go.

In a perfect world, biblical studies academics would lay open the question of a historical Jesus. But in order to do that they would have to open their minds. And if you think the average person has trouble with that little task… well. It’s not a question of a threat to the existence of the discipline. Opening up the question would doubtlessly lead to an explosion of relevant literature. It would be good for the field, showcasing at last a bit of historical respectability.

But the possibility is a clear threat to individual egos – which is why I think the jock-bully comparison is apt. There is nothing more fragile than a bully’s ego. It has to be constantly fluffed and pampered like Donald Trump’s pseudo-hair, otherwise it falls apart. Why? Because, ultimately, there isn’t much under the combover. There is no defense for a historical Jesus that doesn’t special-plead Christian sources – which brings me to my favorite example.

Exhibit C: The Book of Mormon

The non-Mormon academic consensus is that Joseph Smith, the founder of Mormonism, was a fraud. The Book of Mormon was not written from golden plates handed over by the angel Moroni, but cobbled together from 19th-century mythicism and the KJV. The jocks are very clear about this.

However, there is another body of academics who call themselves experts on the Book of Mormon – and they are all Mormons. They have all kinds of arguments supporting the authentic nature of the text, including sworn eyewitness statements – the famous “Three” and “Eight” – to the existence of the golden plates, and literary analysis showing its originality (check out Orson Scott Card’s defense sometime – it’s fascinatingly doltish).

So there is a problem here, namely that there is more historical evidence for the inspired composition of the Book of Mormon than there is for Jesus – despite the fact that the form of the offered evidence – multiple “eyewitnesses” – is basically the same. And yet the mainstream historians make quick sport of Smith, and defend Jesus’s historicity to the death.

How, you might wonder, can they so easily expose the recent formation of a religion as a fraud, yet secure certain historicity for someone supposedly dead for nearly two thousand years, for whom we have no reliable non-Christian attestation?

The reason the dice keep coming up seven and eleven is not the incredible luck of biblical studies. It’s because the dice are loaded. And if you point this out? Well, the majority of academics support X. Back in the locker, you.

One more thing.

Another trait I have noticed in quality scholars, as opposed to average academics, is that they almost never defend anything. Instead, they assault. It might be an unexplored area, an old position or subject that has been neglected, or a trend that has spiraled out of control – but they are always aggressive, constantly stalking and pouncing like half-starved tigers, relentlessly seeking improved understanding.

Playing defense is, after all, the slow death of anything resembling intellectualism. You trade a life of seeking new ideas and understanding for the apologetic goal of preserving the beliefs of the past, usually in exchange for minor power of some sort – employment, tenure, social respectability, money – the usual earthly rewards. Maybe you get paid in spiritual coin, but either way, it sounds like a devil’s bargain to me.

But what do I know? I’m just an English professor, of questionable sanity, who probably denies the Holocaust in his spare time. My arguments couldn’t possibly have any merit. I’m a member of the lunatic fringe – a crackpot, a veritable crank, a babbling child talking of adult things he couldn’t possibly comprehend.

And that is how the bandwagon fallacy is essentially the ad hominem fallacy in another guise; by elevating the group, it savages the individual. This is why it deserves the fiercest opposition we can muster.

Brief Rant

I have been feeling depressed lately about my research and publishing prospects. I’ve accomplished a fair amount since I finished my Ph.D., but I don’t feel professionally or emotionally fulfilled by any of it.

I haven’t published anything since early 2016. Much of my time in between has been taken up by two articles, one of which has been rejected three times by good journals despite interest, and the second of which is promising but slow to develop. I’m not sure what I’m doing wrong, but whatever it is, I’ve slowed down.

It would be easy to attribute this decline in production to my son Luke, who turned 2 last April. But I don’t. I generally gain strength from him. He makes me laugh.

It would also be easy to attribute this decline to the fact that I have started to write edgier stuff in my articles than in my previous pieces.

My dissertation (aside from the first chapter, which appeared in Rhetorica) remains unpublished, I have found, because its conclusions don’t align with contemporary Christianity or conservative biblical criticism. I have shopped it everywhere and found no takers. I consider this a massive failure on my part, even though I know it isn’t. It’s a people problem.

To sum it up, my diss argues that pretty much the entire ‘life of Jesus’ part of the Gospel of Mark (everything besides the Passion narrative – the arrest and the crucifixion) is a work of rhetorical fiction. This means Judas is a fictional character inserted for drama, John the Baptist (while a real person!) never had anything to do with Jesus, and all of the post-resurrection appearances are late additions. Those three observations are chapters. Ultimately, I hold the gospels are not four buttressed eyewitness accounts, but competing fictional narratives that openly plagiarize each other in a quest to control the Jesus narrative – which was created by the author of Mark in the first place!

In retrospect I should have seen the problem, though – it threatens too many people. Even if I point to all the form criticism that basically spells it all out, it doesn’t matter. It’s too edgy, even though I find it to be remarkably commonsensical. I wonder, though, if I should try to build up to it through a series of smaller articles. I have only toyed with sending out the individual chapters. Chapter 1 found a home, but only after many years.

Anyway.

There is also my half-secret hobby as a novelist. I have written three larger works of fiction. The first was about 60,000 words and what I would today call fan fiction. Practice. The second was 190,000 words and much better; it had an agent, but nothing happened. I self-published it back in 2003, which was a mistake. Very few readers. That bummed me out for over a decade. I’ve read far worse in print, so that’s another disappointment.

Two years ago, though, I wrote another, about 80,000 words. Thought I had a winner. Sent query letters to over 100 agents. No bites. Abandoned the project. Then I started writing a sequel, which was odd behavior, even for me. I felt like the characters could have another go. This has made me think that I should approach publishers directly. But I feel frozen by the likely outcome.

I think I’ve been burned too much. There is only so much negativity that I can bear, and it’s starting to wear. I need a win occasionally to justify continued effort. I just don’t know right now where I’m going to get one. I have a lot of germinal article ideas, but there are so many that it’s hard to pick just one and bang it out.

This feeling will probably pass. I just have to find a way around it.

Try Again

So I was reading this piece in the NYT: “We Aren’t Built to Live in the Moment.”

I have some problems with its theory of mind:

Your brain engages in the same sort of prospection to provide its own instant answers, which come in the form of emotions. The main purpose of emotions is to guide future behavior and moral judgments, according to researchers in a new field called prospective psychology. Emotions enable you to empathize with others by predicting their reactions. Once you imagine how both you and your colleague will feel if you turn down his invitation, you intuitively know you’d better reply, “Sure, thanks.”

Ugh. I can see why this guy is all excited about this, but he’s missing some crucial ingredients. We may be planning creatures, true, and that is important, but our ‘plans’ are made up of present judgments that come very, very quickly, and the past is constantly bubbling up to influence those present judgments. To say we are prospective creatures is to oversimplify – rather, we are present creatures with complex pasts AND futures. Ask a victim of abuse or trauma whether they spend all their time thinking about the future, or whether someone from a poor background and little education (that pesky past) thinks about “the future” the same way as someone from a middle-class background and a decent education.

Emotions are not even remotely understood, but the idea that they guide future behavior is a good starting point.

This article reminds me of one of my pet peeves, which is that we are in the dark ages of understanding the brain. Let me give an example.

At one point in my research in grad school I was very interested in how people read. Not how to teach people how to read, but how reading worked in the brain. An analogy might be wanting to know how an internal combustion engine worked instead of wanting to know how to drive a car.

So I read a lot of reading psychology. I was massively disappointed. I discovered that no one in the field had more than a vague idea of how reading worked. The brain was effectively a black box to them – the inputs and outputs were known, but as for what happened between people’s ears, they didn’t have the foggiest. Lots of theories, no evidence. We are literally sentient beings with brains, and we have little idea how our brains work, at all.

I have come to find that pretty much all research into the brain is at this state. We are not much beyond poking physical regions of the brain with fingers and electricity to discover what does what. So I look upon supposed new avenues as total shots in the dark. Again, this is the dark ages.

Ultimately I don’t see any major innovations in this area until we attempt the very thinkable project of building an artificial brain. THEN we will know how one works – or we won’t, because too much of what a brain does is emergent. Just wedding “emotions” to a computer isn’t going to do it.

To leap into my field for a moment, rhetoric is largely a study of how decisions are made based not on ironclad logic, but on emotions. When Mr. Spock says on Star Trek that “it is not logical,” he is mistaken – if he were really telling the truth, which Vulcans are supposed to do, he would say, “it is not emotionally satisfying to me at the present moment.” That’s not nearly as quotable, of course, and pointing out that EVERY time Spock says he is being “logical,” he isn’t, would take me all day.

Suffice it to say, it’s true we are emotional creatures, but our past influences those emotions, and we also make WRONG decisions very often. Frankly, the emotions are not very good at making decisions, especially when the RIGHT answer is not blindingly obvious. Self-persuasion helps, but often turns into rationalization, like Spock and his supposed “logic” when all he really has is certain values.

Randomness and Teaching

Well, Trump won, and I suppose I will comment on that at length at some point. But I want to discuss something else.

I was thinking this morning about the randomness inherent in making decisions. Think of a path that forks left or right with no clues as to what follows – what makes you choose left or right? SOMETHING does. Back when I knew something about programming – 1990? – we would use random number seeds based on the system clock if we needed a semi-random number. I have to wonder if the circadian system offers the brain a similar out.
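
For anyone who never did this, here is a minimal sketch of the old clock-seeding trick, in modern Python rather than whatever we actually used back then – the names and details are illustrative, not a reconstruction of my 1990 code:

```python
import time
import random

# Derive a seed from the system clock - the classic way to get
# "semi-random" numbers without any hardware entropy source.
seed = int(time.time() * 1000)  # milliseconds since the epoch
rng = random.Random(seed)

# The fork in the path: with no other clues, let the clock-seeded
# generator pick a direction.
direction = rng.choice(["left", "right"])
print(f"seed={seed}, go {direction}")
```

The “choice” is fully determined by when you happen to ask – which is exactly why I wonder whether the circadian system gives the brain a similar out.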

That (random?) thought said, I am a general fan of randomness when teaching. I don’t have a lot of formal structure, usually, other than a vague ‘we are going to discuss X,’ or ‘we are going to do this exercise together to master Y,’ or ‘we are going to play a game in order to learn Z’. I leave the creation of teachable moments to chance; I figure the friction created by me, the students, and the material rubbing together is going to create sparks that I can then turn into a fire. Once I have a fire going, then the class takes on a life of its own and all I have to do is enjoy the heat.

I do prepare graduate courses differently than undergraduate ones, though. I waltz into undergraduate ones and lecture extemporaneously, as I know the material really well. For graduate classes, even though I still know the material, I usually prepare a page or two of bullet points and questions that I want to hit. It’s more of an emergency blanket; if the class discussion slows or meanders, I have my page to lean on to restart things.

So I guess what I’m trying to get at is that I rely on a certain degree of unpredictability when teaching. I make a lot of teaching decisions on the fly and instinctively rather than planning them out. Planning is valuable, and I sometimes do a fair amount of it, but it’s become necessary for me over the years to react quickly to conditions in a class.

FYC is different, though (I mostly teach upper-division PW and rhetoric). FYC students need structure; my more freewheelin’ style doesn’t mesh well with freshmen who come to the course feeling lost technically, socially, and materially. They don’t do well with abstract thought or ethical dilemmas. They don’t necessarily know how to answer, or ask, good questions. They are not as comfortable with ambiguity as I am. So I have to adjust and break the course into discrete, predictable units. It doesn’t suit my personality, but adaptability – even to randomness – is the essence, I think, of decent teaching.

The impossible task

One of the more interesting things that I noticed when I first started studying rhetorical theory is that some rhetorical situations are impossible tasks. Everyone, I think, at one time or another, has encountered an audience that cannot, or, rather, would not, be moved.

The distinction between ‘cannot’ and ‘would not’ is important. If an audience cannot be moved – if there is some gulf of values that somehow cannot be crossed by any conceivable method – then that is one thing, to say that rhetorical power has limits. But if an audience refuses motion – if it chooses not to move when it could have – that implies something else: namely, that the audience has all the real power, and we should speak less of rhetorical power and more of audience power. Rhetoric becomes more of a curious byproduct – a residue of an interaction – than a means to an end.

So if audiences can choose not to be moved, all rhetorical situations are impossible tasks. People cannot be persuaded – rather they choose to persuade themselves in the light of certain situations or stimuli.

Where does this place the so-called persuasive speaker, the charismatic, the leader? Obviously some people can move others and are demonstrably better at it than others, right? So I think that the power to refuse movement is present but not always used, comparatively. It would require a mechanism that is the reverse of cognitive dissonance; that is to say, instead of rationalization in the face of dissonant input, there is a resistance to information that does make sense to the listener – an unwillingness to move, to listen, to process. I may be equivocating between “dissonant input” and “makes sense,” though.