All posts in category Education
Posted by Andrew Neuendorf on March 9, 2017
1. Don’t fear the Apocalypse. Emily Dickinson knew she was in the presence of poetry when she could feel physically the top of her head coming off. This was not a metaphor. It was a mystical experience. Incidentally, the word “apocalypse” (from the Greek apokaluptein) means “revelation,” and more specifically “to uncover,” as in lifting the top of a box. Surprise! It’s a present! Let’s not fear the apocalypse, but instead prepare our education system for transformation. Instead of exams, let’s assess students on whether or not the tops of their heads come off. Education should be about running a more complex, subtle operating system.
2. Suck less. There is this guy with a tug-boat who hauls icebergs from the North Pole to the Middle East to provide fresh water for billions. His job is easier than getting students to learn and getting teachers to teach. You can tug an iceberg to a desert and everyone will drink, but you can’t lead anyone to learning. I have yet to see a so-called education reformer address the fundamental problem with the education system, which is, put succinctly: school sucks. Boredom is the main currency of education, exchanged fluidly between teachers and pupils.
3. Create a new center. Where is the Andy Warhol of education reform, charging way out in front of the generals, the avant-garde? It was Warhol who created a new center out in the margins. He built a new camp that at first looked foolish, laughable, but soon became the new center. Prophecy. Then, after some time, his work became the status quo, until…look…here comes another Andy!
4. Be Useless. In The Idea of a University, Cardinal John Henry Newman creates a distinction between useful and useless knowledge, and then sides mainly with the latter. The Liberal Arts are the useless arts and, therefore, supremely useful. The merely useful fields of study are definitely useful, make no mistake, but they are not nearly useless enough. Chuang Tzu knew this, and so favored the disabled and crooked trees, and generally preferred to drag his tail in the mud rather than coming to court with sage advice for the king. Too few sages make the difficult decision to be useless. Too many decide to be useful, to claim a role in the established drama. Watch out for anyone chasing his destiny, submitting to fate, or following his dreams! Too often people dream of being useful. What’s the use in that? The earth, to pick one example, is completely useless. It doesn’t do anything. It plays a non-zero-sum game, and, even better, it’s totally unaware of itself, or at least can’t be bothered to submit the proper reports. The earth doesn’t care. It treats humanity like a straw dog. It does nothing, endlessly. See that oh-so-exquisite school of fish circling the coral? It dissipates, and then reconstitutes itself into various, ever-changing patterns. Constant adjustment, constant beauty, constant change. This is what we should be teaching our children: how to make beautiful schools. Of course, this requires rules and hard work. But mostly it means being useless and doing nothing.
5. End grades. If we treat students like rubrics, don’t be surprised if all they care about is grades, or, worse yet, if they don’t care about grades at all. The best students and the worst students are the ones who don’t care about grades. Students are not percentages, points, letters; they are not dollar signs, checked or unchecked boxes on rubrics. They are whole people and will respond as such if you treat them accordingly. A rubric is for a mechanic. This is what’s wrong with your car. This checks out okay. Transaction complete. Let me top off your fluid. If creating life-long learners is what we’re after, then why do we care so much if they get it right at the end of each three-month block? Let’s measure them in thirty years. See how well we did. Assess this: Dharma burning through Karma. Or, “We’ll change your brain, or your money back!” MRI instead of final exam. Replace the scantron with the brain scan. There are no grades in reality. There is only practice. The world is practice. God is practicing right damn now. Hey, Shakespeare, you forgot to finish that subplot with Polonius spying on Laertes in Paris. Minus 10 points on your little Hamlet play. Also, your main character has too many contradictions. Was he insane? Was he faking? It’s really unclear. Plus, I’m pretty sure you plagiarized, Shakespeare. I saw you looking over little Thomas Kyd’s shoulder.
6. Destroy Departments; Kill Majors. The new schools should soften all boundaries between genres, subjects, majors, departments, and degrees and instead orient student energy around direct action, creation, and experiment. The only reform necessary is a release and redistribution of energy. (Education reform! Ha! Was it ever formed to begin with?) The ever-shrinking art, music, physical education problem solved: do them all at once: climb and swing from ropes to splatter paint while listening to music and recording audio and video to edit into a film later. Or else we do all school work while walking 2.2 miles per hour on treadmills, à la Brain Rules by John Medina. Walking and writing. Perfect. Word art! Large-scale installation art made of language, maybe heavy lifting in there, too. Let’s throw all subjects together! Science and Home Economics and History: study the chemical composition of food and the history and culture of dishes and cuisines. History, Literature, Religion, Philosophy, Psychology, Astrobiology, Evolution…these are not separate subjects. Never could be. The inventor of the concept of “bits” thought of himself as neither physicist nor engineer. The writings of Emerson are neither essays nor sermons, nor in line with the normative categories of literature we might use to partition a syllabus: poem, play, fiction, non-fiction. What was Teilhard de Chardin writing? You might find him in the bookstore under philosophy, religion, paleontology? Joseph Campbell? Marshall McLuhan? Bucky Fuller? There is nothing liberal about partitioning knowledge into categories or majors. The globe cannot be divided into majors and minors, so neither can its consciousness. The university is the globe’s consciousness. Not, “What’s your major?” but what are you working on, thinking about, advocating, becoming? Not, “Where are you from?” but “Who are you now?” In order to change schools, you would have to change yourself, and no one wants that.
Socrates, at the beginning of Western Education, said, “Know Thyself!” and still, we do not listen.
7. No classrooms! Learning is the goal. Who cares about the vehicle? As soon as you set the times for a class period, you kill learning, which does not occur in 50-minute chunks at the appointed time. In school, out of school. In class, after class. Such ridiculous boundaries. Education has a design problem. Create whole learning environments, entire learning communities (not just two classes jammed together for 6 credits). I mean a whole learning world. Does the Internet exist? I mean, if the internet is everywhere, it is nowhere. It just is. If it’s in our cars, phones, brains, then it is an extension of life as we know it. Same for education, and doubly so for online education. It should be called just education, and then, not even that. There is no classroom, never was—don’t go to class—you are the classroom, the pupil, the teacher, the world, the universe, basic human consciousness is the university. The university is nowhere and everywhere, or else its center is everywhere and its circumference nowhere. I forget which one.
8. No more hoops and papers! Jump through the hoop! Get the piece of paper! No, let’s paper over the hoop and at least make them crash through it. Or shrink the hoop! Maybe expand its circumference beyond detection. Make the center of the hoop everywhere, the circumference nowhere! If you get your piece of paper, you will be prepared, at least, for the coming fascist onslaught. (Show me your papers!) If the paper is what matters, then the trappings of education matter. The book itself matters more than the content, more than the act of reading. Book as bludgeoning device. There is no teachable moment, only one continuous mistake. Shikantaza, shikantaza. Your assignment for next time: Build a new planet from scratch with your hands.
9. Charter for a New University (Based on Mirra Alfassa’s Auroville Charter)
—The university belongs to nobody in particular. It belongs to humanity as a whole. But to live in the University, one must be the willing servitor of the Global Consciousness.
—The university will be the place of an unending education, of constant progress and a youth that never ages.
—The university wants to be the bridge between the past and the future. Taking advantage of all discoveries from without and from within, the University will boldly spring toward the future realization.
—The University will be a site of material and spiritual research for a living embodiment of an actual human unity.
Posted by Andrew Neuendorf on January 22, 2014
Brief summary of Parts 1 and 2:
In Part 1, I revisited a puzzling 1968 study (supported by follow-up articles) that concluded teaching method has no impact on final exam performance. In Part 2, I explored a possible conclusion of these results: we should focus on learning, not teaching, and we should accept student effort and study time as the main difference makers in exam performance. This doesn’t mean that teachers do not matter. Instead, it means that pedagogy is less important than whether or not learning is actually happening. And, as I suggest in Part 2, cognitive research may help us learn how to learn.
Solving the Teaching-Learning Paradox
In some respects, the results of The Teaching-Learning Paradox are a miracle. Regardless of teaching method, despite the circumstances before them, no matter the quality of teacher or program, the students learned, and they demonstrated their learning consistently across time. On one level, the results of “No Difference” seem nihilistic. Nothing we do will matter! On another level, it reveals the power of student effort and learning. No matter what constraint, strategy, or limitation thrown at them, the students learned! And they performed the same each time. The students who wanted to put forth the effort and earn their “A’s” did so, in each study, regardless of method. No matter what you do to stop them, learning will happen! As Dr. Malcolm in Jurassic Park proclaims, “Life finds a way.”
So how can teachers teach in support of learning? In Jurassic Park, scientists alter DNA in order to breed females exclusively and restrict natural reproduction. Life finds a way (via mutations or some rare gender-jumping frog DNA) around these limitations, and natural reproduction goes on. Once we start to look at the results of The Teaching-Learning Paradox this way, we should see the results as positive. Student learning will not be stopped.
However, instead of designing classrooms as obstacles to learning, we should find a way to support the learning that wants to occur naturally.
Attention, Memory, Intensity
I do not have too many answers here, but I will introduce three interrelated terms as a start: Attention, Memory, and Intensity.
Daniel Willingham’s book Why Don’t Students Like School? provides a useful model for how the mind retains factual information. He’s careful to point out that factual information is not sufficient for learning, but is often the basis for real learning, since in order to apply concepts and make creative evaluations, we must have a database of knowledge from which to pull and make connections. His simple model of how memory works presents educators with an opportunity to figure out where the bad connections might be in teacher-student communication:
Willingham argues that in order for data from the environment to make the transition from temporary working memory into (somewhat) permanent long-term memory, two things need to happen: 1) said data must be subject to an intense level of attention while it is readily available in working memory, and 2) content previously stored in long-term memory must be pulled up and matched with the new data in order for it to permanently “stick.”
I would then couple this model with John Medina’s argument in Brain Rules that we most effectively store information in our memory when it is organized in a “top-down” manner. That is, instead of imagining a long list of vocabulary terms to be crammed into the brain like a scroll of paper inserted into a shredder, we should group and “chunk” information into concept-categories that are big enough to hold a lot of related knowledge. Think of the mind as a series of drawers (I like the phrase “junk drawer”). One drawer might be “Romanticism.” (I have a drawer like that.) Once that drawer has been established and the key concepts and terms defined, you can open it up whenever you want and dump more stuff in. Then, when you want to remember something related to it, you don’t have to go rifling through all of your junk. Just pull out the Romanticism drawer. Items in that drawer tend to stick together and have interrelated functions.
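The drawer analogy maps neatly onto a simple data structure, so here is a toy Python sketch of it. This is my own illustration, not anything from Willingham or Medina: the `ChunkedMemory` class, its method names, and the sample facts are all invented for demonstration.

```python
# Toy model of Medina's "junk drawer" chunking: memory as labeled
# drawers (categories), each holding a set of related facts.

class ChunkedMemory:
    def __init__(self):
        self.drawers = {}  # category -> set of related facts

    def establish_drawer(self, category, key_concepts):
        # Defining a category and its key concepts up front is what
        # lets later facts "stick."
        self.drawers[category] = set(key_concepts)

    def store(self, category, fact):
        # Mirrors Willingham's model: new data must be matched with
        # something already in long-term memory, or it never makes
        # the transfer out of working memory.
        if category not in self.drawers:
            return False  # nothing to connect to; the fact is lost
        self.drawers[category].add(fact)
        return True

    def recall(self, category):
        # Pull out the whole drawer instead of rifling through
        # all of your junk.
        return self.drawers.get(category, set())

mem = ChunkedMemory()
mem.establish_drawer("Romanticism", {"Wordsworth", "the sublime"})
mem.store("Romanticism", "Coleridge")   # sticks: the drawer exists
mem.store("Postmodernism", "Pynchon")   # lost: no drawer to attach to
```

The failed second `store` is the point of the sketch: without an established category, the new fact has nothing to bind to, which is the code-shaped version of content sliding out of working memory unretained.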
Both Willingham and Medina stress the importance of getting students to pay attention, not simply to be nodding and following along, but to pay attention in a particular kind of way. Willingham thinks the most effective kind of attention is when students are thinking about particular and important meaning. This echoes the “Why” study I discussed in Part 2, but also Medina’s concept-containers. I think all of this connects back to intensity. Ideas and meaning matter. They also require more thought (again, as in the “why” study in Part 2), force the brain to construct its own connections, and to “turn on.”
So much of teaching comes back to the construction of analogies. When you’re trying to make a new concept “stick” you should look for some piece of knowledge already stored in the students’ long-term memories and compare the new concept to it. They need some sort of connection. This is also an opportunity for them to construct their own analogies, thereby activating their brains. Similar to asking “Why do you suppose that is?” You can ask, “Does this idea sound like anything you’ve heard before?”
One final note on “intensity.” Students do not pay attention when they’re bored. Sometimes this can’t be helped. As your mother used to say, “Only boring people get bored.” Some students would be bored watching live footage of an alien invasion on CNN. Don’t worry about it. Also, we should avoid thinking of education as entertainment, and we should avoid trying to be too “current” and “hip” in an effort to relate to what we think the students are into. Usually we’re wrong anyway, and then we come across as condescending ninnies.
However, we should not underestimate how tired and distracted and typically underwhelmed students can be. Sometimes this is their fault, and sometimes it’s not. I think students are desperate for reasons to pay attention. Be intense. Be interesting. Be funny, if possible, but be all of those things in service of learning. Many students just need to see that what you’re teaching is worth getting excited about.
Posted by Andrew Neuendorf on January 2, 2014
If the conclusions in Part 1 are correct, it does not really matter which teaching method instructors use. Lecture? Small-group? Discussion? Tutorials? Online? Some combination of these? It makes no difference. Student outcomes on final examinations will be the same.
Perhaps this conclusion is way too obvious. Why should something as superficial as the physical arrangement of the room and the shuffling around of its human components have any meaningful impact on learning?
If I’m being honest, small-group work has always struck me as a gimmick hashed-out during some administrative retreat, and whenever I employ it, I always feel like I’m appealing to some newfangled rule book in order to check an activity off of a list.
It’s pedagogical hokey-pokey.
But this is too harsh. And I’m ignoring my own conclusions from Part 1. It is not the teaching method, per se, that affects learning. If small-group work is a gimmick, then so is lecturing.
The reason that The Teaching-Learning Paradox concluded that teaching method has no impact on learning is simple: only learning has an impact on learning.
Cue the sound of a million minds being blown.
Here it is again, in case you missed it: Only learning has an impact on learning.
(I know it sounds like a poorly edited bumper sticker, but stay with me.)
We like to think of learning as social, as shared, as something that happens together in collective spaces and that can be facilitated by arrangements supporting a diversity of human interactions, and we tend to prefer intimate, close-knit combos in which we assume the exchange of ideas and reflections will be more fluid and will lead to comprehensive understandings that far exceed what one individual could think up alone sitting in a cold lecture hall listening to someone drone on about stagflation.
But there isn’t anything magical about these arrangements in and of themselves. And I suspect that they are just as likely to reinforce bad learning as they are to support it. I hate to use this cliché, but it seems appropriate: if the classroom is not centered on authentic learning, shifting into small group discussion is merely rearranging deck chairs on the Titanic.
So what is authentic learning? I don’t know, but I would like to suggest some conclusions reached by cognitive science, in particular the research that centers on how we learn. Ultimately, learning must take place in the brain, whether that brain is sitting in a lecture hall, in a small group, or in front of a computer screen. We have more knowledge now of how the brain learns, and instructors and students should take advantage of this knowledge to increase learning.
It seems that, because it was published in 1968, the conclusions of The Teaching-Learning Paradox could not take cognitive strategies into account. Furthermore, since their focus was on various teaching arrangements (and not on styles, strategies, and tactics*), they were unable to analyze the actual infrastructure of learning. This supports the criticism put forth by Ten Cate and others (see Part 1) that Dubin and Taveggia are not really measuring a dependent variable by comparing final exam scores. Also, what is the true effect on the student? Certainly, one key component of learning is the demonstration of content acquisition, but are the studies in The Teaching-Learning Paradox merely testing for how efficiently the knowledge provided by the instructor slid from his mouth and onto the final exam page? Seen in that light, every study that Dubin and Taveggia analyzed was a success. The knowledge did transfer, and all at the same rate.
But what about the students? Did they learn? Really, how would you expect to measure learning if you’re merely testing how the classroom is structured. You are not testing learning or learning strategies. You are, instead, testing physical arrangements. And if the instructor and the students are not employing strategies that facilitate learning, then what is the value of the study?
Fortunately, a lot of work has been done with authentic learning strategies based on cognitive research. And in order to test these strategies, you can look at studies that are not too dissimilar from the ones Dubin and Taveggia analyzed in The Teaching-Learning Paradox. Do classrooms that employ cognitive strategies result in greater learning on final examinations? (Later we’ll address the shortcomings of solely focusing on content knowledge and memorization, but, as many cognitive scientists note, memorization is a critical component of learning, though not sufficient in and of itself).
This new focus on learning strategies will be a true student-centered approach, since the goal will be to figure out how students actually learn** (how the brain actually works) and to design curricula, lectures, assignments, and study sessions with this knowledge in mind. And since, as Dubin and Taveggia point out, studying is a measurable difference-maker, teachers should find ways to facilitate student learning by teaching effective strategies, delivering content with the cognitive “tricks” in mind, and motivating students to take their learning into their own hands, hearts, and minds. (Is that a Girl Scout motto?)
What are these cognitive “tricks?”
Well, let’s take one. It doesn’t matter if you’re tutoring someone one-on-one, standing in front of a lecture hall filled with 500 students, or sitting in a circle with 15 students: if you’re not asking “Why?” your students aren’t learning.
Dunlosky et al. present ten learning techniques based on cognitive research in a 2013 article titled “Improving Students’ Learning with Effective Learning Techniques: Promising Directions From Cognitive and Educational Psychology.” Each technique is summarized and bolstered with supporting studies. The first, “elaborative interrogation,” is essentially a fancy way of asking students “Why?” and encouraging them to construct their own answers. You might present some content on (drawing from my own experience) the correlation between the increase in sophisticated literature during the so-called American Renaissance period and the rise of American printing companies. “Why do you suppose literature became more sophisticated as more printing companies arose?” Or, in a Mythology course, you could have students read creation myths from several different cultures and ask them “Why do so many of these cultures portray female goddesses as earthbound and male gods as living in the sky?” The idea is that learners must now actively construct new knowledge in response to the question, and in doing so, retrieve existing knowledge in order to fill in the gaps: “The prevailing theoretical account of elaborative-interrogation effects is that elaborative interrogation enhances learning by supporting the integration of new information with existing prior knowledge” leading to insights that are “self-generated rather than provided” (8). More learning is actually occurring in the learner’s brain as they struggle to develop an answer, and not just as part of a transfer from the instructor to the pages of the final exam. The authors cite one study where “Why” questions were integrated into a biology text, resulting in higher exam scores compared to a control group.
This is true learner-centered, active learning, not the kind that is staged by simply rearranging the classroom. The interesting thing about the above study is that the vast majority of their strategies are student studying strategies, some involving annotation of texts, pacing of study sessions, and mnemonic devices. The idea is to empower students to turn their own brains on to better learning strategies. We want lifelong learners who learn to do things for themselves. We want more of the learning to be happening in their own brains.
I will discuss more examples of cognitive strategies in future posts, but I’d like to close with a few words from education theorist Bill McKeachie, who discussed the “learning” approach to teaching in a 2008 interview in Teaching of Psychology:
We also found that when students thought more about the material, they were likely to become more intrinsically motivated and interested in the material for its own sake rather than just to pass the test. That shift in focus makes a big difference. If teachers are interested in helping students learn for the rest of their lives, then they should want their students to develop intrinsic motivation for learning and not just learn when they are told to learn because they are going to be tested on it.
A couple of things here: First, next time I’ll discuss the work of Daniel Willingham (who is also one of the authors of the study I cite above) who echoes McKeachie’s deceptively simple call for getting students to think about the material. It sounds obvious. However, if you want to make something stick in the brain, you first need to figure out how to make it sticky.
Second, McKeachie’s emphasis on intrinsic motivation could not be more important. And this is exactly what Dubin and Taveggia’s The Teaching-Learning Paradox cannot measure. In fact, I don’t think intrinsic motivation can be measured quantitatively. This is what prompted B.F. Skinner to disagree with cognitive research, since it could not be observed as behavior could. (McKeachie recounts their disagreement in the interview.)
Dubin and Taveggia could, however, (in limited samples) measure the effect of studying, and found it to be positive for test results. If we can help students become intrinsically motivated, get them to pay closer and deeper attention (to think about the material, in Willingham’s and McKeachie’s formulation, which means to take advantage of cognitive tricks that line up with how the brain actually learns), arm them with proven study skills, and structure class sessions with the cognitive research in mind, measurable improvements might be recorded. If students need to learn how to learn, then teachers need to teach them how to learn.
In short: if you want learning, you have to teach learning.
*In Part 1, I suggested a difference between method and style. I’m going to alter this. I like the word “arrangements” better to describe Dubin and Taveggia’s focus on comparing lectures, discussion, one-on-one sessions and other such organizational methods. For now, at least, “styles, strategies, and tactics” will be a stand-in for a discussion of approaches to authentic learning that can occur irrespective of classroom arrangements.
**Next time I’ll cover Daniel Willingham’s claim that learning styles, as they have been propagated, do not exist.
Posted by Andrew Neuendorf on December 19, 2013
A simple syllogism to begin:
1. All people are ideologues*.
2. Teachers are people.
3. You know what goes here.
If you reject the premise, you probably want to stop reading. You are the problem.
If you accept the premise, you also probably want to stop reading. Some unpleasantness flows from it.
There is a secret that snakes through the history of education research. In fact, it’s not even a snake. A snake could easily slip into the rushes and go unnoticed. What I’m writing about looks more like a roaring springtime river bloated with snowmelt. Don’t fall in.
In their 1968 study The Teaching-Learning Paradox: A Comparative Analysis of College Teaching Methods, Robert Dubin and Thomas C. Taveggia analyze 40 years of research comparing the effectiveness of a range of college teaching methods, including lecture, discussion, tutorials, independent study, small group work, and TV courses (1968’s equivalent of online education). Their book can be summarized in two words: “No Difference.”
Dubin and Taveggia pored over the data of nearly 100 studies that compare teaching methods by using final examinations as dependent variables (a potential weakness I’ll discuss later). What they found should be disturbing to any instructor who has ever flown the flag for a particular teaching method, for example, favoring small group work over lectures.**
They repeat their conclusions frequently throughout the 86-page study, anticipating, rightly, that no one would listen:
In the foregoing paragraphs we have reported the results of a reanalysis of the data from 91 comparative studies of college teaching technologies conducted between 1924 and 1965. These data demonstrate clearly and unequivocally that there is no measurable difference among truly distinctive methods of college instruction when evaluated by student performance on final examinations. (35)
Lecture? Lecture plus discussion? Small group work? One-on-one tutorials? Self-directed independent study?
No difference. Regardless of method, students will earn the same grade on the final examination.
Dubin and Taveggia also compared small classes with large classes, as well as so-called instructor-centered vs. student-centered classrooms (a bizarre, Orwellian construct if I’ve ever heard one. More on that in future installments.)
Just to repeat: When measuring the performance on college final examinations, lecturing is no worse or better than other methods (despite the lecture’s oh-so authoritarian overtones). In fact, it does not matter one whit which teaching method is employed.
Sure, 1968 is a long time ago, but The Teaching-Learning Paradox has been cited over 200 times since then, and there is widespread agreement on its conclusions. Medical educators seem particularly drawn to (and perhaps repulsed by) its conclusions. This is unsurprising given the importance of final exams in medical school, and the heavy content-knowledge required to become a medical professional (again, Dubin and Taveggia are measuring the kind of end-of-semester knowledge acquisition that many of us may find limiting).
Olle Ten Cate, a medical school professor and former president of the Netherlands Association for Medical Education, published an article in 2001 called “What Happens to the Student? The Neglected Variable in Educational Outcome Research” that is largely a response to the problem presented by Dubin and Taveggia. Ten Cate summarizes the problem (and the accompanying feeling of frustration). He also, however, begins searching for a way around the paradox:
Yet, is it conceivable that there really is no difference in the effects of such different treatments in education? How can we sustain the idea that systematically different educational approaches, not during one hour, not a day, or a week, but during four or six full years and thousands of hours of ‘experimental treatment’, will show hardly any measurable differential effect other than student opinion? (83)
He also points to the money that is being wasted on such studies, since it has been clear for decades that the overall conclusion is “No Difference.”
If we put so much money, time and energy in such huge curriculum experiments, some day the community might not remain satisfied with the consistent finding of ‘no difference’.
You could easily connect the conclusions of The Teaching-Learning Paradox to today’s hot teaching technology, online education. A 2009 meta-analysis of online education by the U.S. Department of Education showed no significant differences in the learning outcomes of three different teaching “mediums” (online, web-blended, and face-to-face). The study’s conclusions claim that blended students performed “modestly better,” but if you dig into the study a bit more, it stipulates that “the studies in this meta-analysis do not demonstrate that online learning is superior as a medium,” only that many of these courses required more from students and instructors, and that “It was the combination of elements in the treatment conditions (which was likely to have included additional learning time and materials as well as additional opportunities for collaboration) that produced the observed learning advantages.”
This lines up nicely with Dubin and Taveggia’s conclusions. If I can take some liberties here and subvert Marshall McLuhan, it’s the message, not the medium.
In fact, as Dubin and Taveggia note, there are only two factors that are consistent in all 91 studies they analyzed: students enrolled in a course, and each course featured a textbook. Lecture at them. Make them watch you on TV. Make them do the work on their own. Make them log into a website. Tutor them.
As long as they are enrolled in your course and reading a textbook…
You guessed it: No Difference!
HOW CAN I MAKE A DIFFERENCE?
Are you drowning yet? Rethinking your teaching method? Wondering whether or not this huge push for more interactive, student-centered learning environments (think ice-breakers, small group work, group projects, student agency, one-on-one sessions) has been a complete waste of time?
Well, yes. Yes it has.
That is, if your central goal is to deliver content. The evidence seems clear. At the end of the semester, students will know about the same amount of stuff regardless of teaching method***. Read The Teaching-Learning Paradox and then sit through some faculty training on how to engage students. If you’re not furious, you can’t do math. Dubin and Taveggia’s work is lucid and straightforward, and seems to be supported with each new study on teaching methods. If your goal is for your students to obtain content knowledge by the end of the semester, you should be in open revolt against anyone who suggests that one teaching method is superior to another. It simply isn’t true.
What if content delivery isn’t your ultimate goal? And what might Dubin and Taveggia be leaving out? In future posts, I will consider other studies and books that present the issue from a slightly different angle. For now, let me return to Ten Cate’s paper for some possible solutions. First, he provides a potentially depressing anecdote that (after some reflection) presents a way forward from The Teaching-Learning Morass:
Some call it the VanderBlij Effect, after the Dutch math professor who delivered remarkably clear lectures. However, students attending his lectures usually received lower grades at the test than those who had not attended his teaching. The latter were forced to study so hard to master the material that they really grasped it. But the effect we are discussing may affect students in both groups.
Oh my. Even skilled teachers are wasting their time? This story actually offers an important (and hopeful) truth: authentic student-centered environments (and student effort and study time) can have an impact. In fact, this was the only thing that Dubin and Taveggia found that did make a difference:
We found two studies in the literature which compared some form of study with no study and evaluated their respective outcomes on examinations covering ability to recall or prove knowledge of course content. These studies had a total of six comparisons between groups of students who studied and those who did not, all of which were independent comparisons. The results are significantly in favor of study. (26)
The grand irony of many so-called student-centered learning strategies is that they are just more instructor-centered strategies in disguise. It’s the soft authoritarianism of ceding control. Above, we find that if students actually take their learning into their own hands, it can make a difference. It seems to be the only thing that does. As one of my colleagues says, “I don’t teach no one nothing.”
The lesson of The Teaching-Learning Paradox is that if instructors apply their own methods (whether instructor-centered or student-centered) it will not make a difference. Hence Ten Cate’s question, “What Happens to the Student?” He claims that the studies Dubin and Taveggia analyzed (and almost all subsequent studies that support their conclusions) have three massive flaws. First, they confuse an independent variable for a dependent variable. That is, the results of a final examination are not really the result of the teaching method; they are an extension of it. This is why, potentially, all of the final exam results do not vary. Second, these studies are not truly blind, and can never be. If the students know they are being taught, they will act differently. Third, the effects on the student are not being measured. Is education simply about inputs and outputs? Is it merely about transferring knowledge? Shouldn’t we be looking for models that measure the effects on student behavior, which is, ultimately, the one factor that can make a difference, if we extrapolate from the above-mentioned studies on “studying” and from the true meaning of the VanderBlij Effect?
Maybe The Teaching-Learning Paradox does not present a paradox after all. It might simply be an infinite regress. When the twin mirrors of content delivery and final examination are made to face one another, you get a perfect, endless, pointless reflection.
END OF PART 1
*Perhaps I’m abusing this term. I simply mean that everyone operates from within a particular perspective or set of perspectives, and that we often, consciously or not, make judgments about the world based on assumptions that our perspectives are superior to others. I’m doing it right now. One purpose of this blog post is to point out that educators often charge forth into the classroom under the assumption that their methods of instruction (whether cutting edge or traditional) are the most effective ones available. Evidence to support such claims does not exist.
**The results of the study hold true for different mixtures of methods, such as combining lecture, discussion, and small group work.
*** Later, I hope to discuss the difference between “teaching method” and “teaching style.” I will also discuss some more recent cognitive research. It may well be that “style” is another “method,” and that style will also make “no difference.” I hope not.
Posted by Andrew Neuendorf on December 18, 2013
After reading David Byrne’s recent ode to Iowa (in which he recounts the state’s socialist utopian roots and observes that Iowa “may not be cool, but it might be beyond cool. Here among the winding creeks and fields of corn they may have arrived at some kind of secret satisfaction”), I went looking for more wisdom from the former Talking Heads musician and found his TED talk (which was, thankfully, decidedly un-TED. By that I mean it wasn’t a breathless, triumphant paean to the coming salvation of our digital overlords).
In fact, Byrne’s presentation is quite understated. He only makes one simple point: the evolution of music can be tied to the architecture of performance venues.
This sounds obvious, but it carries immense implications, essentially undermining the Romantic notion of creativity’s emergence from individual emotion and intuition. In other words, creativity isn’t the product of inner magic. It is shaped by the external environment. In fact, Byrne argues that the external form might precede creativity, or, as he writes elsewhere:
So, the order of the process is the reverse from what is often assumed: the consideration of the vessel comes first, and that which fills it comes afterwards. Most of the time we’re not even aware of this tailoring we do. Opportunity is often the mother of invention. The emotional story — “something to get off my chest” — still gets told, but its form is guided by contextual restrictions.
He is writing about music, but I instantly thought of online lectures, and of the classroom in general.
Today, I read an anti-MOOC article in Slate by Jonathan Rees which, among other complaints, trashes the lecture format that appears in some MOOCs:
But the most common way to assess learning in the MOOCs offered by the largest providers is a single multiple-choice question after approximately five-minute chunks of pre-taped lectures. If I had told my tenure committee that I taught history this way, I’d be in another line of work right now.
I know exactly what he’s talking about, but I think he’s missing the point. You simply can’t teach the same way online that you can in the classroom. Had he told his tenure committee that he recorded and uploaded a live 80-minute lecture and discussion session onto Blackboard, he would not have pleased them either. Online instructors who use the 5-minute-and-quiz format are not trying to dumb their product down (that might be the unintentional result). Instead, they’re trying to adapt to a new environment.
I write differently on a blog than I do with my pen and notebook. I teach differently in a lecture hall than in an oval-shaped seminar room. I have some classrooms that I’m still trying to figure out. Exactly how do I teach in here?
The online classroom is one of those. We are all struggling to adapt.
Perhaps I was enamored by Byrne’s star power or his recent praise of Iowa, but I was able to pay attention to his lecture, in part, because of environmental or technological factors (it helped that I was intrigued by his argument as well).
That is, he used pictures, which he changed frequently. Also, the camera angles changed often. I don’t have a crew to replicate the latter, but the former is quite simple to do in an online lecture.
The video below is my relatively recent attempt to make a lecture that suits the online format, minus the camera (I have a hard time making this look natural or finding the right setting. I wish, like Byrne’s TED talk, I could be recorded in a hall packed with people). Still, I use ten slides in less than ten minutes, not MTV fast, but enough perhaps to keep attention.
In this respect, I should probably follow more of a Pecha Kucha format, which is 20 slides at 20 seconds per slide. Here is author Dan Pink explaining and demonstrating:
Finally, here is a recent video I did using a webcam and a few pictures and text on PowerPoint. Someone told me it looked like I was talking in a closet. Again, I’m not sure how to make the video appealing without either having the camera pointed up my nostrils or projecting me in the background like a specter:
A regular classroom is just a regular classroom. No one is expecting Literature 101 at 8:00 in the morning to be a Hollywood production. But, once you create a video and upload it to YouTube you are, in a way, competing with the pros.
What is someone with no training in media and performance supposed to do? I guess become famous and let the TED folks film you.
I’ll get right on that.
Posted by Andrew Neuendorf on July 25, 2013
In Part 5 I tried to be a bit more straightforward in my definition of mythology. Being straightforward is kind of a drag, and if all you ever did was sit around creating, compiling, and arguing definitions, after four years I would deem you educated.
Here are three more definitions of “mythology” I use in my course, all written by prominent mythologists:
1) Joseph Campbell, from The Hero with a Thousand Faces:
Mythology is psychology misread as biography, history, and cosmology. Their function is to serve as a powerful picture language for the communication of traditional wisdom.
As I previously discussed with the notion of “myth-as-fugue,” here and here, myths definitely contain elements of biography, history, and cosmology, among other things. However, you wouldn’t want your understanding of Sumerian history to rely entirely (or even largely) on The Epic of Gilgamesh. History is referenced, but not accurately. Historical fact is, in part, the basis for some of what happens in the epic. It’s just transformed into literature and fantasy, a bit like those made-for-TV movies that used to be so popular. Though, I should note, the idea of understanding history as a collection of verifiable facts is a relatively recent concept. It’s not that those compiling the myths of Gilgamesh were bent on distorting history (although political agendas may have been driving them).
Instead, the recalling and recreating of myths in the present in order to continue the power and promise of the ancestors probably kept history alive in a way that blurred our linear notions of how events unfold. Accuracy in fact and reason did not hold the kind of sway that a direct experience of divine powers did. Once you begin to view the ancient and classical world through their primary values, you have to change your categories of understanding. Myths were not subjected to fact-checkers. The myths were plainly factual each time they were enacted and retold. They succeeded via their power to produce effects on the participants. Their truth was blindingly obvious, as obvious as the cycle of seasons.
It was, in fact, the movement of time in these cycles that held more sway than any notion of linear history. If the rites were performed and the fertility gods responded, with rain, with floods, with storms, with a good crop, then the facts were readily apparent. We don’t view causation or time this way today, nor did people run around discussing the psychological themes in myths as if they could somehow be teased out and isolated from the performance of the myth in its entirety.
What, then, does Campbell mean by equating mythology with psychology? If we grant that psychology confronts the psyche and perhaps the soul (as opposed to merely treating problematic symptoms, as a psychiatrist does), then Campbell is rightly claiming a role for mythology that is not occupied by other fields.
Mythology reminds us what it is to be human. It is, in Campbell’s words, a mirror that reflects aspects of our being we often forget or try to suppress. When we read mythology, we can be forced to ask questions about fate, the meaning of life, or deeply held beliefs and emotions. Mythology often recounts the human journey in ways that refuse dissection and classification. It returns us to those fundamental questions that are not answerable directly (which is why they are not scientific questions). What am I supposed to do? Who am I? Why does anything exist at all? What is the story of my life? Am I being called to transform my life?
In many ways, similar to other forms of literature, mythology induces reflection, an exploration of the interior spaces. It is perhaps the root of all literature, and therefore a more radical enabler of reflection.
2) William Irwin Thompson, from The Time Falling Bodies Take to Light:
Myth is the history of the soul.
Thompson sets the history of the soul in opposition to the history of the state, of war, of economics, and of technology, or, in other words, the usual markers of history. But what exactly is the history of the soul? It is best to simply refer to a larger context of this quote, which appears numerous times in Thompson’s masterpiece, The Time Falling Bodies Take to Light:
Myth at the level of understanding of the Age of Heroes is symbolic or figurative, but the world is still divided. Level IV is the unitive state of the great mystics; it is a state of being, analogous to music, in which myth is not simply a description, but a performance of the very reality it seeks to describe. Here history becomes the performance of myth, for the experience of recalling (anamnesis) enlightens the individual to see that myth is the history of the soul. The ego is locked into a narrow time frame (Plato’s cave), and so experiences from the other dimensions of the soul are recast into the forms and imagery of the ordinary world, but in the experience of illumination the ego realizes that the narratives that seem to be saying one thing are saying much more. (Page 6)
History is an illusion, or at least a narrow depiction of reality which filters out the pure, direct light of reality and presents a shadow play. Myth alone records the non-linear history of the soul, a history which is constantly denied or forgotten, or just extremely difficult to record. In fact, it has largely gone untold, passed along orally, transmitted in secret, available only to initiates. Myth captures some of this, but must be unlocked to be believed. Thompson’s emphasis on performance reminds us how much of our artistic knowledge is not directly explicable. You must see the painting, hear the music, read the poem. Talking about it or trying to use explanatory language around the edges of an artistic performance might provide insight, but it will always be a secondary, filtered experience.
3) Karen Armstrong, from A Short History of Myth:
We have imagination, a faculty that enables us to think of something that is not immediately present, and that, when we first conceive it, has no objective existence. The imagination is the faculty that produces religion and mythology. … But the imagination is also the faculty that has enabled scientists to bring new knowledge to light and to invent technology that has made us immeasurably more effective. … Like science and technology, mythology, as we shall see, is not about opting out of this world, but about enabling us to live more intensely within it.
This quote would have seemed silly perhaps fifteen years ago. No one took imagination seriously then. Something has changed, however, and creativity and imagination are no longer confined to kindergarten classrooms and New Age workshops. In fact, they’re probably in danger of being abused by corporate America and drained of meaning by one too many TED talks extolling their virtues. The early creators of myths were the first “out-of-the-box” thinkers, I suppose. Maybe the Australian aboriginals will start appearing on “Think Different” posters.
Anyway, I deeply appreciate Armstrong’s use of the term “imagination” as a kind of visionary capacity for creating culture and new perspectives for exploring the vital questions of our being. It also stands as a reminder that many of our key scientific advances began as dreams, hunches, intuitions, and flights of fancy. Perhaps mythological imagination is the creative ground out of which the arts and sciences arise.
Posted by Andrew Neuendorf on July 16, 2013
In Part 1, I attempted to explain the complexities of the term “myth” and the difficulties of defining it. Largely, this was a riff on the contemporary understanding of “myth” as “false,” as in, “It’s a myth that turkey makes you sleepy.” (Which is true, by which I mean that you get sleepy on Thanksgiving because you’ve eaten too much, have the day off work, and started drinking wine at 11:00 in the morning just to deal with your extended family, not because of the relatively minuscule levels of tryptophan* in the turkey.)
Instead, I’ll be more direct. Here is the working definition of mythology I use in my classes: Mythology is the study of stories exploring fundamental mysteries of existence, especially those pertaining to the following three questions: Where do we come from? What are we? Where are we going?
I borrowed these questions from Paul Gauguin’s painting of the same name, “Where Do We Come From? What are We? Where are We Going?”
I then use each question as a separate unit of study: Where Do We Come From? (creation myths) What are We? (mainly epic tales) Where are We Going? (This third unit can cover apocalyptic narratives and stories of the afterlife, but I also use it as an opportunity to discuss the potentially oxymoronic “Contemporary Mythology,” as well as narratives we use to imagine the future, especially futurism, science fiction, and technological utopianism, about which I’ve recorded a lecture and written an article.)
The majority of the texts we cover in my Mythology course fit comfortably into the standard canon (indeed, my central textbook, World Mythology: An Anthology of the Great Myths and Epics, is printed by McGraw Hill) but I like to define the term so that texts and ideas beyond just the ancient and classical world can be explored.
This leads me to three misnomers about Mythology, which correspond to the categories of Geography, Time Period, and Purpose. Some of this is addressed in my video “Contemporary Mythology.”
1) Geography. Incoming students often assume that mythology comes primarily from Greece, Rome, and wherever Norse is. This is not their fault. America’s educational heritage comes from Europe. American and European literature mainly references myths from these cultures. These myths have been translated more frequently. More texts have survived, and so on. When you look at them on a world map, however, they don’t even account for 1% of the world’s land mass. Myths can be found in every culture, on every continent (well, not sure about Antarctica), and from every religion. (In Part 7 I will discuss the difference between religion and mythology.) Why not explore mythology on a global scale? For centuries it was believed that Homer’s epics were the oldest on the planet, until The Epic of Gilgamesh, a Sumerian text which predates Homer by 1300 years, was discovered. The myths of the West are wonderful, but it’s a big world.
2) Time Period. Mythology is not something that simply stops with the later versions of King Arthur in the 15th century. It’s something we do each and every day. We will continue making myths because this is what humans do best. It makes perfect sense to spend most of the semester reading the canonical myths, from Gilgamesh to Arthur, with stops at every civilization along the way. However, it would be a mistake to assume that mythologizing (the verb form, which simply means to create somewhat exalted stories out of reality) was just something that pre-scientific people did when Wikipedia was not around to provide the answer. In fact, I would argue, we create myths every time we come home and answer the question, “How was your day?” or every time we return from vacation or fishing trips. Mythology is a living field of study. Mythological figures emerge from celebrity culture, sports, and politics on a daily basis. John David Ebert’s film criticism is a great example of this practice. Furthermore, the myths of the past are alive today in exciting ways. When you see someone gazing lovingly into his glowing screen of social media, you are witnessing the living Narcissus.
3) Purpose. I couldn’t tell you for sure why students sign up for Mythology courses. I do believe a good many have a genuine interest in the subject and find the myths they have heard to be compelling and mysterious and out-of-the-ordinary. Some want to have fun (as much fun as a college course can be, which is to say slightly above mowing the yard). Others may anticipate an easy grade. All of the above might be true, but I believe the purpose of mythology is to reconnect with the mysteries of life and to achieve a sense of wholeness. I can’t grade on such a standard, but it’s no accident that Joseph Campbell quickly found himself transitioning from English professor to something like a self-help workshop guru. This is not a path I want, but it does demonstrate the power of myth (to borrow the title of a wonderful book and interview series Campbell did with Bill Moyers).
Finally, I take great pains to emphasize one key portion of my definition, which is that myths explore mysteries; they do not explain them. Certainly if some portion of a myth takes an actual stab at explaining how giraffes developed long necks and concludes that a crocodile bit down on a giraffe’s head one day and stretched the poor creature out, then we should not deny the place of contemporary science to object.
I do not believe, however, that myths were merely an early attempt at science. Nor do I believe that science can do everything that myths can do. In fact, any good scientist will tell you there are certain questions that are not theirs to ask. Some of these are mythological questions.
*First, I should point out that WordPress wanted me to spell “tryptophan” as “Aristophanes,” which is hilarious to me and five other people. Second, dozens of foods you probably eat each week have more tryptophan in them than turkey. Do you say, “Man, this tryptophan is making me sleepy!” after eating a ham sandwich? No? Then be silent. I’m trying to watch the Lions game.
Posted by Andrew Neuendorf on July 15, 2013
We’ve reached the point in this discussion where we must attempt the impossible: to define “archetype.” (This is difficult enough, especially since I haven’t even properly defined “Mythology,” but have only danced around it in Part 1.)
But, before that, a quick digression. In Part 3, I introduced the vomiting god Bumba, whose barf gave birth to the earth and its creatures. He is, however, not the only mythological figure whose ralphs are heard ’round the world.
In Greek mythology, the titan Chronus swallows his first five children in an attempt to protect his grip on the throne. When the sixth child (a chap named Zeus) is switched with a rock, Chronus swallows the rock and subsequently yaks his other children, the first generation of gods, into the world.
The secretions do not stop there. According to Chapter 2 of Kathryn Valdivia’s online Mythology lectures, creation myths often depend on such bodily emissions as: “vomit, sweat, urination, defecation,” and so on.
What’s going on here? A true reminder that mythologies were formed during the childhood of humanity? Or some kind of Freudian obsession with bodily functions written into translations by repressed priests and shamans? Or are these pre-literate, pre-Christian groups just less uptight about perfectly natural phenomena?
Perhaps, but I think there is reason to believe that such references have a third layer of meaning beyond the literal interpretation (“a god is barfing”) and the figurative (“the god barfing represents how the world was created out of nothing, or possibly from a reconstituting of materials rejected by the gods”).
This opens up an entirely different discussion about the function of myth, but I promise I will circle back to discussing “archetypes” before the end of this post.
As I mentioned in Part 2, myths make less sense when plucked from their original context as communal ritual, usually performed as music and poetry, and ritualized for purposes both civic and spiritual. For example, Washington Matthews’ translations of The Navajo Night Chant can perhaps be read as something akin to contemporary poetry when found in the Norton Anthology of World Literature. However, reading it silently (as one would read a poem by John Ashbery or Mary Oliver) can feel a bit odd, especially given the seemingly excessive repetition:
In beauty may I walk.
All day long may I walk.
Through the returning seasons may I walk.
On the trail marked with pollen may I walk.
With grasshoppers about my feet may I walk.
With dew about my feet may I walk.
With beauty may I walk.
With beauty before me, may I walk.
With beauty behind me, may I walk.
With beauty above me, may I walk.
With beauty below me, may I walk.
With beauty all around me, may I walk.
In old age wandering on a trail of beauty, lively, may I walk.
In old age wandering on a trail of beauty, living again, may I walk.
It is finished in beauty.
It is finished in beauty.
This looks like a typical quiet contemporary poem. But, in practice, it sounds like this (from Voyager’s Sounds of the Earth recording):
The text stretches on for days with chanting and song as part of a healing ceremony intended to purify and transform the sick. The event lasts for nine days, including ten straight hours of dancing on the ninth day. Only when translated, written down, and divorced from its context does it become what we commonly call “literature.”
Though I have never been there, I am certain that sweating and vomiting occur, as they certainly do in various other religious ceremonies around the world, many of which include physical deprivation, dehydration, starvation, extreme temperatures, and consumption of what we call “drugs” in the contemporary Western world. Ayahuasca and peyote almost always involve vomiting, as would excessive amounts of wine in Dionysian ceremonies.
If mythological texts are fugue-like (see Part 3), and if one function is to describe the process of being ritually initiated, then perhaps the descriptions of bodily functions are simply (or not so simply) part of the ritual. And, of course, since these are secret groups who by definition must remain mysterious to outsiders, none of this can be said directly. It helps to remember that many myth-makers intentionally obscure their meanings by using esoteric language.
So it seems there is a logical explanation why so many myths feature bodily secretions. On one level, it could be mere data, a compilation of what goes on during shamanic rituals and cultic celebrations.
This would not, however, explain the repeated use of such imagery in stories of how the world was made. If a given myth is created over time and takes on layers of meaning in order to reflect the various functions of the myth (i.e. not just a “script” for the ritual, but also a culture’s cosmology) then perhaps certain physical acts came to be seen as microcosms of the divine order.
Vomiting as a physical necessity, but also as part of the customary ritual, yet also as a recreation of the origins of life, so that, in some sense, the participant is returning to the source, beginning again, healing in the deepest way imaginable. (It is telling that Bumba, discussed in Part 3, later walks from village to village in an attempt to cheer up his creations, repeating, “Let joy flood your hearts!” His imperfect creations caused him to vomit, but he is not about to let this ruin the world.)
Why then, across time and space, do so many cultures use images of bodily emissions to explain how the world was made? If we expand the category a little bit, we also find numerous creation myths depicting severed body parts used as the raw material for the creation of land, ocean, and sky. In “The Enuma Elish” (the Babylonian creation myth), for example, Marduk crushes Tiamat’s skull and breaks her body in two like a shellfish, forming from it the sky and the earth. This is a motif that shows up often.
Enter archetypes. When recurring patterns such as these emerge across time, there are three possible explanations: #1) meaningless coincidence/utter obviousness, #2) universal human psychology, or, #3) to put it one of a thousand different ways, divine plan. I think you could make the case that all three point to archetypes. Or, you could make the case that the first two provide a way to explain such recurrences without archetypes, and that the third is just a fantasy.
Either way, defining archetypes is not easy. In C.G. Jung’s writings, an archetype is a pattern that lies beyond the physical world. We can never know the archetype directly, because it is, ultimately, an unconscious idea and, as Jung famously said, “The Unconscious is always unconscious.” These unconscious ideas manifest themselves in the world, suggesting patterns over time. The archetype, then, is controlling or guiding our behaviors and actions mostly unbeknownst to us.
As an analogy, it is helpful to think of how most cultures viewed astrology just a few hundred years ago (and as many people still view it today). That is, when events happen in our lives, it is because they are being guided by mysterious forces “in the stars.” Our lives are the products of alignments and intersections, recurring patterns that are something like generic and abstract plans from which a variety of results can be derived. When, for example, Mars is in retrograde, it determines that certain qualities or possibilities will go into effect. The underlying ideas repeat themselves each time this happens, but the results are always different. You can see the patterns, but never know the ultimate idea behind it.
An archetypal symbol, then, is not the pure archetype itself, which can never be known, but an image that suggests the archetype is at work. It could be that no such underlying idea exists, but that something in us is drawn to repeat the activity or to notice the image. (This is the second explanation listed above.) But this intense response to such images would be enough to study them, and perhaps enough to posit some not-quite-so-cosmic archetype at work on our psychology, something akin to universal human meaning.
Take the snake. (No, go ahead, take it, I dare you.) While very few snakes show up in Inuit mythology, the use of snakes and serpents in mythology is widespread. Why have we chosen them to be featured more frequently than, let’s say, the worm? It could be explanation #1: Snakes are scary. Isn’t it obvious? Perhaps, but Carl Sagan wasn’t happy with that explanation. In his book The Dragons of Eden, he argued that our fear of snakes results from an earlier time in our history when larger lizards posed a threat to our survival. This led to the use of dragons in mythology.
Something like this, I believe, explains the prevalence of flood stories in mythology. We will discuss this at more length in addressing Gilgamesh in a future post. (Gilgamesh contains an account of Noah’s ark some 1500 years before it appears in the Book of Genesis.) The question is, why do so many myths feature floods? The obvious answer is: many of the great early civilizations were built near rivers which flooded. Duh. The less obvious answer is: floods were unpredictable events that surely seemed like divine intervention (perhaps explanation #2). The archetypal theory might posit that flood imagery reminds us of how the unconscious can well up and take over, as in the tidal wave of blood Carl Jung saw as he rode the train the year the first world war broke out in Europe, and the year his own unconscious visions began taking over his life and almost drowning him. Floods, then, are archetypal images of the sudden and frightening power of the unconscious. They are the Unconscious speaking to us.
Of course, such messages might not be coming from beyond. Freud would have instantly reduced such imagery to a primal place, the womb, which is the first flood we experience. This is, essentially, the reason Freud and Jung underwent a professional separation. To Freud, the unconscious mainly contains our primal urges, the Id, which are in constant battle with the Superego, a layer of morality we acquire from authority figures at an early age. Jung believed the Unconscious had another layer, a second basement, filled with universal archetypal imagery we all have access to. This he called the Collective Unconscious, and it's an important concept for the study of mythology, since Jung's work is the primary influence on Joseph Campbell, whose theory of recurring narrative patterns across mythology is archetypal to the core. We will discuss this later when we reach epic tales and Joseph Campbell's Hero's Journey.
Posted by Andrew Neuendorf on July 15, 2013
In Part 2, I ended by evoking William Irwin Thompson’s notion of “Myth-as-Fugue,” or the idea that ancient and classical myths served multiple purposes and contained a variety of discourses (political, spiritual, historical, etc.). Key to this concept is Thompson’s use of a musical term, “Fugue,” where competing voices cohere (not without tension and dissonance) into a single composition. Here is Bach’s Toccata and Fugue in D minor for organ (trust me, you’ve heard it) with the separate parts represented visually:
The result is a composition that should not make sense, but does. It makes sense in traditional fugues because of the pleasing contrapuntal effect of independent melody lines playing off each other. As long as one doesn't mind being pulled in multiple directions, but instead enjoys the dynamic tension that results, fugues can create a richer listening experience and, some would argue, a whole-brain workout that forces the listener to mentally juggle and synthesize multiple, disparate factors.
This is, incidentally, how good poetry works. Often, through linguistic and symbolic ambiguity, a well-wrought poem suggests layers of meaning, sometimes establishing sharply contradicting interpretations.
To cite one simple example, Robert Frost's "Mending Wall," which begins with the line "Something there is that doesn't love a wall," is a poem that gives voice to at least three positions: the highly quotable pro-wall traditionalism of "Good fences make good neighbors," the skeptical idealist narrator who declares that "before I built a wall I'd ask to know/ What I was walling in or walling out," and Frost's own voice, in the background, carefully painting the narrator as a snob who imagines his uneducated traditionalist neighbor as "an old-stone savage." We have, it seems, a poem about walls that sets in motion competing views on the subject matter, all the while traipsing along in deft, vernacular blank verse (as carefully structured as a stone wall), written by a poet who once declared writing free verse to be like "playing tennis without a net." One final note: it is the "frozen-ground-swell," or frost (get the pun?), that destroys the wall at the beginning of the poem.
Form? Freedom? Love? Suspicion? Equality? Hierarchy? Creation? Destruction? What is this poem about?
My answer is the same answer I give to all my students: read it out loud. That is what it is about.
This reinforces the need to read myths out loud, their connection to oral tradition, and the idea of myth as ritual, discussed in Part 2.
But let's not go in circles. The same quality that frustrates undergraduates about poetry informs ancient and classical myths: they are multidimensional, multi-directional, ambiguous, contradictory, symbolically rich and diffuse, and densely packed with all kinds of meanings and associations. "Why can't you say exactly what you mean?" the frustrated undergraduate demands of the dead poet. "Because," the dead poet replies, "in order to say exactly what I mean, I must say it inexactly."
This is mythopoetic language: the art of approaching the mystery mysteriously. One cannot tackle a water buffalo head on.
Language is a good medium for giving directions to the grocery store, for explaining evolution (but not quantum mechanics), and for filling a crowd up with enough pride so they will vote for you. It’s not so good, however, at explaining the essence of experiences that extend beyond its purview, or in questioning why language (or anything) exists in the first place. It gets tangled up in knots at this.
Let's take Bumba, for example, the creator-god found in the Boshongo and Bakuba traditions of Zaire, who vomits the sun, the earth, and humans into existence. It is an act of rejection and creation at once. He is both giving birth and trying to eradicate the discomfort of spent, harmful material. His stomach is both a womb and an underworld. Additionally, vomiting seems an apt metaphor for the scientific narrative of the Big Bang, in which matter violently emerges from a much smaller enclosed space.
Vomiting, then, is simultaneously a shortcut to understanding and a digression from it. You wouldn't want mythology to function otherwise. It wouldn't be mythology.
Mythopoetic (also "mythopoeic") language has been described as myth-making, and if one is going to make a myth, he or she must think poetically, speaking or writing in images dripping with meaning: images which seem to speak directly at first, though they soon begin doing abnormal things, associating with other images in leaps and fits; the sorts of images that at once suggest an ancient connection to truth made readily apparent, undeniable symbols from the unconscious, yet which quickly recede behind fog, or shape-shift, or break apart and shatter, reflecting something, anything, what?
In order to work, mythopoetic language must be both old and new, surprising us with what we already know. A prime medium, then, of mythopoetic language is the archetype, an image which negotiates the space between the timeless pattern and the ceaseless manifestation of the present. More on archetypes in Part 4.
Posted by Andrew Neuendorf on July 14, 2013