Organisms Organize

I like this paragraph from page 29 of Robert Kegan’s book In Over Our Heads: The Mental Demands of Modern Life:

By now it should be clear that when I refer to ‘mind’ or ‘mental’ or ‘knowing’ I am not referring to thinking processes alone. I am referring to the person’s meaning-constructive or meaning-organizational capacities. I am referring to the selective, interpretive, executive, construing capacities that psychologists have historically associated with the ‘ego’ or the ‘self.’ I look at people as the active organizers of their experience. ‘Organisms organize,’ the developmental psychologist William Perry once said; ‘and human organisms organize meaning.’ This kind of ‘knowing,’ this work of the mind, is not about ‘cognition’ alone, if what we mean by cognition is thinking divorced from feeling and social relating. It is about the organizing principle we bring to our thinking and our feelings and our relating to others and our relating to parts of ourselves.

A little earlier in the book Kegan claims that “how” we know is more important in terms of development than “what” we know. He gives an example of a 16-year-old boy named Matty who breaks his curfew. Matty knows that he’s expected to follow the rules. That is, he can memorize the rule and recite it back to you. He might even remember the rule and recite it silently (like there is a little angel cartoon on his shoulder) while he’s making his decision. Certainly Matty knows that his parents will be upset, disappointed, and worried. That’s pretty high-level cognition: the ability to conjure up the perspectives of others. It’s more than most 5-year-olds can manage. Therefore, we presume that Matty should know better. He’s smart enough to know, but not smart enough to do. This isn’t just a failure of action. It isn’t just a moral failing. It’s a failure of cognitive development, which, in the scheme of things, might actually be out of his control. Better yet, it might be over his head.

You see, Matty knows the right answer, and he knows what’s in the minds of others (namely, his parents), but his concept of self is too small to take in the wider context, a context which includes other people, society’s expectations, universal values…basically anything beyond the limited construction of his self. That construction displays a name tag reading “Matty” and a list of motivations that revolve around his wants, his needs, the avoidance of punishment, the seeking of pleasure, and the creation and reinforcement of identity features and actions consistent with his conception of the world, a conception which mostly revolves around Matty. He is Matty who has parents. He is also Matty who has parents who have their own thoughts. He can see that; he just can’t be that. He is Matty, family member, not Matty, family. Matty is not operating within a “trans-categorical” construction of self where the concreteness of his point of view is seen as one player in a wider circle of values and relationships. He is Matty. He can’t be Matty seeing Matty as Matty really exists, which is in coordination with an entire ecosystem of relationships. To be the real Matty, Matty must cease being just Matty. Right now, at 16, Matty is bumping up against the threat of this future.

Nietzsche vs. Socrates

If Nietzsche and Socrates had a boxing match, I guess we would have to dress Plato in Socrates’ trunks. (Of course, we assume that Socrates was in Plato’s trunks at some point. Plato and Xenophon, his students, both confirm that young men challenged Socrates’ celebrated powers of appetite suppression…not that there’s anything wrong with that.) But I digress.

Nietzsche is fighting for Earth, Plato for Heaven.

First, Socrates’ faith in the afterlife and trust in the divine voices that instructed him (recounted by Plato in the Apology of Socrates) place his ultimate concern with otherworldliness, a trait ridiculed by Aristophanes in his depiction of Socrates in “The Clouds.”

Second, the theory of Platonic forms creates a metaphysics by which eternal ideas form the ground of being, so to speak, rendering ordinary reality as an imperfect outgrowth, or a kind of fall, from the abstract ideal.

Third, Socrates’ description of the Republic (in the book of the same name by Plato) seems to prohibit the discovery of new knowledge or the advancement of science and the arts, thereby trading progress for stability, worldly discovery for immortality, and ultimately favoring ascetic stasis over desire and competition. The philosopher-kings were to live simply with little money, and the Republic itself was not to take profit from war. Socrates himself was an ascetic wizard, able to go without food longer than his fellow soldiers and to march on ice with bare feet. He was often late for dinner parties after being lost in thought. He was an awful lot like the Laputans in Book 3 of Gulliver’s Travels, who must be routinely whacked on their heads in order to wake up from their endless thinking.

Nietzsche picked a fight with Socrates/Plato. He detested Socrates and wrote a great deal about Ancient Greece. In Twilight of the Idols, he uses Socrates’ famed ugliness as part of an ad hominem attack, a logical fallacy Nietzsche specialized in. However, his primary argument against Socrates is that the Athenian was decadent, both in method and message. Nietzsche thinks the dialectic is baseless entertainment at best, reason-based tyranny at worst, and that, well, Socrates is a buffoon. This is what he actually says, at least in the esteemed Walter Kaufmann’s translation.

Nietzsche places ultimate concern in this-worldliness, in the human, all-too-human, to use his phrasing. The mission is not to escape to, or put hope in, some abstract realm, but to take delight in the endless transactions of the things of this world. Yes-saying, as opposed to No-saying.

There is, certainly, something of this Yes-saying throughout Nietzsche’s work, in particular with the theory of Eternal Recurrence, which, if true, forces us to embrace everything about this existence because a) it’s all there is, and, b) more importantly, every single facet of our lives will repeat itself infinitely. Better get used to it.

But aside from this theory, the eternally ill Nietzsche placed great emphasis on health, energy, strength, and passion, all delights to be enjoyed here, as part of a quest to perfect the human (all content later warped by his Nazi sister; Nietzsche was, in fact, a virulent anti-anti-Semite. He broke off his friendship with Wagner in part because of Wagner’s growing anti-Semitism and dumbed-down neo-Romantic nationalism).

In fact, the Ubermensch (or Overman) is, as Nietzsche describes it, “man finally overcoming man,” which is curious phrasing suggestive of a physical transformation, not a heavenly or angelic one. He writes in Thus Spoke Zarathustra that just as man laughs at monkeys, so too someday will the Overman laugh at man. These are purely evolutionary terms, though indicative of a transformative leap. This is, I would suggest, a this-worldly form of spirituality, not a valueless nihilism, but something of a prescription for what ails us.

Nietzsche would have nothing to do with mythological religion, but he is very much a religious writer, or at least a religious writer in the process of inventing his own religion, one left incomplete in The Will to Power when his ill health (likely caused by a syphilitic condition) drove him into catatonic madness.

I think Nietzsche is just one of many writers whose emphasis on what we could call “human potential” has them looking for solutions here in the functional world, not in some metaphysical abstract realm. Not on a cloudy mountain peak safely removed from the snares of the world, à la Han Shan.

The irony, in Nietzsche’s case, is that he lived a rather removed, ascetic lifestyle, but this was due to his ill health, which forced him to quit his university post, relocate to Switzerland, and live a disciplined, inactive life. His writings, on the contrary, preach a passionate embrace of this world. Socrates, of course, lived his teaching amongst people, in the open air, it seems, but his ideas emphasize another world.

Ray Kurzweil, Henry David Thoreau, and Obi-Wan Kenobi Walk into a Genius Bar… #change11

Ray Kurzweil, Henry David Thoreau, and Obi-Wan Kenobi walk into a Genius Bar.

Ray Kurzweil says, “Can you help me with my iPhone? Siri keeps refusing to run for president.”

Henry David Thoreau says, “I need to return this iPad. It is a pretty toy, not a tool. In fact, I am the tool. Each day I roll over and ask, ‘What’s the news?’ only to discover mere gossip. I have wasted my time reading screens. I must learn again to read people.”

Obi-Wan Kenobi says nothing, gently tips the screen of his iPad toward the resident genius, and kills him in a single flash. Then Obi-Wan Kenobi vanishes into his Netflix app, leaving nothing behind but a brown robe in a heap on the floor.

~

This is a koan, based on the Zen saying, “When you see the Buddha walking down the street, kill him!”

Bloom’s Taxonomy According to Seinfeld

Questions No One Knows the Answers To (From TED-Ed)

Keeping it Simple

I’ll only be asking my students three questions in my Mythology course this summer:

Where did we come from?

Where are we going?

Who are we?

Please Be Useless: A Lecture on the First Day of Class

Welcome to this completely useless class.

I’m glad you could join me in taking this opportunity to be completely useless for three hours a week, not to mention time outside of class spent studying the useless material I’m about to present.

But first, let’s play one of the most useless academic games imaginable: “Define! That! Ambiguous! Term!” You might remember this from such games as “What is Science?” “What is Philosophy?” and, my personal favorite: “What is Good Writing?” I thought today we could go a little “meta” with this useless exercise and play “What does useless mean?” Ah ha! Perfectly useless!

Since the first day of class is always empty on the syllabus, and must be filled the way a jelly doughnut is, we’d better waste some time by first defining “useful,” and then, via logical deduction, we’ll know what “useless” means. As you’ll learn, trying to define something directly is a little too useful.

So, what is useful? Let’s not strain ourselves. First, be perfectly literal. What are some useful things? Think of things made outside the United States. Tools! Wonderful! The usefulness of tools has inspired me to initiate many a useless tangent. Though I find little use in tools personally (I’ve managed to lead a cultivated life without getting my hands dirty), tool use is an unmistakable sign of intelligence.

I once watched a raven bend a wire hanger into the shape of a swan and then proceed to use it as a visual aid. Since I couldn’t be sure in which tongue he was speaking (the audience was mostly composed of doves) I projected my own content onto the cackling, engendering an inter-penetration of texts (poly-avifaunal and hyper-sub-literate Derridean discourse) that afforded the cacophony a momentary coherency, an illusory congealing less gelatinous than gaseous.

This is all to say that tools (like me and my language, the crow and his crook) create value through a process of creative destruction, the way a simple decision to eat forbidden fruit might make salvation simultaneously less and more likely. Simultaneously at once, no less!

What use then is being useful if the tool you select creates an extension for the natural gift that owns everything? That’s why I say it is better to be useless. If you want everything, you should do nothing.

It’s very much akin to Chuang Tzu’s parable of the useless tree. Sure, there is shade and aesthetic appreciation of nature and all that, but when it comes time to pay the bills, the tree won’t run and hide, though neither will it reach for a wooden nickel. And that is fine. That is right. That is good, beautiful, and true.

I am telling you to be useless, and everything will work out. Here I am also talking to America. That’s enough for today.

What is the Meaning of Life, Answered….

What is the meaning of life?

Here are 10 answers:

1.  What=is=the=meaning=of=life=?

2. The answer is X. “X” equals your father’s saying, the one you won’t go against, though you like having thought of it so well, good fences making good neighbors, and so on.

3. Point to it, flip to the back of a book, derive it, deduce it, seduce it from its hiding place, club it on the head, stuff it, frame it on the wall right next to all the other trophies.

4. Pick a fight with “of,” suggest, instead, “in.”

5. Adopt the suggestion of #4, then pick a fight with “is,” suggest “are” instead, add an “s” to meaning.

6. Life is full of meanings, but is, itself, meaningless. Its meaninglessness is its meaning, which will itself be understood in uncountable ways. What does meaninglessness mean? Is it really a meaning, a meaningless meaning?

7. Take issue with “life,” substitute “existence,” “being,” or “awareness.” Spend time contemplating the difference between these three words. The act of contemplation (not the results) is itself the answer.

8. A-ha!

9. What is the life of meaning?

10. We must make the distinction between the meaning of life and the meaning in life. Life itself, whatever that is, is something beyond meaning’s tool-clutching grasp, but something that includes meaning, that contains it, supports it, lets it arise in various forms and perspectives, contradictions, paradoxes, and oxymorons. Life can’t have meaning. The word “life” can have meaning, one and many more, but life itself cannot; it is not equal to language, it includes language. Language points to more language, and though language refers to things, it is ultimately something of a closed system that can only go so far. But you can invest life with all kinds of meaning, make meaning out of it, make it meaningful. Life is full of meaning, but is, itself, meaningless. Its meaninglessness, however, seen in one way, can be said to be meaningful. It is full of meanings but has no meaning.


Google is Not God: We Need a New Philosophy for Education and Technology #change11

Google is not God, but the omnipresent search engine commands an unworldly level of trust from its followers. Consider a 2007 study by researchers at Cornell University and the College of Charleston, aptly titled “In Google We Trust.” Students asked to research a topic on Google heavily favored search results with high rankings, regardless of quality or relevance. Even after researchers artificially scrambled the rankings, students still clicked on links that appeared higher up on the page. The students’ eye movements were also tracked, revealing a Sisyphean battle between sight and mind:

When looked at in combination, the behavioral data (clicked choices) and the ocular data indicate that while there might be some implicit awareness of the conflict between the displayed position and their own evaluation of the abstracts, it is either not enough, or not strong enough, to override the effects of displayed position.

The medium is clearly too powerful. Once Google has spoken, it feels unwise to question its authority.

But can we really blame the students? According to technology writer Clive Thompson, in a November 2011 Wired article, educational institutions have failed to teach students how to use search engines effectively:

If they’re naive at Googling, it’s because the ability to judge information is almost never taught in school. Under 2001’s No Child Left Behind Act, elementary and high schools focus on prepping their pupils for reading and math exams. And by the time kids get to college, professors assume they already have this skill.

He goes on to say that internet search engines present a “golden opportunity to train kids in critical thinking.” In order to sort through hundreds of search results, students must evaluate information, consider credibility, and ask crucial questions about context and meaning. According to Thompson, these kinds of critical thinking skills are being ignored in favor of preparation for standardized tests.

Thompson’s explanation makes sense, but the blame doesn’t rest solely with NCLB. We also have a philosophy problem, or, more pointedly, a philosophy shortage. Historian Arnold Toynbee observed that most people choose one of two ideological positions when presented with large-scale social upheaval: futurism or archaism. That is, we either trust wholly in the power of progress, envisioning an imminent utopia, or, we idealize the past and pine for simpler times. These polarities should look familiar. Everyone knows a Luddite who frowns upon technological advances, complains about social media and text messaging, and insists that literacy is on the decline. Likewise, we are overrun with technological utopians who envision a peaceful future enhanced by artificial intelligence and medical wizardry.

The same dualism applies to technology and the world of education. The internet is either a threat to traditional learning methods, or it is the great, democratic vehicle for universal education. Don’t believe me? Try bringing up Wikipedia at a faculty meeting. You’ll likely hear wholesale condemnation. However, students will tell you they live and die by the world’s largest, living encyclopedia. It’s not uncommon to hear Wikipedia described as the most complete expression of Enlightenment aims. Apparently, Wikipedia is either a scourge or a savior.

Toynbee believed that having to choose between archaism and futurism was a false dilemma, and that the true potential for transformation could be found not in some illusory past or future, but by embracing the complexity of the present, warts and all. In Understanding Media, Marshall McLuhan echoes Toynbee, and argues that only the artist is capable of rejecting the false dilemma of archaism and futurism and seeing reality as it is:

But to point back to the day of the horse or to look forward to the coming of antigravitational vehicles is not an adequate response to the challenge of the motorcar. Yet these two uniform ways of backward and forward looking are habitual ways of avoiding the discontinuities of present experience with their demand for sensitive inspection and appraisal. Only the dedicated artist seems to have the power for encountering the present actuality.

What would it look like if educators and students avoided the obvious positions of archaism and futurism, and instead embraced present realities? And why, if McLuhan is correct, does it take an artist to accomplish such a feat?

The true artist, McLuhan claimed, rightly understood the technique of suspended judgment, or the ability to watch the full range of possibilities emerge in the mind before acting. These possibilities include the negative and the positive; the moral, the immoral, and the amoral; and the depressing, the sublime, and the absurd. Then, once everything under the sun (and moon) is present and accounted for, the artist obtains a complete vision of the irreducible complexity of the present moment. Art reflects reality, but only after reality and its shadow are coaxed from their hiding place.

Let us not, then, be quick to judge Google as either angel or devil; nor should educators reach for their default positions on the use of the internet for academic work, at least not without first considering the range of challenges we face as the internet and electronic media continue to impact every facet of our lives. It does no good to minimize the damage, yet it serves us poorly to ignore the opportunities these media afford. A third position, neither Luddite nor Utopian, must be adopted.

Another way to say “suspended judgment” is “critical thinking,” a phrase so often found on syllabuses and course objectives that we’ve become immune to its powerful effects. We have, largely, forgotten to apply critical thinking to technology, and, worse yet, forgotten to apply critical thinking to our own views of technology. We are too dazzled, too dazed, or too defeated. It is time, however, and time long overdue, for all of us to wake up. The internet is here, that is clear, and we must get used to it.

First we must orient ourselves, and do so without reflexively reaching after archaism or futurism. This will require, I believe, a conscious effort to observe both the good and the bad, so that we might glimpse directly the impact of technology on education at the present moment, and figure out how to adjust and respond, and then, like McLuhan’s artist, we can begin to create the future.

“In Google We Trust” presents just such a test case. We know that many students, indeed many non-students, reach for search engines to answer their daily questions: “Who was U.S. president when World War I started?” “Who led the National League in batting average in 1984?” “When will Season 4 of True Blood be released on Netflix?” And so on. Between Google’s search engine, Google Maps, Google Earth, Google Books, YouTube (owned by Google), Google News, and Google Scholar, any pertinent piece of information will arise at the command of your fingertips. As conceptual poet and provocateur Kenneth Goldsmith has written, “If it doesn’t exist on the internet, it doesn’t exist.”

This unprecedented access to information is an amazing opportunity for students. However, as “In Google We Trust” demonstrates, an excess of information creates a problem. Students apparently do not know how to wield this new and promising power, and are prone to reaching out and grabbing the first factoid that floats into their sphere. Online searchers often take Google for granted as a benign, neutral source. Instead, we should borrow a term from literary studies and think of Google as an unreliable narrator, prone to harboring hidden agendas, contradictions, and potential brilliance deep beneath the surface. In other words, educators must put the “critical” back in critical thinking when teaching students to use the internet, and should encourage them to be aware of the limitations and biases of Google. Do students know, for example, how to distinguish between paid advertisements and legitimate results? Have they considered the differences between a “.com,” a “.org,” and a “.gov”? Are they aware that the settings on Google Scholar can be adjusted to pull search results from their school’s library database?

We’ll call this Google 101, which should be taken in conjunction with Wikipedia 101. I actually teach a version of this in my Freshman Composition courses. I start out by asking, “How many of you have been told by a teacher never to use Wikipedia for a paper?” All hands go up. Then I ask, “Keep your hands up if you’ve ever used Wikipedia for a paper.” All hands stay up. My informal survey is echoed by peer-reviewed research on the use of Wikipedia, published in the journal First Monday:

Over half of the survey respondents (52 percent) were frequent Wikipedia users — even if an instructor advised against it. Students reported that they frequently, if not always, consulted Wikipedia at some point during their course-related research.

Are these students being willfully disobedient? Do they simply lack the basic capacity for critical thinking? Are they lazy? Not really. Wikipedia generally provides helpful (if, at times, flawed) overviews of an always-expanding range of topics. A few different studies rank Wikipedia’s accuracy alongside “real” encyclopedias. (If you don’t believe me, read about Nature’s analysis of the reliability of Wikipedia, which is cited in Wikipedia’s entry “Reliability of Wikipedia.” How’s that for meta-discourse?) Students are going to use Wikipedia, no matter what teachers say. Banning Wikipedia is about as effective as abstinence-only education.

Our goal should be to teach students how to use Wikipedia critically, how to explore the source material, how to ask important questions, and how to eventually move beyond mere tertiary reference sources and on to legitimate primary and secondary sources. I don’t let my students cite any encyclopedias, dictionaries, or reference sources. These sources are generally too, well…general. (I’d much rather see, for example, Ernest Hemingway’s definition of “writing” than any boring dictionary’s.) However, I tell students that Wikipedia is a great place to start if they know nothing about the topic, or if they want to discover people, events, ideas, or terms for further, in-depth study. Wikipedia is not a cancer on knowledge; it is more like a collection of stem cells: unformed and filled with potential. The information must be developed beyond its current stage.

The idea is not only to get students to be critical and skeptical of such resources, but to understand their tremendous benefits. On Twitter, for example, a few clicks and 140 characters can connect you to a scholar living 5000 miles away, a person who might be able to recommend articles, provide instant feedback, or serve as an interview subject. You also might find yourself swept along with rumors and false tweets, or confused by the terse, indecipherable nature of a conversation filled with abbreviations, acronyms, “@’s” and hashtags. You can follow a revolution unfolding in real time, or take manufactured information from a dictator’s goon posing as a freedom fighter. Likewise, Google might recommend a paid advertisement for a pop psychology book above former American Psychological Association President and Positive Psychology scholar Martin Seligman’s website.

The message to students is clear: the internet is not looking out for you, and it’s certainly not going to do the thinking for you. Educators miss the chance to teach such valuable lessons when they restrict the use of the internet for research.