Monday, June 29, 2009

Heidegger and the American Dream

I've been struggling through Heidegger's Being and Time for six months now, with varying degrees of success. The book is nearly impenetrable in its early chapters, though the second half is almost readable. One of Heidegger's most appealing and resonant ideas is that there is no such thing as a break with one’s history; the fantasy of such a break rests on the notion that a person is only what he is at the present time—or even that one is what he chooses to be today. No, says Heidegger, a person is what he has always been, including things that happened to him before he had any kind of real agency, that is, his birth and his upbringing:
The “between” which relates to birth and death already lies in the Being of Dasein. On the other hand, it is by no means the case that Dasein “is” actual in a point of time, and that, apart from this, it is “surrounded” by the non-actuality of its birth and death. Understood existentially, birth is not and never is something past in the sense of something no longer present-at-hand; and death is just as far from having the kind of Being of something still outstanding, not yet present-at-hand but coming along. Factical Dasein exists as born; and, as born, it is already dying, in the sense of Being-towards-death. As long as Dasein factically exists, both the “ends” and their “between” are, and they are in the only way which is possible on the basis of Dasein’s Being as care. (374)
Such a notion flies directly in the face of the American dream, which is, after all, built on the idea that at any time a person can remake himself into anything he would like to be. We see this in the early history of America—or at least the modern popular conception of it—in which the colonies make a clean break with their mother country, forming something new and beautiful and pure. We see this in the conception of America as the “New Eden,” a completely new society with new rules and new life. We see this in the American idea of the “self-made man,” typified in Benjamin Franklin, who throws off the shackles of his upbringing in Boston to become a cosmopolitan Renaissance man.

(On another note, think of the famed American pastime, baseball. It's sometimes called a "game of redemption" because a player can strike out three times and still become a hero by hitting a home run to win the game. Baseball may be the ultimate pop-culture expression of the American idea of secular rebirth.)

Problem is, as everyone knows, that Franklin lies throughout his autobiography, fashioning himself as self-made when he was anything but. No one escapes his past—you are always what you were, even as you add to that at every moment. Indeed, Heidegger suggests that man is the sum total of his experiences—those that “happened,” those that “are happening,” and those that have “yet to happen.” Dasein does not so much exist in history as it is itself history: “In analyzing the historicality of Dasein we shall try to show that this entity is not ‘temporal’ because it ‘stands in history,’ but that, on the contrary, it exists historically and can so exist only because it is temporal in the very basis of its Being” (376). Just look at his stern face up there, as if he's wordlessly telling all us Americans that we're deluding ourselves.

I realized this long before I read Heidegger, when, at the age of 21 or so, I was preparing to leave the South in which I was raised, the South in which my entire family was raised. I had fashioned myself as cosmopolitan, as a person outside my region, outside my upbringing. Then I went to a family reunion and looked at the “hillbillies” around me. (Fun story: My last name is Farmer, and my maternal grandmother’s maiden name was Hicks, so my family literally started where the Farmers met the Hicks.) I realized that no matter how hard I tried, I would never free myself of this background. I am to some extent my family, which means, I suppose, that I am ontologically (on some level) a hillbilly.

This was to some extent a freeing realization. It freed me from the burden of trying to escape this past—an impossible task, as Heidegger points out. And when I moved to Nebraska, I began to realize in a practical way exactly how Southern I am. But I shouldn’t over-simplify, since after all I am to some extent a Nebraskan as well, after spending three years or so as one. And my own children will have all of this in their Dasein—they’ll be a Farmer and a Hicks and a Georgian and an Alabaman and a Nebraskan; they’ll be from the suburbs of Atlanta, like me, and the small towns of South Georgia, like my wife. And they’ll be whatever we make them, wherever they are born, and whatever they make themselves.

I think all of this jibes completely with the Christian notion of original sin, which after all posits that people are their ancestors, that the sins of the fathers will be passed on to their sons. It jibes less well with the Christian notion of the New Being, of Christ’s deliverance of man into a second creation, a second and fuller humanity, one in which the sins of the past no longer hold sway over us. But it’s important to note that we have not quite received that New Being—as Paul Tillich says, we glimpse it only Now and Then, and as St. Paul says, we have received only the first-fruits of the New Creation. So we’re still grounded in time, at least in this life.

What the American dream attempts, in my opinion, is a secular version of the New Creation without waiting until after death. The country reinvents itself apart from England (and apart, depending on whom you ask, from any redemption through God, even as the Founding Fathers couch their ideas in religious language). Heidegger is right to reject this notion, even though Christians have to reject his assertion that there’s no way out of historicality. It just requires something outside the circuit, and even then the circuit will not be broken in this Zeit.

Friday, June 19, 2009

Friday Links

Wednesday, June 17, 2009

Activists: Questions for Liberals and Conservatives

A few months ago, I took offense at the implication in Aristotle’s Nicomachean Ethics that the words Law and Justice are essentially coterminous. I’m halfway through the Politics now, and he’s addressed this issue, clarifying it to some extent, but it’s left me with a whole new set of questions. Here’s what he says:
Suppose we say the people is the supreme authority, then if they use their numerical superiority to make a distribution of the property of the rich, is not that unjust? It has been done by a valid decision of the sovereign power, yet what can we call it save the very height of injustice?
So the laws of the State must be subject to some higher authority, although Aristotle is a little fuzzy on what that authority is and how we should access it. But the important thing is that he makes it clear that the law is not always just, and III.10 ends with an outright declaration of the distinction:
It might be objected too that it is a bad thing for any human being, subject to all possible disorders and affections of the human mind, to be the sovereign authority, which ought to be reserved for the law itself. But that will not make any difference to the cases we have been discussing; the law itself may have a bias towards oligarchy or democracy, so that exactly the same results will ensue.
He doesn’t say it in these exact words, but obviously his point is that the law is a man-made institution rather than something handed down verbatim from an outside authority, and therefore any objections one makes about human rule must also be made about the law itself.

So who has sway? The rulers or the law? Predictably, Aristotle suggests a balance. On the one hand, “the laws enunciate only general principles and cannot therefore give day-to-day instructions on matters as they arise” (III.15); however, “On the other hand, rulers cannot do without a general principle to guide them; it provides something which, being without personal feelings, is better than that which by its nature does feel” (III.15). Aristotle privileges the law slightly, as he recommends that rulers “only depart from the provisions of the law in cases which the law itself cannot be made to cover” (III.15).

But then he throws a wrench into the gears. As I discussed in an earlier post, for Aristotle, the human soul is divided into two sections: the intellect (strong) and the will/passions (weak). So when he equates the Law with the intellect and human rulers with the passions, we’re back to the drawing board:
he who asks Law to rule is asking God and Intelligence and no others to rule; while he who asks for the rule of a human being is bringing in a wild beast; for human passions are like a wild beast and strong feelings lead astray rulers and the very best of men. In law you have the intellect without the passions.
So we’re back to Law and Justice being coterminous, and the question still stands: Who makes the Law, if it’s not handed down to Aristotle from Mount Sinai?

My question for my readers: How does this schema relate to so-called “activist judges” on the Supreme Court? Should the law—the general principle, in other words—change with the times, or only when a specific situation demands a complete overhaul? The political right seems to see the law as sacrosanct, the left as a general principle. I think Aristotle falls into the former camp, but thankfully we don’t have to accept his word as—no pun intended—law.

So if you’re a conservative, I want to know why the Constitution is a sort of quasi-divine document that Supreme Court Justices can’t change without suffering the slings and arrows of “activism” (and more recently, “empathy”). If the people who wrote it were human beings, why can’t the Constitution be wrong?

And if you’re a liberal, I want to know how much we can change the Constitution before we stop having a guiding principle for our government. If the Law is supposed to hold true in general cases, how much can we change that Law without having nothing on which to rest our society?

I have no answers, myself.

Monday, June 15, 2009

Does Democracy Rot Your Soul?

When I was a secretary for an academic department a few years ago, the professors were a little surprised to learn that I—apparently unlike many other secretaries they’d had—was perfectly willing to bend my will to theirs. My anti-authoritarian streak began early, it’s true, but by the end of my undergraduate years, it had to a large extent died out. This happens, I suppose, as a person grows up. Or at least it’s supposed to.

“Some people,” said one of the professors, “take ‘All men are created equal’ to mean that we’re all on the same level in a practical sense.”

“Well, of course not,” I replied. “We’re all created equal in terms of ontological value, but obviously your job is higher than mine and I shouldn’t pretend we’re on the same social level.” The problem, as I saw it then and as I continue to see it, is that somehow we’ve equated who we are as people with what we do—deep down, I think Americans believe that if you have a better job you’re worth more as a human being. That, for obvious reasons, leads to the attitude “You can’t tell me what to do.”

One thing that surprised me when I first started reading Plato was that he’s no friend of democracy. The ancient Greeks, after all, invented the form. I learned this fact in middle school, and I can’t remember if my teacher bothered to tell us that the most notable of them hated it with the depth of their beings.

In The Republic, Plato places it at the very bottom of his analysis of political systems, warning that if people aren’t careful, their societies will slide into it. “Democracy,” he says, “originates when the poor win, kill or exile their opponents, and give the rest equal civil rights and opportunities of office, appointment to office being as a rule by lot” (557a). This is a nightmare for him because poverty makes a person incapable of choosing correctly. Democracy becomes an exercise in smoke and mirrors, and if people like it, they like it in the way that “women and children [like] gaily coloured things” (557c).

His language here makes it easy for us to dismiss him. The poor are human beings like everyone else, and they’re not in all cases or completely responsible for their poverty. Marx has shown us how the system creates its own castes and how the dream of social mobility is to a large extent a lottery rather than a meritocracy. And the sexism of the ancient Greeks (way worse than the sexism either of the Hebrew Bible or of the New Testament) just doesn’t work in a modern world. But we should still take his remarks on democracy seriously, especially when he talks about its effects on the soul.

Before we look at those remarks, though, I should say a word about Greek opinions on the nature of the soul. The common opinion, best I can glean from reading Plato and Aristotle, is that the soul is composed of two elements, the reason and the will. The reason sits in natural authority over the will—the rational over the emotional. Aristotle uses this schema to justify any number of things, including the rule of men (rational) over women (emotional), but we need not accept that implication to recognize the wisdom of a natural hierarchy of character traits.

So under a political system of democracy, the individual character begins to become democratic, which is to say that the emotional elements of the soul begin to rebel against the rational elements:
For the rest of his life he spends as much money, time and trouble on the unnecessary desires as on the necessary. If he’s lucky and doesn’t get carried to extremes, the tumult will subside as he gets older, some of the exiles will be received back, and the invaders won’t have it all their own way. He’ll establish a kind of equality of pleasures, and will give the pleasure of the moments its turn of complete control till it is satisfied, and then move on to another, so that none is underprivileged and all have their fair share of encouragement. (561b)
It’s hard to ignore this warning given the current economic crisis, which was built on people at all levels of society allowing their pleasures, their unnecessary desires, to rule over their rational minds. Why should a CEO who’s completely trashed his company receive bonuses of millions of dollars? On the other end of society, why should someone who makes $6 an hour—or worse, who lives off of Welfare—own two or three televisions and a satellite dish?

The answer is the same: We’re to a large part controlled by our pleasures. Poor Jimmy Carter tried to tell the country this in the late 1970s and was crucified for it: Living within your means requires sacrifice; it requires allowing the rational element of your soul to rule over the emotional element. Unfortunately for Carter, democratic people naturally rebel against this instruction.

Additionally, we begin to believe that no one can rule over us. “Every man is a king,” claimed Huey Long, the populist Louisiana senator, and Lord knows this is what Americans believe at the core of their beings. If it’s true, though, we need to start worrying, lest we end up with one of the two kings given in Aristophanes’ The Knights: the vicious, wicked Cleon, and Agoracritus, the idiot sausage-seller. (It’s true that in Aristophanes’ play, Agoracritus ends up being a pretty good ruler. I think most of us will agree it wouldn’t happen that way in real life.)

Besides, even if it were a good thing for every man to be king, it’s simply not plausible. Long devised the slogan and its accompanying song not to elevate every man into his own ruler but to become their ruler. Americans are told from childhood that anyone can become president—we’ve heard this a million times over—but it’s simply not true. By the time you finish college (and possibly even by the time you enter it), you know if you have a shot at being president—and the overwhelming majority of people don’t. And that’s a good thing.

But there’s a tendency to hold onto this myth, to believe that it’s some stroke of luck and not your native ability that keeps you from ruling the world. And that’s where the anti-authoritarianism comes in. That’s where democracy begins to rot your soul—there’s no reason to listen to the people in charge, since we’re all capable of doing it. You may as well secede.

I’m just kicking this idea around, so I’ll present a few caveats before people start to think I’m advocating some kind of dictatorship or suggesting that there’s never a time to overthrow oppression:

(a) I’m a Protestant, which means my Catholic and Eastern Orthodox friends have every right to point out my hypocrisy here. Protestantism is built upon the notion that we can all be our own priests, and in some of its forms, it completely negates Church hierarchy. Presbyterianism happens not to be one of those forms, but it does suggest a sort of hermeneutic self-rule: Anyone can read the Bible and come up with the correct meaning, and we’re not dependent upon the Church to tell us if we’re right or wrong. The Orthodox are fond of saying that the Protestant Reformation removed the Pope and created billions of little popes.

(b) Plato is unable to come up with a political system that’s more appealing or realistic than the ones he condemns. He essentially advocates the society in Brave New World, with a small group of Guardians ruling over everyone else. No one owns anything, not even his family; women and children are shared equally by all. He somehow believes that the Guardians won’t take advantage of their situation and grow rich off of the fat of the land.

Aristotle rightly condemns aspects of Plato’s republic and advocates instead his typical Golden Mean—not too much tyranny, not too much democracy. I am not far enough into the Politics to evaluate this system.

(c) There’s no doubt that there are leaders, democratically elected and otherwise, who are just plain bad, who drive their societies into the ground and who oppress their people. I have little problem with Dietrich Bonhoeffer’s plot to kill Hitler, for example. (Of course, Hitler was democratically elected and for the most part did what he did with the approval of the masses.) So I have not yet worked out what we should do with tyrants, that is, when submission should stop and self-rule should begin.

(d) Finally, I recognize that I’m as much a product of democratic thought as the rest of us and that our political system is not going to go away any time soon. Further, I don’t particularly think that’s a bad thing. I wouldn’t like going back to a monarchy or an oligarchy or (God forbid) Plato’s republic.

What I’m interested in, I guess, is finding a way to have the political advantages of democracy without its corrosive effects on the individual. What do you guys think? Does political democracy necessarily lead to the democracy of the soul and to the selfish and lazy citizens that believe it’s all owed to them? Is there any way out?

Friday, June 12, 2009

Friday Links

Wednesday, June 10, 2009

Deep in the Big Black Heart of the Sunshine State, Pt. 2

In my last post, I discussed the deep, fundamental anxiety of the early Disney movies—and how that anxiety has largely disappeared since the Second World War. I didn’t bother making a hypothesis as to why that was the case, but I suspect it had something to do with the cheery attitude toward American destiny in the 1950s. (Why things didn’t change back in the 1970s, I have no idea.)

I claimed before that the reason Pixar movies are so artistically successful is that they recapture the spirit of anxiety that Disney largely left behind after Bambi. Now, I suppose, it’s time for me to defend that claim. Spoilers follow, including ones for Up. Consider yourself warned.

I’ll confess it’s been too long since I’ve seen the two Toy Story films and A Bug’s Life for me to talk about them, but I’ll say that (a) if internet rumors are any indication, next year’s Toy Story 3 will feature a gaping hole at its center, as Andy goes off to college and Woody, Buzz, et al. find themselves alone and unwanted; and (b) the Animal Kingdom/Disney’s California Adventure 3-D movie It’s Tough to Be a Bug certainly poses a threat to its audience, especially to children, whose screams of terror have made it hard to hear the show every time I’ve ever seen it.

So instead, I’ll start with Monsters, Inc., which taps into a very specific but universal childhood fear: the monster in the closet. Never mind that most of these monsters turn out to be essentially good people—the operative point is that there’s a deep-seated need in Monstropolis for children to be afraid. If anxiety is defined (as it is by Kierkegaard, Heidegger, and others) as fear without an object, that’s certainly what we’re dealing with in the real world outside the movie. Children are afraid of monsters, which deep down they know do not exist—therefore, they are afraid of nothing, of an empty space in their closet. Monsters, Inc. plays off of this fear, exploits it before finally putting it (no pun intended) to bed.

That Monstropolis eventually moves beyond its need for children’s screams of fear in favor of their screams of laughter makes no difference; the movie is very clear that there are monsters (we meet two of them and must assume there are more) who scare for the sheer pleasure of it—monsters who would never listen to reason, who are out to get us for the sheer evil of it.

Finding Nemo, on the other hand, begins with a reference to and amplification of the central terror in Bambi. Here Marlin’s wife dies a terrible death just as they’re planning their life together, and the barracuda who eats her also goes ahead and takes out all but one of her eggs. Marlin—understandably, although the film doesn’t seem to acknowledge that!—becomes a picture of anxiety, protecting his disabled son (a nod to Dumbo, though Nemo doesn’t get the brutal mocking that his elephantine counterpart does) from the world that took his wife with little to no warning.

Marlin is right—it’s a big, cruel world out there, one that does not particularly care about you, one that’s happy to eat you alive, and though the movie takes a few steps back from the fullness of his anxiety, it largely still paints a picture of a world where something terrible is going to happen to everyone. Marlin and Nemo thus become pictures of what Paul Tillich calls courage, acting in the face of their own anxiety.

The Incredibles has, I believe, the honor of being the first Pixar feature to be rated PG. The MPAA says it’s for “cartoon violence,” and yes, it deals relatively openly with death, with Syndrome murdering every superhero he can get his hands on—and worse, the blame gets planted squarely on Mr. Incredible’s broad shoulders, since all this death is the product of his coldness decades before.

But the deepest anxiety in the film comes from the suburban ennui the superheroes experience when they attempt to reintegrate into society. It reminds me of the American existentialist novels of the 1950s and 1960s—Mr. Incredible becomes Rabbit Angstrom. Having experienced greatness on the basketball court or in the world of crimefighting, our heroes can’t lower themselves to the “normal” world. It’s no wonder Mr. Incredible steps out, and it’s telling that Elastigirl thinks he’s having an affair.

In this, then, The Incredibles may be the darkest and most anxious of the Disney canon because its anxiety exists in our world. We’re all afraid of losing our parents, à la Bambi or Finding Nemo—but that loss is inevitable. What’s scarier is the notion that we are not special, that we’re going to go through the world in a cubicle, our souls buried beneath TPS reports and fluorescent lights. This is a fear that can hardly be named, the essence of anxiety.

The next feature, Cars, takes that nightmare and expands it to cover an entire town. Radiator Springs, too, was once an exceptional town, an adorable little tourist trap, but when Route 66 falls into ruin, so does the town and its people. Anxiety sets in—how can everyone drive right past our lives? How can we be this insignificant?

Last year’s Wall*E, in my opinion the greatest of the Pixar films thus far, is about the dizziness that ensues from the combination of freedom and responsibility, à la Jean-Paul Sartre. The human race has exercised its freedom in a predictably ugly way, completely destroying the planet and then avoiding its responsibility by vacating it for an extended cruise-ship life of overeating and sedentariness.

Our hero is a model of responsibility, a robot left to clean up the entire mess who accidentally learns agency but still does not abandon his responsibility. When he manages to teach that responsibility to the humans on board the cruise ship, deep pain ensues—they’re forced to work, to think, to connect in ways that are difficult and hurtful for them. The movie is in many ways about the end of anxiety, but the characters have to go through the swamp to get to the dry land.

Finally, we come to Up. The film lightens up considerably after the first half-hour or so, but the first act may be the darkest thing ever released in a mainstream cartoon. We meet Carl and Ellie Fredricksen when they are only children and are treated to a beautiful—and then brutal—wordless tour of their lives together. They’re the happiest couple you can imagine until Ellie gets pregnant and has a miscarriage. (In a movie that children all over the country are flocking to in droves!)

At this point, they come up with the idea of having an adventure in Paradise Falls, South America. Life intervenes, and it never happens. To some extent we have here the image of suburban ennui in The Incredibles, but it’s never implied that Carl and Ellie stand above or outside their society. They’re just normal people who love each other and whose dreams have been crushed by the contingencies of life.

Finally, Carl buys two tickets to Paradise Falls, and as he’s about to give them to his wife, she collapses. Then she dies. All of this is in the first eight minutes of the movie, and yet we feel that we know this couple, and her death is earth-shattering, horrible, and ugly. Carl’s life turns gray and bleak, and he decides to sail his house on 10,000 multicolored balloons to Paradise Falls—presumably to die. The film is built on a death wish built out of deep loneliness.

Now—all this darkness serves to make the brightness at the end of each of these films brighter, in a way that a movie without it—let’s say Brother Bear, my favorite whipping boy in the Disney stable—can never be. My point is not that Pixar creates dark films that will disturb children. As I said last time, I wasn’t disturbed by the similarly dark early Disney films as a child. Rather, this anxiety is something they’re doing right, and I suspect that as long as they keep it up, we will be able to merge our lives with the characters’, and Pixar will stay on top.

Monday, June 8, 2009

Deep in the Big Black Heart of the Sunshine State

I saw Disney/Pixar’s latest, Up, this weekend, and I continue to be impressed with the depth of these guys’ imagination. I can’t imagine how anyone came up with this storyline—an old man loses his wife, attaches balloons to his house, flies to South America, and somehow manages to fight off a pack of angry, trained, electronically talking dogs—but I’m glad they did. It’s a beautiful and moving film, as every Pixar film is, and if it’s not quite as good as Wall*E or Monsters, Inc. or Finding Nemo, it’s a worthy addition to their catalogue.

It’s no secret that Disney needed to be saved, at least in terms of animation. (Disney theme parks are also in need of some touching-up, especially, it seems, Disney’s California Adventure, but that’s another story, and I hate to badmouth Walt Disney World.) After 2004’s commercial bomb, Home on the Range—and the three commercial bombs that preceded it—Michael Eisner noted the success of the Pixar films and made the baffling decision to shut down the 2-D animation program at the Mouse.

Problem is, the reason no one likes, say, Brother Bear is not that it’s hand-drawn—the animation is absolutely gorgeous in that film, some of the best hand-drawn work ever done—but that no one cares about the story or the characters. Disney ended up with what we might call “Michael Bay syndrome”—the technology doesn’t matter if your story’s a dog, and as The Island’s financial returns demonstrated, you can’t fool your audience forever. That’s why I can’t remember the name, much less the personality, of a single character in Brother Bear (I’ve not seen Home on the Range), whereas I can sing “Pink Elephants on Parade” from Dumbo or even “How Do You Do?” from Song of the South, a movie with its share of problems but with very strong characters.

So you have to care about the pixels on the screen; you in fact have to forget they’re pixels, have to allow yourself to fall in love with Princess Aurora or to want to be best friends with Aladdin. (You can insert your own examples there.) You have to have something to go home with. That’s why when Disney shifted to 3-D animation, it didn’t fix the problem. Chicken Little and Meet the Robinsons are forgettable, just as forgettable as Brother Bear and Atlantis, because they lack story, not because their animation is or is not cutting-edge.

I’ve blogged before about how much I love John Lasseter, how his position as head of the animation department and his decision to (thank you, God and John) reinstate 2-D animation are going to save Disney as an artistic entity. We’ve already seen signs of it. Last year’s Bolt was not a classic, exactly, but the characters were real in a way that no Disney character has been for nearly a decade; this year’s (2-D!) The Princess and the Frog looks even better.

So here’s what Pixar gets that Disney hasn’t understood in a long time. Here, in other words, is what makes Pixar in 2009 closer to Disney in 1941 than Disney in 2009 or even 1992. All of the early Disney features—for our purposes, let’s define “early” as prewar, which would allow us to work with Snow White and the Seven Dwarfs, Pinocchio, Fantasia, Dumbo, and Bambi—are shiny and beautifully drawn, but all of their prettiness only serves to hide the deep, existential dread at their cores.

I didn’t realize this as a child. I don’t remember being frightened or upset by any of the Disney movies I watched—and I watched nearly all of them. But my re-watching these films as an adult makes me weep and fear for my life. Take Snow White. We all know that the wicked queen attempts to have Snow White killed and, when that fails, she slips her a poison apple. What I didn’t remember is that this apple was never meant to kill her. We’re given the real plan by the queen herself. She turns to the camera as she’s brewing up the poison and says “She’ll be buried alive.” She laughs, and repeats herself, then laughs again. It’s the viewer that’s threatened here—threatened directly in fact.

Pinocchio repeats the trick and makes it even more disturbing by drenching the entire movie in pathos. Geppetto, as we know, is a clockmaker who desperately wants a son. But in the Disney movie, he’s overwhelmingly sad—a man who’s so lonely that he calls his goldfish his “little water baby.” (I can’t type that without my heart metaphorically collapsing in on itself.) This is a deep, ontological loneliness, one that you don’t expect to find in a children’s film.

When Pinocchio is “born,” Geppetto loves him instantly and unconditionally, even insisting that he is a “good boy,” a label based on nothing that corresponds to reality. Pinocchio abandons his father twice, showing no remorse that I can see, and nearly costs Geppetto, his “little water baby,” and his cat, the adorable Figaro, their lives in the belly of a whale. If Pinocchio is our hero, if he is an everyman of some sort—and I figure he must be, with his name in the title—we’re indicted here. It’s a Calvinist vision of a certain sort.

That part is sad. It gets scary and disturbing once we see what happens to Pinocchio in show business. Stromboli, whom Christopher Finch calls the most evil of all Disney villains, locks his cash cow in a cage and says—again, to the camera—that when the puppet is no longer profitable, he will simply chop it into firewood. When Pinocchio escapes, of course, he’s again recruited by “Honest John” and sent to Pleasure Island. The proprietor of the island laughs to John that “The boys never come back…as boys.” At this the camera swoops to the front of his face, as he is completely transformed into a devil.

Here we have an image of evil for its own sake, something that does not merely threaten the characters on the screen but threatens the audience as well. This evil, we’re told, exists in the real world, and the villains in these films do not feel like cartoons. They’re something real, something threatening, something horrible.

I’ve written in another post about the darkness at the center of Fantasia, so I’ll touch briefly on Bambi and Dumbo, each of which paints life in ugly and frightening terms. Dumbo is taken away from his mother after she spanks a teenager who pulls on her son’s enormous ears—and the other elephants shun him because of his supposed disability. Then there’s the “Baby Mine” scene, in which Jumbo, locked in the “mad elephant” trailer, can only touch her son’s trunk with her own through the bars of her cage. (No wonder Dumbo gets loaded afterwards!) And I doubt I need to talk about the senseless killing of Bambi’s mother—no doubt the source of nightmares for many a young Disney fan—though I’ll point out that his father also refuses to take care of him.

After the war, though, things changed. The bad guys got more cartoonish and funnier, and even as the animation grew more technically sophisticated, the dread disappeared. We’re not the slightest bit afraid of the buffoonish Captain Hook and his even dumber sidekick, Smee. And when villains are more realistic, like Lady Tremaine from Cinderella, we don’t feel personally threatened. We may feel for Cinderella, but there’s no sense that we’re next.

About the only exception I can think of in the period from 1945 to 1989 is the rat from Lady and the Tramp, which clearly sends the message that children—and adults, for that matter—shouldn’t sleep easy, for fear something will eat them alive, bite by bite.

The much-vaunted Disney renaissance of the ‘90s moved closer to the original vision of the films, but not all the way. What we see in the ‘90s films (and I’m including 1989’s The Little Mermaid in that list) is a sort of cartoon version of Shakespeare’s villains, as in Jafar/Iago or Hades/Macbeth. In the case of Mermaid’s Ursula, we actually get a cartoon version of a Shakespearean actress, a washed-up old hack with little use in the modern world. The exception is The Hunchback of Notre Dame’s humorless murderer, Frollo—that’s Disney’s darkest film, even if it tacks a happy ending onto the original novel.

So the kind of existential dread embodied in the wicked queen or Stromboli has been absent from Disney films since 1945. This post is getting long, so I’ll talk about how Pixar has brought angst back in a later post.

Friday, June 5, 2009

Friday Links

Monday, June 1, 2009

Two and a Half Reasons Why I Am Not a Tillichian

When pressed as to why he so loves Karl Barth, Tom Marshfield, the protagonist of John Updike’s A Month of Sundays, replies in semi-aesthetic terms that “All I know is when I read Tillich and Bultmann I’m drowning. Reading Barth gives me air I can breathe.” I can agree with this statement, even though I can agree with almost nothing else Marshfield says. Despite the supposed difficulty of Barth’s prose (a claim made mostly, in my experience, by people who have never really read him), his theology soars where Tillich’s lags.

But since style is not a great reason to prefer one theologian over another, I will present two other, more legitimate, reasons I don’t care for Tillich, as much as I might admire him. Keep in mind that I have read only the first volume of his Systematic Theology—I may find something in the remaining two volumes to change my mind.

1) Tillich’s formulation of God as the Ground of All Being seems to exclude the possibility of His personality.

This is probably the most famous sound bite of Tillich’s theology, and we find it very early on in the Systematic Theology: “The object of theology is what concerns us ultimately,” and, taking it further, “Our ultimate concern is that which determines our being or not-being.” I am sympathetic to this viewpoint; it has a great deal of poetry to it, and even some biblical backing. After all, God’s famous self-declaration—“I am who I am” (Exodus 3:14, NASB)—could be translated or interpreted as “I am I am,” that is, “I am Being itself.” I’m not a Hebrew scholar and thus can’t say whether this formulation is linguistically acceptable, but it is poetically compelling, at least.

It makes sense on an existential level, as well. Augustine famously formulates sin as a void, as nothingness, as that which is not-God or not-good. Once Sartre comes into the picture, that which is not Nothingness is Being. Tillich claims that everything finite is threatened with nonbeing—this is where anxiety comes in. If God is not to be anxious, He must not be threatened with nonbeing; that is, He must be Being itself.

The problem comes when you attempt to apply God’s traditional characteristic of personality to His Tillichian status as Being. Tillich is in fact clear that God does not as such exist, that He transcends such words. When it comes to issues of personality, Tillich is frustratingly evasive, bringing the subject up before dismissing it without actually telling us what he thinks.

But logically (and it’s fair to proceed logically with Tillich, as we shall see in a moment), I don’t think you can hold to both God as existence itself and as a specific person, particularly not once Christ enters the picture. The traditional mystery of the Incarnation is that God now exists both everywhere and in one specific place—the divine substance is divided into two personalities but maintains its structural unity.

For Tillich, however, the Incarnation must result in something without personality somehow gaining it—God the Fabric of Existence implants its essence into Jesus of Nazareth. The differences between this and the orthodox view are striking and important, and Tillich’s deviation results in structural damage to the hypostatic union. Again, it’s hard to tell exactly what he believes on this topic, but when he says that “Jesus of Nazareth is the medium of the final revelation because he sacrifices himself completely to Jesus as the Christ” (1.136), it sounds an awful lot as though he’s suggesting that Jesus was all God and that He had to repudiate the human side of Himself.

The beautiful poetry of Tillich’s theology, then, can introduce some very serious problems that I don’t imagine most (relatively) conservative Christians are going to be willing to swallow.

2) Tillich attempts to bind God to reason.

I am not a complete fideist, but I do think the role of reason is in the final analysis limited in the Christian faith. Think of Dante’s Virgil, who represents (among many other things, no doubt) human reason and achievement—he’s able to bring our poet through Hell and Purgatory, but when the time comes for Dante to actually enter into the divine presence, he has to jettison his guide and replace him with a new, heavenly one. The message is clear: Reason leads you to the cliff, but then you must make your own jump, receiving a higher reason in return.

That being said, I don’t think God is in any way bound to our conceptions of reason, any more than He is bound to our conceptions of time. I came under a great deal of fire for this at my religious college. The popular question in my philosophy classes was, “Can God make a square circle?” My answer: “Sure.” My interlocutor would then ask what one would look like, to which I could only reply: “How should I know?”

That’s because reason rules the roost in this universe. But God does not exist within the confines of this universe and so the possibility exists that He could operate outside the constraints of what we call reason. That we can’t imagine what this would look like makes no difference at all, since (a) we do live in this universe and thus within reason and (b) we can’t imagine an awful lot of things that we hold to be true, such as what eternity looks and feels like.

This sounds a bit like a masturbatory philosophical problem—and maybe it is, except that the split reveals something about the people on either side of it. To really believe that God is sovereign, to believe that nothing limits His freedom, is to believe that nothing is impossible for Him. A theology that attempts to contain God within reason—even to hold Him to the law of non-contradiction—seems to me to be a theology that at the very least de-emphasizes His sovereignty.

Tillich, feeding so much on traditional 19th-century liberal theology even as he reacts to it, makes just such a move. After the lengthy introduction to Systematic Theology, Tillich devotes the first chapter to what reason is and how it relates to Christian faith. He affirms the lower/higher reason split but goes even further, subjecting God to the laws of the universe at all times. (It is for this reason, too, that he disbelieves in miracles, renaming them signs and wonders and denying their supernatural elements.)

God must operate within reason because of reason’s role as logos. If God is, to beat a dead horse, the Ground of All Being, then He is Himself the logos of the world and thus cannot violate that logos, cannot go against Himself. This sounds all right, but upon closer inspection it ties God completely to the world as the world—not only do we have no hope of transcending it, but neither does He. That’s disturbing to me.

So both of my problems come from Tillich’s most famous formulation, that of God as Being itself, and both come to some extent from a refusal to allow God to be both Being and a being. I’m wildly curious as to whether or not anyone reading this has come up with or heard of a way to make that happen. I love the poetry of Tillich’s idea—but I don’t like what appear to be its necessary consequences. I’ll stick with Barth for now.