Let it go and so to fade away

24 03 2014

I’ve been circling around and around this post by TNC; still not thinking in straight lines.

Scattered bits: the bad faith of American triumphalism, of progressivism (as Whig history); the shock of my students when I speak plainly about white supremacy; how it is harder for me to speak plainly of male supremacy (/patriarchy?); how white supremacy doesn’t just hurt black people; how male supremacy doesn’t just hurt female people.

And then the posts on waning Christendom in the US, on the erosion of religious structures, what it all means. More circling.

But this: to look to God is to look away, that religious belief seems to me a form of alienation, a scrim between oneself and the world.

Of course, to the believer, it is I who am alienated.

How any of this relates to kenosis, I don’t know.

And through a side door: we carry our troubles with us. If I have a morality, it is that we should carry our troubles with us. We have to learn how to carry them, so they trouble us less, and when memory is enough.

This is one way to find out who we are.

The troubles are ours; they can’t be given up to God without giving up ourselves.

But then, that might be the point. To some.

I’ll try to think better, to gather these flyaway threads.





I turn to my computer like a friend

24 02 2014

This isn’t creepy at all:

Language, [Ray Kurzweil] believes, is the key to everything. “And my project is ultimately to base search on really understanding what the language means. When you write an article you’re not creating an interesting collection of words. You have something to say and Google is devoted to intelligently organising and processing the world’s information. The message in your article is information, and the computers are not picking up on that. So we would like to actually have the computers read. We want them to read everything on the web and every page of every book, then be able to engage in intelligent dialogue with the user to be able to answer their questions.”

Excellent.

Google will know the answer to your question before you have asked it, he says. It will have read every email you’ve ever written, every document, every idle thought you’ve ever tapped into a search-engine box. It will know you better than your intimate partner does. Better, perhaps, than even yourself.

Nope, not the least bit creepy.

Or it would be if it weren’t horseshit.

Yeah, yeah—“Computers are on the threshold of reading and understanding the semantic content of a language, but not quite at human levels. But since they can read a million times more material than humans they can make up for that with quantity.”—but brute force isn’t always for the win. And a bit of code which allows a computer to understand the documents it scans doesn’t mean that computer will have attained human understanding.

It’s not that I doubt computers can learn in some sense of the word, that they can incorporate algorithms and heuristics which will allow them to attain some kind of understanding of what they learn; I don’t doubt that computer understanding is possible.

It’s just not clear that computer understanding is comparable to human understanding, not least because it’s unclear what human understanding is, and across time and space, becomes.

Human understanding may also incorporate algorithms and heuristics, but I don’t know that it can be reduced to that. It is fragile and unstable and prone to break down, and even when we think we understand, well, maybe we don’t.

And can I mention disagreement in understanding?

Ray Kurzweil is, as the Observer writer Carole Cadwalladr notes, a “techno-optimist”, someone who believes tech can turn us all into bionic women and six million dollar men (Better. Stronger. Faster.).

As someone who wears glasses, uses the elevator to trundle my overstuffed laundry bag down a couple of floors, and likes to sit back and watch Leverage on my computer, I ain’t anti-tech, far from it.

But I am a skeptic. Especially of the idea that tech will allow us to escape the human condition.

Maybe someday we will no longer be human, we will be immortal or transformed or perhaps we will truly have figured out some way to transcend the immanent. Perhaps someday we will escape being—we will no longer be.

Actually, we already can achieve that: it’s called dying. But I don’t think that’s what Kurzweil has in mind.

~~~

h/t HuffPo






What if God was one of us

5 11 2013

I’m one of those don’t-hate-religion non-religious types. Most of the time.

And then I read smug shit like this:

To a person, the new atheists hold that God is some being in the world, the maximum instance, if you want, of the category of “being.” But this is precisely what Aquinas and serious thinkers in all of the great theistic traditions hold that God is not. Thomas explicitly states that God is not in any genus, including that most generic genus of all, namely being. He is not one thing or individual — however supreme — among many. Rather, God is, in Aquinas’s pithy Latin phrase, ipsum esse subsistens, the sheer act of being itself.

I’m all about being, so you’d think I’d be all over this. You’d be wrong.

Hell, I’ve read Heidegger, and even if I can’t stop myself from muttering “Nazi gasbag” every time I pick him up, I do think he is worth picking up. It’s tough to talk being without talking nonsense, and while ol’ Martin (that “Nazi gasbag”) peddles his share of nonsense, he does also manage to make sense. Unlike Robert Barron.

God is not a supreme item within the universe or alongside of it; rather, God is the sheer ocean of being from whose fullness the universe in its entirety exists.

Actually, this does make a kind of sense: God is everything, such that without God, there is nothing. It’s a handy bit of sleight-o-hand: How does one know God exists? Because without God, there would be nothing. Easy-peasy.

It’s not a bad tautology, as tautologies go, but, like Pascal’s wager or Lewis’s trilemma, it seeks to lock down not just the answer to a question, but the questions themselves. This is THE question, one is told, and no follow-ups and no other possible interpretations, which might lead to other possible responses, are allowed. No questioning the question.

Barron grants that science allows us to learn a great deal about our material reality. The problem, he says, is that these materials are themselves “contingent”, i.e., dependent upon another reality rather than being real in and of themselves. How does he know this? God-is-everything!

We are surrounded on all sides by things that exist but that don’t have to exist.

[…]

Now a moment’s meditation reveals that all of the conditioning elements that I mentioned are themselves, in similar ways, contingent. They don’t explain their existence any more than the computer does. Therefore, unless we permanently postpone the explanation, we have to come, by logical deduction, to some reality which is not contingent and whose very nature is to exist.

Um, no. Perhaps the explanation is that everything is contingent, nothing is necessary, and existence itself a kind of chance, nothing more.

Barron accuses skeptics of incuriosity and irrationality for not bothering with the question of why there is something rather than nothing, but not having an answer doesn’t mean the question isn’t asked; not all questions are contingent upon an answer.

As for Why should the universe exist at all? Who says anything about “should”? It does, for now, and for a while longer. If it someday ends, it doesn’t mean it never existed at all.

Same goes for us. We don’t have to be here, and yet we are, for now. So what are we to do with this chance?

That, to me, is the real question, and wonder, of being.





I won’t recall the names and places of each sad occasion

9 07 2013

Twenty-five years, a quarter century, almost half of my life—so far away, in so many ways.

I’ve mentioned before that I no longer recognize the desperately self-destructive person I once was, that on those rare occasions I read journal entries from later in my career as a failed suicide I think Jesus, I wrote this? Who writes this?

For twenty years, a fifth of a century, almost half of my life, I berated myself for my life, and in the midst of that fifth I tried, again, and failed, again, to end it. It would be over a decade before I would, finally, leave it all behind.

It’s been over a decade since I left it all behind.

These swaths of time, overlapping and flapping against one another, floating back into and obscuring past versions of myself.

This is the story of everyone’s life, I have to remind myself. Does anyone recognize who they were, then? Who sustains the same line all the way through?

Still, some lines are sustained, even if fictionally. There are pieces of memory I pick up and thread onto the knotted string I call my life, but I can barely remember who I tried to erase, and what remains are these odd hard bits that nonetheless are unsettlingly warm in my hand.

Over a decade since I left it all behind, I cannot hold these strange remains for long without fear I will string them all together and back to that long dissolve. And so before I am too warmed I shake my hand and scatter those remains.

And so there are some ways in which I cannot know, today, who I was before.

This is not a tragedy; this may not even be a loss. I wish I could know, nonetheless.





We might as well try: You make the best of what’s still around

15 07 2012

We’re a mess.

You want to know why social scientists like models and abstractions and formalisms? It’s because we’re a mess, and it’s tough to know where and how to begin in a mess; impose order, and all of a sudden those messes reveal a clean kind of meaning, shorn of stray bits of paper and belly lint and someone suddenly slamming on the brakes for no apparent reason.

This isn’t a knock on modeling; I’m a big fan of models precisely because they bring clarity, allow us to see patterns where, before, there was only mess. But when using models you can never forget that they are, in fact, models, a cleaned-up and edited version of reality, not reality itself.* Models are great for understanding a particular thing about a general phenomenon or a number of things about a particular phenomenon, but they can be either stretched out of shape trying to explain too much or so stingy in what they take in that they explain nothing at all.**

Anyway, I don’t want to get too bogged down*** in measurement or even conscious interpretation, especially since I’m trying to figure out what comes before said measurement or conscious interpretation.

Which is to say, the mess.

If I don’t have a theory or a model for this mess, I do have a direction—find the damned-near-indisputably necessary bits of human being.

Damned-near-indisputably-necessary bit 1: We are mortal beings.

We’re born, we live, we die. No one enters life without having been born****, and no one stays forever. Whether there is something before or after life is disputed, as is the significance of that extra-life existence, but, today, every yesterday, and for the foreseeable future, our mortality is sufficiently indisputable as to be called a fact.

D-n-i-n bit 2: We are biological beings.

This goes along with our mortality: as far as is known, everything biological is of necessity mortal. But this has a particular meaning beyond our mortality, since as biological beings we have particular needs required to keep that biology working. We need food and water and protection from both the elements and predators. We can become ill, get better; we break, we mend; we live as physical beings within a particular environment and if we are not able to meet our biological needs within that environment, we either move or die.

D-n-i-n bit 3: We are social beings.

Some people dispute this; those people should be ignored.

This is not about a kumbaya vision of cooperative harmony, but a recognition that we are all helpless at the beginning of life (and many at the end); if we are not cared for during that extended period of helplessness, we die.

Furthermore, given that that period is so extended—ten years, minimum—the process of said care results in the child learning the basics of species-being, that is, language, which in turn allows one to interact with others of our kind.

I want to say more about the centrality of language to human sociality, but that would take me into less-than-indisputably-necessary bits, and the point in this post, at least, is to try to nail down something about us which any model or theory has to take into account if it is worth considering at all.

Do you remember my bit on epistemology-ontology-the practical? Of course you do! Well, I’ve hopped over the epistemological and landed us in the ontological, or, er, the proto-ontological(?!): If I won’t rely on FOUNDATIONS, then I have to at least tack a few boards together before we swing out over the abyss or float down the river or whatever metaphor doesn’t give you vertigo or make you seasick.

Where was I? Yes, the basics: We’re mortal, we’re biological, we’re social.

We’re also other things—important other things, which I’ll tack on in later posts—but I wanted to reiterate those basics on which I not only build my interpretations and theories, but upon which all interpretations and theories about human being should be built. Other people will legitimately tack on other things (that mess gives us a LOT to choose from) and swing or float in different directions, but if they start with such nonsense as “assume a can opener”, well, then they’re engaging in social-science fiction.

I got nothin’ against science fiction—I’m a fan, actually—but if you want to claim you’re saying something “real” about the world, then you better damned well deal with the damned-near-indisputable realities of this world, and our being human in it.

________

*Well, okay, this gets epistemologically tricky, insofar as the lens through which one views a phenomenon affects the phenomenon itself. Reality is never just “there”; it’s always and unavoidably worked on. But there is a distinction between unavoidable, oft-unconscious interpretation and the conscious imposition of a schema, which is what I’m trying to get at, here. The distinction itself matters, and deserves further investigation—but not in this post.

**This goes for theory, as well, although theory tends to err on the side of trying to do too much rather than too little; a theory which does too little tends to lose its status as ‘theory’.

***That’s why this stuff is in the notes rather than the body. I’m one of those who thinks you ought to be able to skip the footnotes without missing anything important—notes are for sources and elaborations on basic points, not the introduction of novel material—so imma gonna just drop the whole shebang for now.

****What if we ever manage to figure out how to hatch a person or otherwise build one in a lab? What if we figure out how to live forever? Well, then the conditions of existence would have changed and we’d have to figure out what those new conditions mean. But we ain’t there yet.





We might as well try: the prelude

11 07 2012

I should just walk away.

The problem with being a theorist—with being a lazy theorist—is that one is supposed to chase down every last bit of an argument, and that if one doesn’t wish to do so, one is left wondering whether this is because the argument doesn’t deserve the effort or because one is lazy.

I’ll take “Both” for two hundred, Alex.

There is a part of me that does think it worthwhile to scatter the arid bits of libertarianism to the wind, and another part that says, Why bother, it’s a shit theory promulgated largely by twitchy obsessives and freshwater economists, so why not leave the whole mess to the key-pounders* on the left and Paul Krugman?

(*This is not a criticism: Go go go!)

I’m certainly heading toward that conclusion, but there’s still a part of me that berates myself for not doing the work of shredding such terrible theory: Yeah, it is a shit theory—not even properly a theory—but I am also lazy and there is something to be gained in the meticulous dismantling of pernicious ideas.

Yet even as I carry that guilt-bag with me toward the off-ramp, I’m wondering if the best way to lighten my load is simply to swap it for a kit-bag full of stuff I can actually use.

Okay, now I’m going to lay that whimpering metaphor aside and get to the point: Why not talk about what does matter, and what ought to be taken into account in any discussion of politics, economics, and society?

I joked the other day that the problem with letting others go first is that they get to set the terms; why not set my own terms?

I’m disgusted with libertarianism because it bears almost no relation to humans or human being; isn’t this the place to begin? And so I will—but not until tomorrow.

Lazy, remember?





They tell you not to hang around and learn what life’s about

4 06 2012

Another late-late, quick-quick:

Started my summer class last week, and man, it was a good start. A small class, but lively, and ready to talk about anything—crucial when you’re stuffed in a room together for 2 1/2 hours at a pop.

(I give them my standard warning: I do love the sound of my own voice, but ye gods, that’s too much even for me. If y’all don’t participate, we’re all going to want to throw ourselves out the window. . . .)

Anyway, what I wanted to mention was their reaction to my standard epistemological-ontological-practical mini-lecture: they could not get enough of it; specifically, they could not get enough of the ontological piece.

Only one student had any familiarity with the word itself (which, for the purposes of this course, I define as being or being-in-the-world), but they keyed in immediately on the meaning of the concept, especially after I mentioned that while most folks don’t think much, if ever, about epistemological matters, and while most of us live day-to-day at the practical level, the ontological does intrude. Moments of crisis or transition, I observed, are when we really question ourselves, who we are and what we are doing.

And with that, they were off, offering all kinds of insights about being and how they’ve handled their own experiences with the question of being. They kept going and going and it’s quite likely almost the entire class would have stayed past the third hour had I not signaled that it was time to go.

And even then, that wasn’t enough: They came up one by one to say something more, anything more, to keep the conversation going. One man, probably around my age, came up to me, eyes wide, and said, I never heard of that word before, but I know exactly what that is. I didn’t know there was a word for that, but I know it, I’ve lived it. He was, simply, stunned.

I joke that my pedagogical mantra is I aim to trouble you, but, honestly, this is the best kind of trouble.

This is why I teach.





Where was I?

29 12 2011

No work, not enough work, too much work, work.

That’s been the last six months. Nowhere near enough money, even with too much work (really blew it on this last freelance job—shoulda charged double), but now things seem to be evening out: three courses for the spring, half-time admin work for a local-international organization.

I have some idea of what I’ll be doing with the teaching, no clue on what exactly I need to do with the admin work, but hey, I’ve gone from clueless to clue-full before.

~~~

Hey, I’ve got a few new readers! HI!

Thanks for poking your head through my window! I’ll try not to slam it down on your noggin’. . . .

(And yes, I’ll return the favor and check out your blogs as well, now that I have the time to do so.)

~~~

I really hate not knowing things.

The problem, of course, is that the more I learn, the more I learn what I don’t know. Frustrating, that.

And embarrassing. Before I embarked on my jaunt through the European medieval period, I knew nothing about this history. Nothing.

Oh, something about the break with the Eastern Church in the 11th century, and Luther in the 1500s, but I couldn’t have told you the difference between the Renaissance and the Enlightenment, or between the various emperors and the pope.

Yeah, it was bad.

So now I’m learning stuff (yay!), but I’m running up against the parameters that I had initially set for this project. It was conceived as an investigation of intellectual history, with not much room for social (writ large) history, but I’m too much of a materialist to dismiss the conditions (see below) under which these ideas were generated and spread.

This is a very long way of thanking petefrombaltimore for his suggestions in reading.

Yes, a project like this can sprawl out over any boundaries set—hence my initial attempts at capturing only intellectual history—but sometimes the most interesting bits are discovered in the spillage.

Anyway, I just finished Peter Gay’s The Enlightenment and am now on Diarmaid MacCulloch’s The Reformation; I may then mix in some close-up histories, as well as tackling some of the primary sources.

Can’t say I’ve yet gotten anything solid on the late-margins of modernity by poking around in the early margins, but I am still poking along.

~~~

Got my first round of applause for teaching in. . . ever?

It was for my bioethics course, a class which was terrible the first time I taught it (at another university), pretty good the first time I taught it here, and now, well, good. I’ll continue to tweak it as I go along, but I’ve got a solid set-up which should hold for at least another few semesters.

It’s much easier to keep teaching the same thing over and over—all that prep work is already done—but I get antsy. I don’t think there’s a perfect syllabus or course (see: not a Platonist), so after a certain number of repetitions I overhaul the course to try to capture something missing from the previous go-around.

It’s not always better, and almost always requires adjustment, but it keeps me thinking.

Anyway, the applause.

It was common at UW-Madison to applaud professors at the end of the semester. Most of my classes were large lectures, so the performative aspect of teaching was more apparent than in seminars, but classes were similarly large at Minnesota, and I don’t recall the students applauding professors there.

It’s nice, both to applaud and be applauded. I liked that I could show my appreciation for a good professor (or lack thereof with tepid clapping); it seemed to signal that there was something more going on in that lecture hall than a contractual transmission of information from instructor to user.

The best professors gave us knowledge far and beyond that necessary for a good grade: they gave us an appreciation for the wonder of knowing.

I don’t know if that’s what my students were applauding. I work hard to tamp down my urge to overwhelm them with my words—as the person who constructs the syllabus and leads the discussions, I already have great, if indirect, influence on how they approach the subject—but on this last day of class I gave them a concentrated shot of my approach to bioethics.

I started with a truncated version of the epistemology/ontology/practical lecture, zeroing in on the significance of being (or Being, if you please) in one’s understanding of practical ethics. I then moved on to Hannah Arendt’s distinction between human nature and the human condition, namely, that while we cannot with any certainty know our nature, we can approach our condition.

And the most basic of our conditions are that we are biological beings, we are social beings, and we are mortal beings. We may be more than this, I noted (spiritual, philosophical, etc.), but we are damned-near-incontestably conditioned by our biology, our relationships to others, and the fact that we are born and will some day die.

This matters to bioethics, I argued, because any ethics which does not take account of these conditions cannot be of any practical worth.

(You might think that this would be so obvious as to be banal, but it is not.)

I can’t tell you that consequentialism or deontological ethics or casuistry or any other way is the correct approach, I said. We need standards to keep us from justification-by-convenience, to force a critical appraisal of our actions, but, given our conditions, we have to allow deviation from those standards: the rules are to serve the human, not the human, the rules.

Finally, I said, circling back around, this is where I center my ethics: on the matter of human being. What makes us who we are, and what could we become? It’s not that our abilities have to be unique among species, but we should think about ourselves, as humans, in how we approach one another.

We don’t have to be heroes, I observed. It’s not about pulling someone out of a burning car or tackling the bad guy or dodging bullets; it’s about recognizing one another as humans.

And then I told the story of a group of people in a small town in Wisconsin who decided to hold a funeral for an unknown woman who had been found, murdered, in their town. She wasn’t one of their own, and would never know what had been done for her, but through the donations of the funeral home and money raised for a plot and marker, and in the service at the cemetery, these people did in fact claim her as one of their own.

There was nothing heroic in this ordinary act of burying the dead, but by taking care of this dead woman’s body, they recognized her as one of them; they demonstrated their humanity in their recognition of her humanity.

We can take care of one another, I said. Our ethics ought to be centered on how we take care of one another.

They seemed to like that. I didn’t expect the applause—I thought I had gone too far—but even if I had, they didn’t seem to mind.

It was nice.

~~~

As a coda, I’ve consolidated my earwig approach to teaching (“I want this stuff to bother you for the rest of your lives”) into a line stolen and adapted from Serenity:

I aim to trouble you.

It’s not me, really, who can do this, but I can bring the trouble of politics and theory and ethics to my students, and hope that it disturbs them a good long time.





Question of the day: hate and love

25 09 2010

Consider the relatively ubiquitous phrase, oft deployed by religious folk to describe their approach to queer folk and their sexuality:

‘Hate the sin, love the sinner.’

Yeah, it grits in my teeth, and not just when it’s deployed by those who clearly don’t mean it: even for those who are sincere, it misses the point.

Consider: ‘Hate the belief, love the believer.’

Again, a variation of this is offered with regard to Christian outreach to/evangelization of Muslims and other heretics, apostates, and unbelievers. Again, too glib.

How would those who (sincerely) use this sentiment react if such a sentiment were deployed against them?

Seriously, I’m askin’.





The expulsion from the Garden of Eden is the beginning of life as I know it

19 09 2010

I’m a little fuzzy on the whole sin thing.

Yes, something about disobeying God, with apples, snakes, naked people, banishment, knowledge. . . really, if I were religious, I’d surely find this all fascinating, but as I’m not, well, it just seems curious to me.

But one thing I do like about the insistence on the sinfulness of humans is that those expounding on this corruption tend to see it as all-inclusive: Everyone is a sinner, everyone needs grace.

Handy to remember that.

I’d circled this issue in the last two posts, in terms of Christians and TeePers behaving badly, but one of the things I was too angry (!) to deal with in the Wars-of-Religion post and too politically-minded to deal with in confronting Howard Beale is my basic belief that almost all of us carry almost all of the possible characteristics any human being can demonstrate. The proportions may vary, sure, but outside of the exceptional few, I think we’re all capable of the same basic range of thoughts, feelings, and behaviors.

This doesn’t make us all the same: there are clearly differences in the mix, as well as what each of us brings to that mix in terms of conscious effort and habituation.

Oh, crap, I’m getting too windy.

Lemme put it this way: I didn’t post the extensive quote about rampaging Christians (in response to Peretz’s claim that ‘Muslim life is cheap, especially to Muslims’) as a way of saying See! It’s not just Muslims! Christians are bad, too! Boos, all around! No, the point—which I didn’t explicitly make—is that people behaving violently in the name of religion is unsurprising, given that people are capable of behaving violently.

Yes, there are belief systems which explicitly forbid violence, but the existence of pacifist belief systems proves the point: If the adherents weren’t themselves capable of violence and aggression, there’d be little need for a system to discipline them.

Again, another capacity of humans: to restrain ourselves from doing all that we can possibly do.

But why restrain or indulge? What leads Christians in one period to slaughter one another and non-Christians and in another to tolerate and even respect them? What leads Muslims to laud or condemn conquest? What makes rightists or leftists righteously angry and what will they do with that righteousness and anger?

Ask the question instead of assuming the answer.

It’s too easy to say Christians are peaceful and Muslims aggressive (or vice versa), or rightists are patriotic and leftists traitors (or vice versa), especially when the historical evidence indicates otherwise. Nor is it enough to say that x-behavior isn’t representative of true belief, especially when—again—evidence indicates that x-behavior in another time or place was treated as the sine qua non of true belief.

Do you feel the breeze? Sorry, getting windy again.

I just don’t think we humans are better or worse than we were before, nor that we can even define better or worse outside of a particular historical context. Best simply to try to understand what we mean by these terms, and to recognize what we are capable of.

For better and for worse.

***

Addendum: Perhaps this is also the case for other creatures, and how we act towards and respond to them.