It’s too late, baby

12 01 2015

Not only am I lazy, I am also a bossy broad.

Bossy and lazy: and I wonder why I don’t date!

Anyway, Amy Klein writes in Aeon about her reluctance to tell her 42-year-old friend that it’s too late to begin thinking about freezing her eggs:

What I really want to tell my friend is that if she is serious about having a baby, her best bet would be to go out to the nearest bar and hook up with a stranger – during her 36-hour ovulation window, of course. But I won’t tell her to sleep with a random guy, I won’t ask if she ovulates regularly, nor will I say anything else about the state of her ticking – nearly stopped – biological clock: it’s too delicate a subject.

To which I can only say: if someone brings up her ovaries to me, then I’ma gonna go ahead and tell her that thinking and freezing are not going to get the job done—although I’d recommend a sperm bank rather than the local pub.

Will I also tell her that chances are she’s already infertile? That would depend on the course of the conversation, and, in any case, I’d tell her to talk to her OB-GYN.

Klein is right, however, that most women don’t know that, for most of them, the fertility window is closed by the early forties, and that it begins closing in the late twenties/early thirties. Fertility rates do decline throughout the thirties (entering a period of greater variability in the late thirties), but, again, after 40 the decline is precipitous.

And IVF won’t help—not if you didn’t create embryos before entering your fifth decade. Yes, some women do conceive their own children throughout their forties, but, as Klein points out, all of those well-known women birthin’ babies at 48 or 50 are either using embryos frozen some time ago or someone else’s eggs. Liza Mundy has more about this in her terrific book, Everything Conceivable:

Studies show that among ART [assisted reproductive technologies] patients who are forty years old and using their own eggs, there is a 25 percent chance of pregnancy over the course of three IVF cycles. The chances diminish to around 18 percent at forty-one and forty-two, 10 percent at forty-three, and zero at forty-six.

In 2005, a group of doctors at Cornell surveyed IVF patients over forty-five who had attempted to conceive using their own eggs. Among women between forty-six and forty-nine, not one got pregnant using her own eggs. (p. 42)

And, it should be noted, the odds are even worse for poorer and non-insured women of every age, who may have had untreated medical problems which interfere with or nullify their fertility.

Mundy and Klein both note that a previous attempt by the American Society for Reproductive Medicine to raise awareness that the biological clock only has so many ticks in its tocks caused controversy among (hangs her head in sorrow) some feminist groups (well, the National Organization for Women), for the “pressure” such information would place on women, making them “anxious about their bodies and guilty about their choices”.

(Do I mention here that loooooong ago I was a member of the Sheboygan chapter of NOW? Those women, who fought to bring Planned Parenthood to the county, who had been harassed and threatened, would have hooted then-prez Kim Gandy out of the room for thinking they would have been afraid of a little information.)

Klein quotes Naomi Cahn, author of Test Tube Families, who notes that

‘the politics of reproductive technology are deeply intertwined with the politics of reproduction’ but ‘although the reproductive rights issue has a long feminist genealogy, infertility does not’. Discussion of infertility is threatening to feminists on two levels, she contends: ‘First, it reinforces the importance of motherhood in women’s lives, and second, the spectre of infertility reinforces the difficulty of women’s “having it all”.’

That is not any reason, however, not to spread the word as far and wide as possible:

‘Shunning that information about the relationship between fertility and age, however, ignores biological facts and, ultimately, does a disservice to women both in terms of approaching their own fertility and in providing the legal structure necessary to provide meaning to reproductive choice,’ writes Cahn.

. . .

‘It is only with this information that reproductive choice becomes a meaningful concept,’ Cahn writes. ‘Choice cannot mean only legal control over the means not to have a baby, but must include legal control over the means to have a baby.’


It is sometimes pointed out that it is unfair that men have no legal say in whether a woman chooses to continue or to end a pregnancy—and maybe it is, but it’s also how it is. Similarly, maybe it’s unfair that men remain fertile throughout their lives but women do not—and maybe it is, but it’s also how it is.

So better to say how it is (and the earlier the better) than pretend otherwise, so women have the knowledge, and the time, to make the choices that make sense for them.

And if we’ve got to be a little bossy to get the word out, well, then that’s how it is, too.

Just let the red rain splash you

9 12 2014

The executive summary of the Senate Select Committee on Intelligence torture report.

16 absolutely outrageous abuses detailed in the CIA torture report, as outlined by Dylan Matthews.

I was naïve, years ago, in my outrage at the torture committed by the CIA. Yes, the US had enabled torturers (see: School of the Americas) and supported regimes which tortured (see: US domestic surveillance and foreign policy), but somehow, the notion that torture was committed by US government agents seemed over the line in a way that merely enabling and supporting had not.

I don’t know, maybe US-applied torture was over the line in a way US-enabled/supported torture was not, and busting righteously through it busted something fundamental in our foreign policy.

But given, say, the Sand Creek and Marias massacres amongst the general policy of “land clearing” and Indian removal—policies directed by US politicians and agents—wasn’t it a bit precious to decry this late unpleasantness?

Naïveté, I wrote above. No: ignorance. I’d studied (and protested) 20th-century US foreign policy and ignored its 19th-century version, the one directed largely against the indigenous people whose former lands now make up the mid- and western United States.

Ta-Nehisi Coates recently wrote that paeans to nonviolence are risible in their ignorance: “Taken together, property damage and looting have been the most effective tools of social progress for white people in America.” Yes.

A country born in theft and violence—unexceptional in the birth of nation-states—and I somehow managed not to know what, precisely, that birth meant.

I’m rambling, avoiding saying directly what I mean to say: there will be no accountability for torture. Some argue for pardoning those involved as a way to arrive at truth, that by letting go the threat of criminal charges we (the people) can finally learn what crimes were committed, and officially, presidentially, recognize that crimes were committed.

It is doubtful we will get even that.

Still, we have the torture report, and (some) crimes documented which were only previously suspected. Good, knowledge is good.

But then what? Knowledge of torture committed is not sufficient inoculation against torture being committed.

Coming clean will not make us clean.

Boom boom boom

18 09 2014

I am a mine-layer.

Some days—most days?—the most I can accomplish is to fling out enough mines that at least some will burrow into rather than merely roll off of my students.

I’m a pretty good teacher—I’d put myself in the B, B+ range—but I think even the best teachers fail to impart whole systems of thought or history or formulae to their students. One might be able to lay out the sets and subsets, the permutations and exemplars and exceptions, in as straightforward a manner as possible, noting what syncs up with which and where it all falls apart, but beyond the assignment or the test or the essay, the knowledge dissipates.

This isn’t their fault—the students’, I mean—nor is it the teacher’s. Most of the material covered in a college course can only be fully taken in through repetition, and for many students in many classes, it’s one-and-done: the ticking off of requirements on their way to a degree. What they remember may be courses in their major, and that’s because they run into the same concepts and theories and studies over and over again.

If students are able to see the connections amongst ideas laid out in a 3- or 4-hundred-level course in their field, it likely has less to do with that particular professor than with the accumulation of bits from the 100- and 200-level courses.

So what to do when teaching a 100- or 200-level class, or even an advanced class which is supported by no major?

Lay mines. Try to expose the students to concepts they are likely to encounter again, so that the next time they run across “Aristotle” or “Arendt” or “deontological ethics”, that little bomb will go off and they’ll say to themselves, Hey, I recognize this! and maybe not feel so estranged from what had seemed strange.

So many metaphors could be used here: taking a student down a path and pointing out enough landmarks so that when they traipse down it again, they’ll say Hey! . . . , and feel more confident in their surroundings, more willing to push further on. Tossing out enough seeds in the hopes that a few take root, sprout. Or maybe repeated vaccinations, priming the immune system to respond when next encountering the invasive idea (tho’ there are clear limits to this last analogy insofar as the knowledge isn’t to be annihilated).

Maybe it’s different for professors at elite schools, with students who’ve already been exposed to and are comfortable with these ideas. Or maybe even at my CUNY school I’d find less mine-laying if I were to teach more advanced-level courses in my field.

But maybe not, or, at least, not the way I teach. Yes, I want them to perform well on tests and papers, but more than that, much more than that, I am greedy enough of their attention that I want them to remember this stuff for the rest of their lives.

I’d rather they get a B and be bothered for decades than get an A and let it all go.

So this might explain why I’m partial to the mine idea: because it allows for the possibility of little bits of insight to explode whenever the student strays over forgotten territory. And if those mines are powerful enough and buried deep enough, there’s a chance those explosions might rearrange her intellectual landscape, might change how she looks at the world.

And yeah, I like the idea of blowing their minds, too.

Blown backwards into the future

14 05 2014

Benjamin conjured history as an angel.

Let’s sit with that for a bit, as it’s a lovely sad conjuring.

There is no repair, not for the angel, not for us. Sad, perhaps, but not unbearably so.

There is also no going back, as that angel learned. If the past is an ocean, then history is diving in and bringing the bits and debris and life to the surface, to the present, to see what we’ve got. We can bring what’s down below to the surface and we can make sense of it, but it is our sense, a present sense. And the things themselves, especially the lives themselves, are changed for having been dragged from the deep.

Diving, digging, spelunking: all this bringing to the surface the bits and debris in an attempt to recreate life. History as simulacrum.

And the epochs and eras and moments? Those are the bits highlighted or strung together: the Renaissance or Scientific Revolution or Modernity or the Enlightenment. It gives us a way to see.

Usually, when I speak of seeing, I speak metaphorically. But I wanted literally to see where these different moments were in relation to one another, so I ran parallel timelines of European history—scientific, cultural, religious, political, trade—down sheets of paper taped in my hallway, then plotted out those moments.


This is an incomplete draft—I clearly need to allow more room on the final version—but it’s not hard to see how this moment was understood as the Italian Renaissance at its ripest.

Or here, as what we now call the Scientific Revolution gets underway:


These give me that bird’s eye view of the middle centuries of the last millennium; they also make me wonder what isn’t there, isn’t recorded in any of the texts I’m using.

What moments are still underground? And what stories will we tell if we ever unearth them?


And I know things now

7 05 2014

Modernity is dead is in a coma.

Okay, not modernity—modernity is still kickin’—but my medieval/modern project to suss out the beginnings of modernity, yeah, that’s on life support. I’ll probably never pull the plug, but the chances of recovery at this point are slim.

The main problem was that I never had a thesis. As a former post-modernist I was interested in the pre-mod: learning about the last great (Euro) transition might help me to make sense of what may or may not be another transitional moment.

And I learned a lot! I knew pitifully little about European history—couldn’t have told you the difference between the Renaissance and the Enlightenment, that’s how bad I was—and now I know something more. I’d now be comfortable positioning the Renaissance as the final flowering of the medieval era, arguing that the 16th and 17th centuries were the double-hinge between the medieval and the modern, that the Enlightenment was about the new moderns getting chesty, that Nietzsche crowbarred open the crack first noticed by the sophists, and that the medieval era in Europe did not truly end until the end of World War I.

None of these is a particularly novel observation. I make no pretense of expertise nor even much beyond a rudimentary working knowledge: there are still large gaps in my knowledge and large books to be read. And I will continue reading for a very long time.

But I don’t have a point to that reading beyond the knowledge itself. It’s possible that something at some point will present itself as a specific route to be followed, but right now, the past is an ocean, not a river.

That’s all right. I’m a fan of useless knowledge and wandering thoughts.

She blinded me with science

17 02 2014

When to let go and when to hang on?

This is one of the ways I’ve come to interpret various situations in life, big and small. I don’t know that there is ever a correct decision (tho’ I’ll probably make the wrong one), but one chooses, nonetheless.

Which is to say: I choose to hang on to the “science” in political science.

I didn’t always feel this way, and years ago used to emphasize that I was a political theorist, not a political scientist. This was partly due to honesty—I am trained in political theory—and partly to snobbery: I thought political theorists were somehow better than political scientists, what with their grubbing after data and trying to hide their “brute empiricism” behind incomprehensible statistical models.

Physics envy, I sniffed.

After a while the sniffiness faded, and as I drifted into bioethics, the intradisciplinary disputes faded as well. And as I drifted away from academia, it didn’t much matter anymore.

So why does it matter now?

Dmf dropped this comment after a recent post—

well “science” without repeatable results, falsifiability, and some ability to predict is what, social? lot’s of other good way to experiment/interact with the world other than science…

—and my first reaction was NO!

As I’ve previously mentioned, I don’t trust my first reactions precisely because they are so reactive, but in this case, with second thought, I’ma stick with it.

What dmf offers is the basic Popperian understanding of science, rooted in falsifiability and prediction, and requiring some sort of nomological deductivism. It is widespread in physics, and hewed to more or less in the other natural and biological sciences.

It’s a great model, powerful for understanding the regularities of non-quantum physics and, properly adjusted, for the biosciences, as well.

But do you see the problem?

What dmf describes is a method, one of a set of interpretations within the overall practice of science. It is not science itself.

There is a bit of risk in stating this, insofar as young-earth creationists, intelligent designers, and sundry other woo-sters like to claim the mantle of science as well. If I loose science from its most powerful method, aren’t I setting it up to be overrun by cranks and supernaturalists?


The key to dealing with them is to point out what they’re doing is bad science, which deserves neither respect in general nor class-time in particular. Let them aspire to be scientists; until they actually produce a knowledge which is recognizable as such by those in the field, let them be called failures.

Doing so allows one to get past the no-true-Scotsman problem (as, say, with the Utah chemists who insisted they produced cold fusion in a test tube: not not-scientists, but bad scientists), as well as to recognize that there is a history to science, and that what was good science in one time and place is not good in another.

That might create too much wriggle room for those who hold to Platonic notions of science, and, again, to those who worry that this could be used to argue for an “alternative” physics or chemistry or whatever. But arguing that x science is a practice with a history allows the practitioners of that science to state that those alternatives are bunk.

But back to me (always back to me. . . ).

I hold to the old notion of science as a particular kind of search for knowledge, and as knowledge itself. Because of that, I’m not willing to give up “science” to the natural scientists because those of us in the social sciences are also engaged in a particular kind of search for knowledge. That it is not the same kind of search for the same kind of knowledge does not make it not-knowledge, or not-science.

I can’t remember if it was Peter Winch or Roger Trigg who pointed out that the key to good science was to match the method to the subject: what works best in physics won’t necessarily work best in politics. The problem we in the social sciences have had is that our methods are neither as unified nor as powerful as those in the natural sciences, and that, yes, physics envy has meant that we’ve tried to import methods and ends which can be unsuitable for learning about our subjects.

So, yes, dmf, there are more ways of interacting with the world than with science. But there are also more ways of practicing science itself.

We just have to figure that out.

Of flesh and blood I’m made

16 01 2014

What is human?

I got into it with commenter The Wet One at TNC’s joint, who chided me not to, in effect, complicate straightforward matters. I responded that straightforward matters often are quite complicated.

In any case, he issued a specific challenge to claims I made regarding the variability of the human across time and space. This request was in response to this statement:

At one level, there is the matter of what counts as “reasonably concrete realities”; I think this varies across time and place.

Related to this is my disagreement with the contention that those outside of the norm have fallen “within the realm of the ‘human’ for all intents and purposes”. They most assuredly have not, and to the extent they do today is due to explicit efforts to change our understanding of the human.

Examples, he asked?

As one of the mods was getting ready to close the thread, I could only offer up the easiest one: questions over the status of embryos and fetuses.

Still, while I think that a reasonable response, it is also incomplete, insofar as it doesn’t get at what and who I was thinking of in writing that comment: people with disabilities.

“People with disabilities”: even that phrase isn’t enough, because “disability” itself isn’t necessarily the apt word. I had referred in an earlier comment to those whose morphology varied from the statistical norm; not all variations are disabilities in even the strictest sense.

In any case, when I went to my bookshelf to try to pull out specific, referenced, examples, I was stopped by that basic question which set off the whole debate: what is human?

Now, in asking that here I mean: how maximal an understanding of the human? Is to be human to be accorded a certain status and protection (“human rights”)? or is it more minimal, in the sense that one sees the other as kin of some sort, tho’ not necessarily of an equal sort?

Arendt argued for a minimalist sense when she noted there was nothing sacred in the “naked” [of the protections of the law] human, meaning that such status granted no particular privilege. That I both do and do not agree with this is the source of my estoppel.

Kuper in Genocide notes that dehumanization often precedes assault—which suggests that before one goes after the other, a kinship is recognized which must then be erased. But maybe not. I don’t know.

Is the human in the recognition? If you are akin to us (and we know that we are human), then we will grant such status (for whatever it’s worth) to you. We might still make distinctions amongst us as to who is superior/inferior, but still grant that an inferior human is still human. There’s something to that—something which I perhaps should have emphasized a bit more than I did in my initial go-’round with TWO.

But I also think there are cases in which the kinship might repulse rather than draw in: that disgust or horror (or some kind of uncanny valley) gets in the way of seeing the disgusting/horrid/uncanny one as human. I’m thinking of the work of William Ian Miller and Martha Nussbaum on disgust, and, perhaps, of various histories of medicine, especially regarding the mentally ill. Perhaps I should dig out that old paper on lobotomy. . . .

Oh, and yet another wrinkle: Insofar as I consider the meaning of the human to vary, I don’t know that one can elide differences between the words used to refer to said humans. “Savage” means one thing, “human” another, and the relationship between the two, well, contestable.

I’m rambling, and still without specific, referenced examples for TWO. I can go the easy route, show the 19th century charts comparing Africans to the great apes, the discussion of so-called “primitive peoples” (with the unveiled implication that such peoples weren’t, perhaps, human people). Could I mention that “orangutan” means “person of the forest”, or is that too glib? Too glib, I think. Not glib is the recent decision to limit greatly the use of chimpanzees in federally-funded research—the extension of protections to our kin, because a kinship is recognized.

And back around again. I don’t know that one can meaningfully separate the identity of a being from the treatment of the identified being; identification and treatment somersault over and over one another.

So if protections are offered to one member of H. sapiens and withdrawn from another, then that seems to say something about the status of that other: that we don’t recognize you as being one of us. We don’t recognize you as human.

If things can be done to someone with schizophrenia (old term: dementia praecox) or psychosis—various sorts of water or electric shocks, say—that would not be done to someone without these afflictions, then one might wonder whether the schizophrenic or psychotic is, in fact, recognized as human, that as long as the affliction is seen to define the being, then that being is not-quite-human.

Ah, so yet another turn. I allowed for the possibility of superior/inferior humans [which might render moot my examples from eugenics and racism]; what of lesser or more human? Is someone who is less human still human? What does that even mean?

Back to biology. Those born with what we now recognize as chromosomal abnormalities have not always been, and are not always, taken in, recognized as being “one of us”. A child with cri-du-chat syndrome does not act like a child without; what are the chances such children have always been recognized as human?

Oh, and I’m not even getting into religion and folklore and demons and fairies and whatnot. Is this not already too long?

I can’t re-read this for sense; no, this has all already flown apart.

