That’s really super, supergirl (part 2)

6 12 2018

I spent a fair amount of time in the last post going over the technical aspects of gene transfer, in large part because so much of the concern about the prematurity of Jiankui He’s work centers on those technical aspects. (Ed Yong at The Atlantic offers a terrific roundup of those and other concerns; he also collected a snapshot of initial responses to He’s announcement.)

I want to focus on two different questions about germline gene transfer: the point of such gene editing, and what it means for those who’ve been so edited.

First, the point. If your concern is with preventing single-locus (Mendelian) disorders, there’s a much more straightforward way of dealing with the risk: test the embryos for the presence of the lethal genes and discard those which test positive (two lethal recessives or one lethal dominant).

This process, known as preimplantation genetic diagnosis, is available at most, if not all, fertility clinics in the US, and generally costs in the low thousands; in fact, some of those who don’t have fertility problems choose to make use of assisted reproductive technologies (ART) precisely so the embryos can be tested prior to transfer. If you know you and your partner are carriers for, say, cystic fibrosis, it’s a lot easier simply to transfer “healthy” embryos than to try to edit affected ones.

The upshot is that germline gene transfer for single-locus disorders is unnecessary. (The exception might be for those who create embryos which are all affected, but even then, it might be easier to create new embryos than to edit out problem genes.)

This raises the question, then, of what germline gene transfer could be necessary for, and the answer, at this point, appears to be nothing: it could be used only for improvement or enhancement.

Now, I should point out that “enhancement” is looked upon with some suspicion by many, many bioethicists, so using that term is. . . provocative. Still, it’s not unwarranted: He altered a normal (or non-disease) gene in order to enhance the offspring’s resistance to HIV. While it’s questionable as to whether the twins will actually have that greater resistance, the clear intent was to create people with a capability they would not otherwise have had.

Otherwise known as enhancement.

I am a skeptic of genetic enhancement, not least because most of our traits are complex or multifactorial. Do you know how many genes are involved in your height? Over 700. Any guesses as to how many are involved in, say, intelligence? Your guess would, at this point, be as good as anyone else’s.

Furthermore, many of our genes are pleiotropic, which means that a single gene may be associated with multiple traits. And let’s not even get into epigenetics, which is the study of the process by which environmental factors affect gene expression.

All of this means that attempts to edit our genomes in order to enhance the traits so many express an interest in enhancing (eg, height, intelligence, athletic ability) will not be straightforward. This doesn’t mean that all such edits will fail, but that success is likely far off.

There are some traits which are less complicated, traceable to one or a few genes, so it may be possible to fiddle with those genes. But even then there’d be concerns, as there are with the CCR5 gene He edited, that boosting one aspect of the gene’s expression (resistance to HIV) can cripple another (resistance to West Nile virus).

That germline gene editing may not, strictly speaking, be necessary, doesn’t necessarily mean there’s no point at all to it. Even an enhancement skeptic like me can recognize that not every use is automatically terrible, or that, in the case of an environmental disaster or pandemic, it could actually become necessary for species survival.

But we’re a long way away from knowing enough that such use can currently be justified.

Which brings me to the second point: what happens to those little girls, Nana and Lulu? Are they to be research subjects for the rest of their lives? Will their parents be required to offer them up for study? Will they ever be able to say no to such study? How much of their lives will be known? Will they have any control over information about them?

And what about their offspring? Will their own children have to be studied? If, as seems probable, Nana and Lulu are mosaics, then there would certainly be interest in the inheritance of those mixed genomes.

If He’s work is not to be a complete waste, the girls should be studied. But how to balance the need/desire for knowledge about his experiment with their human rights and dignity? After all, they didn’t sign up for any of this.

I should point out that in some ways their birth parallels that of Louise Brown, the first IVF baby. No one knew if creating a human embryo outside of the body and then transferring it back to a woman would result in a healthy child, or whether IVF-offspring would themselves be fertile. (It wasn’t until Louise’s younger sister Natalie, also IVF-conceived, gave birth that we knew IVF babies could make babies the old-fashioned way.) In vitro fertilization (and a variation, intracytoplasmic sperm injection) was an experiment which could have ended in horror; that it didn’t has had the effect of minimizing just how great a leap it was.

So what will happen with Lulu and Nana? If all seems well with them, does that make it all okay? If not, then not?

And, again, how will we know? One of the criticisms of the fertility industry is just how much isn’t known: there is no database of children conceived via ART, nor of women who’ve taken fertility drugs. Yes, it is possible to do research on the health of these women and kids, and some of that research indicates increased risks to health. Is that work sufficient? It’s been 40 years, and it seems mostly okay; is that good enough?

We can’t go back and retroactively require that all ART babies be surveilled—I’m certainly not suggesting that—but would it make sense, going forward with gene-edited people, to have some way to keep tabs on their health?

Y’all know I’m a privacy crank, so even suggesting some sort of life-long surveillance makes my teeth itch, but if such research is to continue, then a necessary part of that research is information about the people who participated in it. Given that a central tenet of human subjects research protection is the right to withdraw from any study at any time, there’s no ethical way to require people who’ve been gene-edited to submit to lifelong study; it is not out of the question, however, to ask.

Anyway, back to Nana and Lulu, two new people who were created as a science experiment which many of us decry. Would it have been better had they never been born? Better, certainly, had He not plowed past the many cautions to mess with the embryos, but now that the girls are here, well, best to welcome them to the human race.





The sailor who can read the sky

1 09 2016

How nice to not dread teaching.

I’ve mentioned this course before: Politics & Culture. I’m on the 4th version of it, and think I’ll be able to stick with this one for quite a while.

The first (women and human rights) and third (half mash-up, half Banerjee & Duflo’s Poor Economics) were slogs: they never quite came together. The second, built around Nussbaum’s Women and Human Development, was fine, but I got bored with it after a while.

This version, which I introduced last fall, focuses on the Weimar Republic, and it all came together pretty well. As I did before, I’m using Richard Evans’s The Coming of the Third Reich, a coupla’ chapters of Bernard Crick’s In Defence of Politics, and Carl Schmitt’s The Crisis of Parliamentary Democracy (I’ve already warned the students about this one), as well as various online primary-source documents; for this semester, I’ve shifted a few things around, added some docs and discarded others, but otherwise kept it together.

And, oh yes, as I think I’ve mentioned 10 or 20 times, I totally dig the subject.

Happily, the more I read about it—I’m a little abashed, actually, at how little I knew going into it last year—the more I want to read about it. Which is good, not just for my own curiosity, but because I like to smother a subject.

It’s not enough to know just what’s on the syllabus, but all those bits and lines which both feed into and lead away from those topics. Or, to put it another way, if I want to cover a 4×4 square, I have to paint 6×6 or 8×8. Last year, it was more like 5×5 or even 4 1/2×4 1/2; this year, I think I’ll be closer to 6×6.

The over-painting metaphor no longer works for my bioethics course, which I’ve been teaching for years. Now, it’s about adding dimensions, tipping things over, and, most importantly, being willing to rip apart the fabric in front of the students. I’m now so comfortable with my knowledge of the subject that I’m willing to shred that knowledge, to say, What else is there?

Boredom while teaching a long-taught subject is always a risk—as I noted, I got bored teaching version 2 of Politics & Culture—but teaching long allows one to really bring out the sheen on a topic. The problem with v. 2 was that while I cared some, I didn’t care enough about the central topic to want to spend time with it even when I wasn’t teaching it.

That’s not a problem with Weimar, or with biotech. I want to know, for myself, and it’s this greediness which in turn makes me excited to share.





I got life

8 01 2015

Stipulated: Adults get to make whatever boneheaded medical decisions about themselves that they want.

Stipulated: Adults do not get to make whatever boneheaded medical decisions about their children that they want.

Question: Ought a 17-year-old be able to make a boneheaded medical decision about herself?

Cassandra C. is a 17-year-old with Hodgkin lymphoma, a disease which, when treated with chemotherapy, has a high (80-85%) survival rate. Cassandra initially underwent surgery, then two rounds of chemo, before deciding that while she wants to live, she wants to do so without, in the words of her mother, Jackie Fortin, putting “poison” in her body.

It’s not a stretch for a layperson to consider chemo a poison: the patient ingests the drugs with the idea that they will kill the cancer without killing her, and it is the lucky, lucky cancer patient who isn’t sickened by this treatment.

But it is a stretch to think that there exists some other, effective, non-poisonous treatment for Hodgkin’s, not least because there is no good evidence of its existence. Some (#notall. . .) alt-med folks may think oncologists are in league with pharma companies to hide cheap and easy cures to nasty diseases, but I highly doubt there is a conspiracy of cancer docs to keep effective treatments away from their patients just so they can profit from their suffering.

In any case, if Cassandra were 18, she could cease the chemo in search of those non-poisonous treatments, but at not-quite-17-and-a-half, she’s been confined to a medical ward by Connecticut state officials and forced to undergo treatment; the Connecticut Supreme Court just reaffirmed that decision by those officials.

Art Caplan (from whom I took a class when he was at Minnesota) wrote a brief editorial arguing that 17 is 17—that is, not 18, and therefore unable to make medical decisions on her own behalf. I get the technical point (17 ≠ 18), but I’m not so sure that the consequentialist argument Caplan goes on to make—Hodgkin lymphoma is treatable—ought to carry the day.

After all, if she turned 18 tomorrow, the lymphoma would remain just as treatable, and the absence of that treatment would leave her just as dead.

Cassandra told the AP that

it disgusts her to have “such toxic harmful drugs” in her body and she’d like to explore alternative treatments. She said by text she understands “death is the outcome of refusing chemo” but believes in “the quality of my life, not the quantity.”

“Being forced into the surgery and chemo has traumatized me,” Cassandra wrote in her text. “I do believe I am mature enough to make the decision to refuse the chemo, but it shouldn’t be about maturity, it should be a given human right to decide what you want and don’t want for your own body.”

It is about maturity, actually; the difficulty is in determining what counts as maturity.

Is it just about age? Reach 18 years and you’re mature; prior to that, not.

That has both the benefit and drawback of simplicity. It’s a straightforward standard, but one which, strictly applied, seems nonsensical, ascribing a substantive ethical property to the passage of time: “January 1 you’re immature, but October 1 you’re mature.”

Age matters—if Cassandra were 10, I’d think there was no ethical problem—but largely as a stand-in for other properties, including the ability to make decisions.

So is maturity about decision-making ability? Well, okay, but what does this mean? Is this about making good (by whatever metric) decisions? And what if someone repeatedly makes bad (b.w.m.) decisions?

If they’re adults, and those decisions are of a non-criminal nature, we say, Okay, but largely because most of us don’t want to live in a society where we don’t get to make decisions about our own lives. We assert the procedural right to decide, regardless of the content of the decision, because we’d rather make our own decisions (good and bad) than have others make them for us.

But teenagers, man, teenagers get to make some decisions and not others, and figuring out what decisions they get to make often does come down to the content of those decisions. If the kid makes good decisions (as determined by the parents), he’s given the leeway to make even more; if not, then not.

And thus the Connecticut Supreme Court has judged the procedural ability of Cassandra to make her own medical decisions on the content of those decisions: it thinks she’s decided badly, and as a result, ought not be able to decide at all.

I get this, I do, but I am made uneasy by it. What if she had a different disease, with a much lower (40 percent? 30?) survival rate? What if the treatment were more disabling over the long-term? Or what if she doesn’t respond to the treatment? Is there any amount of suffering from the treatment that would lead the hospital to stop?

Or will they only stop when Cassandra turns 18, and is free to decide for herself, whatever the content of that decision?

This is a tough case, and I don’t know that the Court got it wrong. I just don’t know if they got it right, either.





Kathleen Cranley Glass, 1942-2014

20 04 2014

Kathy was kind. She was smart, and she was tough.

But what I will remember, first, is that she was kind.

She was in charge of the Biomedical Ethics Unit at McGill during my postdoc, and while I think I spoke to her on the phone before moving to Montréal, I hadn’t met her before then. I was going to fly up to Montréal to look for an apartment, but she’d assured me that I could find a place upon arrival.

In all my years of knowing her, that might have been the only time she gave me bad advice. (Well, that and suggesting that if I liked Montréal, I’d probably like Boston, too.)

In every other way, however, Kathy was as fine a guide into bioethics and Québec as I could have hoped for. She and her husband Leon invited me over for dinner more times than I could count—in fact, I stayed with them a good chunk of the time I was looking for an apartment—and took me hiking outside of the city, and to various festivals within it.

She also tried to convince me that Montréal bagels were as good as New York bagels, but that didn’t take. (Montréal bagels are fine—and, honestly, given how pillowy so many NY bagels have become of late, certainly the better size—but a bit too sweet for my taste.)

Mostly, though, I remember the many long conversations with her in her office, first in the old building on Peel, then in her corner office in the building on the other side of the street. I’d have been in my office at the end of the day and have wandered over to hers to say goodbye, then end up staying for an hour or two as we talked about ethics and genetics and politics and music and memory.

She was generous with her time and with herself.

Again, she was kind as she worked her way through her and my thoughts, but it was through these long conversations, as well as in our various BMU meetings, seminars, and colloquia, that her tough-mindedness revealed itself. It was so easy to skip past the basics, but Kathy always returned to them, and to the basic necessity of patient and subject protection.

That was Kathy’s abiding concern: how to take care of people, be they patients at the Children’s Hospital, where she served as a clinical ethicist, or when writing about subjects in clinical trials. She and her colleagues (including Stan Shapiro and Charles Weijer) returned again and again to the necessity of clinical equipoise in research trials, especially in regards to trials of psychoactive medications.

All too often psychiatric patients would be—are—offered fewer subjects-protections than other similarly seriously ill patient-subjects: instead of testing new treatments against current ones, researchers test the investigational drug against. . . nothing. Not only will this skew the results by inflating the effects of the drug—which is bad enough—but subjects who might otherwise benefit from current treatments are denied them, and thus, suffer as a direct and entirely predictable result of their participation in the trial.

This, as Kathy would note, is a textbook definition of unethical research.

She and Stan focused on psychiatric patients, but Kathy’s research ranged widely across bioethics and included considerations of genetic and stem cell research. She worked with Bartha Knoppers at the Université de Montréal and Françoise Baylis at Dalhousie in trying to come to grips with the then-novel human embryonic stem cell research.

Bartha and Françoise can be aggressive in argumentation—I am like them in that respect—but Kathy was not one to be flattened by fast-rolling words. She was too acute a thinker.

This is what I missed, at first. Her kindness, her gentleness, was so immediately apparent that I made the mistake I too often make: assuming that softness means weakness.

She was soft; she was also sharp. There was no contradiction.

That is a lesson I’m still learning.

I am so sorry that I will never be able to tell her how much she meant to me, personally and intellectually. I am a better thinker for having known her, and a better teacher for having taught alongside her. She is, and will remain, a touchstone. I will miss her for the rest of my life.

She died at home, among her family, April 12. Rest in peace, Kathy.

Thanks to Jonathan Kimmelman for tracking me down and notifying me of Kathy’s death.





Can you hear me, cont.

8 05 2013

One more small bit on normal:

Some bioethicists who worry about enhancement don’t worry about normalization; some embrace enhancement precisely because they think it offers a way out of normalization.

Neither position makes sense insofar as enhancement and normalization are linked.

The enhancement-worriers fret about new techs or practices taking us away from a baseline normal human, yet don’t wonder about the creation of that baseline normal human. The enhancement-embracers think other-than-normal is just dandy, yet don’t consider that enhancement can lead to new normals.

This is not, I must say, the position of all those who write on enhancement and normalization; one of the things I like about Parens’s book Enhancing Human Traits is that it includes plenty o’ pieces by those who weigh both enhancement and normalization.

Me, I think the real issue is normalization, such that my concerns about enhancement are precisely that they might become the new norm. Enhancement leads to questions; normalization feeds off forgetting.

I think forgetting is a bigger problem for humans than questioning.





Can you hear me

7 05 2013

I blew my students’ minds today.

No, not anything brilliant on my part: I brought up an issue in my bioethics course that I’ve mentioned in previous courses—had thought I’d mentioned previously in this course—and a number of them lost it.

I told them that there were deaf people who didn’t think there was anything wrong with being deaf, and furthermore, they’d like you to keep your cochlear implants and whatnot to yourselves, thankyouverymuch.

That did not compute.

Now, the backdrop for this moment of brain splatter was a discussion of social coercion, normalization, enhancement, disability, and morality (among other things). Somewhere in this discussion I noted that devices which are promoted as aiding the disabled might be more about assuaging the discomforts of the non-disabled. This was one of Anita Silvers’s points in her essay “A Fatal Attraction to Normalizing” (in Enhancing Human Traits, ed. by Erik Parens), as exemplified by the decision of the Canadian government to push children affected by thalidomide into prostheses and forbidding them to roll or crawl. “The direction of resources to fund artificial limb design and manufacture rather than wheelchair design was influenced by the supposition that walking makes people more socially acceptable than wheeling does.”

A number of them did not like where I was going with this. So how far do we go to accommodate those people, they said. If we’re the majority, shouldn’t they, you know, have to adapt? Are we just supposed to design everything around them?

One of them even complained about ramps: Why should I have to go around and around if I just want to take the stairs?

I pointed out that ramps rarely replace stairs, but are instead treated as an addition, meaning that the stairs remained. I also noted that crappy design is bad for everyone. The building in which the class is held, Carman Hall, is a terribly designed building—you have to go down a flight of steps just to enter the building—and suggested that it’s just possible that being forced to think about accessibility for, say, wheelchair users might just lead to designs which are good for everyone. Curb cuts, I noted, are useful for those pushing strollers or, say, 3 weeks’ worth of laundry in a cart.

Besides, I noted, at some point we’re all, if we’re lucky, going to get old and frail, so designing for access is, in effect, designing for everyone.

In any case, my mind was a little blown by their sense that accommodating people who came in a model unlike themselves was unfair.

Okay, now back to their shorted neural circuits. Deafness, I noted, is a condition, and some who are deaf are also a part of the Deaf community. These Deaf members see themselves as distinct, not disabled, and their community as worth preserving; as such, they see cochlear implants as a way of eliminating members of that community. Furthermore, since cochlear implants are imperfect, not only will these deaf people not gain the full range of sound available to hearing people, they will never gain full status as hearing people: they will be lesser “normals” rather than fully and “normally” Deaf.

But why would they want to be deaf? they asked. Doesn’t that limit them? Why wouldn’t they want cochlear implants?

Well, I noted, we’re all hearing in our class, so if we lost our hearing we would, in fact, experience it as a loss. But while we might be able to see only the limitations of deafness, they see other capacities enabled by it.

They were dubious. What about contacts, one of the students asked. I’d be blind without my contacts. J., I said, you would not be blind, you would simply have bad sight, which is more akin to being hard of hearing than being deaf.

(That said, it was a provocative question: is there a Blind community akin to the Deaf community? And what would be the implications of that? What are the implications of a lack of a Blind community?)

I’m used to students gasping a bit at the thought that Deaf people might not have a problem with their own deafness, but I can usually get them to consider that the problem with deafness is the problem that hearing people have with deafness. No, I’m not trying to force them to accept the Deaf argument—I’m not quite sure what to make of it myself—but I do want to crowbar them out of their own defaults, their own unthinking attachments to normal.

There are streams within bioethics which maintain their own unthinking attachments to normal, as well as those who prefer to poke a stick into the concept. I’m more in the latter camp (big surprise), but as I think normalizing is impossible to avoid, my approach is simply to unsettle, and be unsettled by, the normal, and go from there.

The students weren’t so much unsettled as shocked, and given that shocking can lead to reaction rather than reflection, I guess I shouldn’t be shocked that they held ever tighter to their own normality.





Here’s a man who lives a life

23 01 2013

I’m a big fan of science, and an increasingly big fan of science fiction.

I do, however, prefer that, on a practical level, we note the difference between the two.

There’s a lot to be said for speculation—one of the roots of political science is an extended speculation on the construction of a just society—but while I am not opposed to speculation informing practice, the substitution of what-if thinking for practical thought (phronēsis) in politics results in farce, disaster, or farcical disaster.

So too in science.

Wondering about a clean and inexhaustible source of energy can lead to experiments which point the way to cleaner and longer-lasting energy sources; it can also lead to non-replicable claims about desktop cold fusion. The difference between the two is the work.

You have to do the work, work which includes observation, experimentation, and rigorous theorizing. You don’t have to know everything at the outset—that’s one of the uses of experimentation—but to go from brain-storm to science you have to test your ideas.

This is all a very roundabout way of saying that cloning to make Neandertals is a bad idea.

Biologist George Church thinks synthesizing a Neandertal would be a good idea, mainly because it would diversify the “monoculture” of Homo sapiens.

My first response is: this is just dumb. The genome of H. sapiens is syncretic, containing DNA from, yes, Neandertals, Denisovans, and possibly other archaic species, as well as microbial species. Given all of the varieties of life on this planet, I guess you could make the case for a lack of variety among humans, but calling us a “monoculture” seems rather to stretch the meaning of the term.

My second response is: this is just dumb. Church assumes a greater efficiency for cloning complex species than currently exists. Yes, cows and dogs and cats and frogs have all been cloned, but over 90 percent of all cloning attempts fail. Human pregnancy is notably inefficient—only 20-40% of all fertilized eggs result in a live birth—so it is tough to see why one would trumpet a lab process which is even more scattershot than what happens in nature.

Furthermore, those clones which are successfully produced nonetheless tend to be less healthy than the results of sexual reproduction.

Finally, all cloned animals require a surrogate mother in which to gestate. Given the low success rates of clones birthed by members of their own species, what are the chances that an H. sapiens woman would be able to bring a Neandertal clone to term—and without harming herself in the process?

I’m not against cloning, for the record. The replication of DNA segments and microbial life forms is a standard part of lab practice, and replicated tissues and organs could conceivably have a role in regenerative medicine.

But—and this is my third response—advocating human and near-human cloning is at this point scientifically irresponsible. The furthest cloning has advanced in primates is the cloning of monkey embryos; that is, there has been no successful reproductive cloning of a primate.

To repeat: there has been no successful reproductive cloning of our closest genetic relatives. And Church thinks we could clone a Neandertal, easy-peasy?

No.

There are all kinds of ethical questions about cloning, of course, but in the form of bio-ethics I practice, one undergirded by the necessity of phronēsis, the first question I ask is: Is this already happening? Is this close to happening?

If the answer is No, then I turn my attention to those practices for which the answer is Yes.

Cloning is in-between: It is already happening in some species, but the process is so fraught that the inefficiencies themselves should warn scientists off of any attempts on humans. Still, as an in-between practice, it is worth considering the ethics of human cloning.

But Neandertal cloning? Not even close.

None of this means that Church can’t speculate away on the possibilities. He just shouldn’t kid himself that he’s engaging in science rather than science fiction.

(h/t: Tyler Cowen)