Secret policeman’s ball

26 02 2014

If you’re an activist and someone pushes you down a guns & ammo path, you should probably assume that person is a cop or a spy.

And yes, I’ve mentioned this before: If someone in your group promotes any kind of violence, you should ask, loudly and publicly, Are you a cop?

Even if the person is just an idiot (as opposed to an agent provocateur), by calling him or her out you highlight how the police/feds have attempted to short-circuit activist movements by pushing them toward violence and illegitimacy.

This isn’t paranoia; it’s just good sense.


I turn to my computer like a friend

24 02 2014

This isn’t creepy at all:

Language, [Ray Kurzweil] believes, is the key to everything. “And my project is ultimately to base search on really understanding what the language means. When you write an article you’re not creating an interesting collection of words. You have something to say and Google is devoted to intelligently organising and processing the world’s information. The message in your article is information, and the computers are not picking up on that. So we would like to actually have the computers read. We want them to read everything on the web and every page of every book, then be able to engage an intelligent dialogue with the user to be able to answer their questions.”


Google will know the answer to your question before you have asked it, he says. It will have read every email you’ve ever written, every document, every idle thought you’ve ever tapped into a search-engine box. It will know you better than your intimate partner does. Better, perhaps, than even yourself.

Nope, not the least bit creepy.

Or it would be if it weren’t horseshit.

Yeah, yeah— “Computers are on the threshold of reading and understanding the semantic content of a language, but not quite at human levels. But since they can read a million times more material than humans they can make up for that with quantity.”—but brute force isn’t always for the win. And a bit of code which allows a computer to understand the documents it scans doesn’t mean that computer will have attained human understanding.

It’s not that I doubt computers can learn in some sense of the word, that they can incorporate algorithms and heuristics which will allow them to attain some kind of understanding of what they learn; I don’t doubt that computer understanding is possible.

It’s just not clear that computer understanding is comparable to human understanding, not least because it’s unclear what human understanding is, and across time and space, becomes.

Human understanding may also incorporate algorithms and heuristics, but I don’t know that it can be reduced to that. It is fragile and unstable and prone to break down, and even when we think we understand, well, maybe we don’t.

And can I mention disagreement in understanding?

Ray Kurzweil is, as the Observer writer Carole Cadwalladr notes, a “techno-optimist”, someone who believes tech can turn us all into bionic women and six million dollar men (Better. Stronger. Faster.).

As someone who wears glasses, uses the elevator to trundle my overstuffed laundry bag down a couple of floors, and likes to sit back and watch Leverage on my computer, I ain’t anti-tech, far from it.

But I am a skeptic. Especially of the idea that tech will allow us to escape the human condition.

Maybe someday we will no longer be human, we will be immortal or transformed or perhaps we will truly have figured out some way to transcend the immanent. Perhaps someday we will escape being—we will no longer be.

Actually, we already can achieve that: it’s called dying. But I don’t think that’s what Kurzweil has in mind.


h/t HuffPo


Burn baby burn

22 02 2014

I am glad that other people pay attention to Megan McArdle so that I don’t have to.

Do click over (I can’t embed the clip): it’s only 49 seconds, and the last few seconds are so, so worth all the bs before it.

Jon Chait brings the burn; satisfying, isn’t it?

Anyway, as to the bs: McArdle and other catastrophists seem to think that everyone is either completely healthy or has just been hit by a truck.

No need for insulin or levothyroxine or physical therapy or psychotherapy or amoxicillin or amitriptyline or blood work or prenatal tests or mammograms or any other of the non-catastrophic types of care which keep a problem from becoming catastrophic.

Oh, and make you feel all right, too. Yeah, that.

I don’t even know why the catastrophists bother with defending even that minimalist plan. I mean, if you think it’s good that folks aren’t bankrupted over the big things, what’s the problem with making sure they’re not bankrupted over the small things? If you think folks should get care, well, then why not make sure they can actually get care?

Unless you don’t really care that people can’t get care and aren’t willing to say fuckemall.

h/t Fred Clark


19 02 2014

Why is it those who yell loudest about hewing to tradition care the least about history?

I know, I know. . . .

She blinded me with science

17 02 2014

When to let go and when to hang on?

This is one of the conundrums through which I’ve come to interpret various situations in life, big and small. I don’t know that there is ever a correct decision (tho’ I’ll probably make the wrong one), but one chooses, nonetheless.

Which is to say: I choose to hang on to the “science” in political science.

I didn’t always feel this way, and years ago used to emphasize that I was a political theorist, not a political scientist. This was partly due to honesty—I am trained in political theory—and partly to snobbery: I thought political theorists were somehow better than political scientists, what with their grubbing after data and trying to hide their “brute empiricism” behind incomprehensible statistical models.

Physics envy, I sniffed.

After a while the sniffiness faded, and as I drifted into bioethics, the intradisciplinary disputes faded as well. And as I drifted away from academia, it didn’t much matter anymore.

So why does it matter now?

Dmf dropped this comment after a recent post—

well “science” without repeatable results, falsifiability, and some ability to predict is what, social? lot’s of other good way to experiment/interact with the world other than science…

—and my first reaction was NO!

As I’ve previously mentioned, I don’t trust my first reactions precisely because they are so reactive, but in this case, with second thought, I’ma stick with it.

What dmf offers is the basic Popperian understanding of science, rooted in falsifiability and prediction, and requiring some sort of nomological deductivism. It is widespread in physics, and hewed to more or less in the other natural and biological sciences.

It’s a great model, powerful for understanding the regularities of non-quantum physics and, properly adjusted, for the biosciences, as well.

But do you see the problem?

What dmf describes is a method, one of a set of interpretations within the overall practice of science. It is not science itself.

There is a bit of risk in stating this, insofar as young-earth creationists, intelligent designers, and sundry other woo-sters like to claim the mantle of science as well. If I loose science from its most powerful method, aren’t I setting it up to be overrun by cranks and supernaturalists?


The key to dealing with them is to point out that what they’re doing is bad science, which deserves neither respect in general nor class-time in particular. Let them aspire to be scientists; until they actually produce knowledge which is recognizable as such by those in the field, let them be called failures.

Doing so allows one to get past the no-good-Scotsman problem (as, say, with the Utah chemists who insisted they produced cold fusion in a test tube: not not-scientists, but bad scientists), as well as to recognize that there is a history to science, and that what was good science in one time and place is not good in another.

That might create too much wriggle room for those who hold to Platonic notions of science, and, again, to those who worry that this could be used to argue for an “alternative” physics or chemistry or whatever. But arguing that x science is a practice with a history allows the practitioners of that science to state that those alternatives are bunk.

But back to me (always back to me. . . ).

I hold to the old notion of science as a particular kind of search for knowledge, and as knowledge itself. Because of that, I’m not willing to give up “science” to the natural scientists because those of us in the social sciences are also engaged in a particular kind of search for knowledge. That it is not the same kind of search for the same kind of knowledge does not make it not-knowledge, or not-science.

I can’t remember if it was Peter Winch or Roger Trigg who pointed out that the key to good science was to match the method to the subject: what works best in physics won’t necessarily work best in politics. The problem we in the social sciences have had is that our methods are neither as unified nor as powerful as those in the natural sciences, and that, yes, physics envy has meant that we’ve tried to import methods and ends which can be unsuitable for learning about our subjects.

So, yes, dmf, there are more ways of interacting with the world than with science. But there are also more ways of practicing science itself.

We just have to figure that out.

It’s raining again

15 02 2014

Snowing, actually.

Which pleases me: snowing and winter go together.

(Unlike rain. Thursday it snowed—big, puffy, beautiful swirling flakes—and then it rained, melting those beautiful puffs into slush. February rain sucks.)

Anyway, I used to mock folks in southern climes who freaked out when they got an inch or two of snow–ha ha! Look at those fools spin out!—but I’ve mostly gotten over my weather superiority complex. I mean, I decompensate when the temp climbs hellward of 85 or 90, so who am I to lord it over those who shiver below 40 degrees?

And laughing at the Georgians or Carolinians who slide into barely-snowy ditches requires one to forget that everyone is an idiot during the first snowfall.

I didn’t truly appreciate this until after I moved to Minneapolis and got my first car (Plymouth Horizon hatchback, RIP: gave its life after a long road trip west). Yes, I drove when I lived in Wisconsin and of course learned to do doughnuts (easier on a rear- than a front-wheel-drive car), and helped push more than one car out of a snowbank. (I don’t remember if I ever drove into a snowbank; if not, that had more to do with luck than skill.)

Anyway, now that I was living in a city and driving my own car and paying my own insurance, I also paid more attention to those many other drivers as well as to my own driving. And I noticed that every November (or October: see Minneapolis) when the first snow fell, drivers acted as if they had never before had to deal with this outrageous phenomenon of icy dust billowing down from the clouds.

They drove too fast. They braked too late, and then stood on the brakes as their cars veered sideways down the street. They drove too close to one another. And—my personal favorite—they’d only clear a portion of the front window and maybe, maybe, a bit in the back before hitting the road.

That’s some smart driving, right there.

After the first snowfall or two, however, most drivers would get the hang of it, as if some part of their brains awoke from their brief warm-weather comas to say “hey, dummy, watch out!”, and they remembered to clear off all of the window and the lights and drive as if snow and ice were, y’know, slippery.

Or just not drive at all. That was my preferred method for dealing with big snow: stay off the road until the plows came thru.

Of course, one could be cautious and still SOL. It might snow when you’re out, or you might have to drive, and in Minneapolis the side streets and sometimes even the main drags wouldn’t be plowed down to pavement, such that driving was sketchy long after a storm ended.

And sometimes you do everything right and it still goes wrong. I remember one night driving down a small hill on Franklin Avenue toward the intersection at Third Avenue, stepping on the brakes, and having the car completely ignore the instructions to stop. I pumped the brakes, steered the car straight, but no dice.

The light turned red, but that wasn’t going to stop me.

So I did the only thing I could do: I laid on the horn as a warning to drivers on Third and slid right on thru that intersection. Luckily no one was in front of me, so the drivers on Third simply watched my Plymouth ski on by before motoring forth.

No one got hurt, and nothing happened. Lucky.

Upshot: snow fucks everything up, and it takes experience (as well as snow plows and salt and sand trucks) to deal with that fucked-up-ness. Folks in the north get plenty of chances to learn, so it’s easy to feel smug about southerners who will get only one or two shots every couple of years to get it right.

We shouldn’t. Because everyone’s an idiot driving in the first snow, and even the experienced need luck sometimes.

Bad to the bone

12 02 2014

Good christ, do I make bad decisions.

It’s kind of astonishing how many truly bad decisions I have made, and how completely fucking clueless I am at the time I make them that almost any other decision would have been better than the one I go with.

I’m not a stupid person, so you’d think I’d have a handle on this decision-making thing. And I can be pretty good at helping someone else make decisions that make sense for them; then again, I’m not the one actually making those sensible decisions, so maybe it works out in spite rather than because of me.

And it’s not like these bad decisions lead to crazyfuntimes. Oh, they did, sometimes, when I was younger, when bad decisions were confined to evening or weekend plans and usually involved some sort of intoxicant: hanging on the bumper of Y’s car and skiing down the street in my topsiders; getting stoned in a stranger’s basement then rifling thru the cupboards for hard rolls and peanut butter; bringing approximately 100 times more booze than food on a camping trip to Mauthe Lake; accidentally setting a paper tablecloth on fire at Country Kitchen, and wrapping toilet paper around our heads and dancing thru the restaurant singing “Hare Krishna”.

(This last bit was a group effort—I don’t know that I was actually the one who tipped over the candle; in any case, I’ve been making up for how awful we were to those waitresses by overtipping wait staff ever since.)

No, it was only when the stakes got larger that the decisions got both worse and less fun.

I started at Madison with the intention of majoring in political science and becoming a journalist. I declared the major early, and started working at The Daily Cardinal my first semester. So far, so good.

But then I got to thinking that maybe I wasn’t cut out for journalism (even though I was totally cut out for journalism), and started snuffling around for something else to do.

Hence grad school.

No need to rehash each and every bad grad decision, but you can be sure they were there and I diminished my prospects with each and every one.

(You want an example? I had a couple of editors sniffing around my dissertation, and one who made serious overtures to me to turn it into a book. Do you need to guess what I did? Nothing, that’s what I did.)

Blew thru two post-docs—two very good post-docs, with great colleagues and great conditions and which could have served as great launching pads for my career—with almost nothing to show for them except a desire to quit academia.

Such fine decisions.

Then the move to the Boston area. Christ. Next.

Then the move to Brooklyn (which involved multiple financially stupid decisions at both ends of the move), more bad job decisions, and, well, here I am.

I’ve known before about the low quality of my decisions, but always had reasons for their badness: I was depressed, I was really depressed, I was getting over being depressed, I was so used to making bad decisions while depressed that I didn’t know how to make not-bad ones, I could only make decisions based on what I knew at the time. . . . Blah blah.

No, a coupla’ weeks ago I finally owned these shitty decisions, gathered them all into my arms and said Goddamn.

I don’t know what I’m going to do with the full recognition of this bundle of badness; it’s just possible that, knowing how terrible I’ve been at making decisions, I’ll try harder to make better ones, that I’ll check myself with a reminder of how badly things have gone before.

Oh, and by checking with people who, by the simple fact of not being me, will offer better counsel to me than I could to myself.

Two more things. One, that I am not stupid has probably helped to mitigate some of the bad effects of the bad decisions. And not every decision I’ve made has been terrible (which may have helped lull me into thinking I was better at this than I am), so while I’m not where I want to be, I’m not at the bottom of the well, either.

Two, I’m not at the bottom of the well. Those bad decisions may have tipped me this way or that, but tipping over isn’t always all bad. Sometimes it’s just not what I expected, and sometimes, the unexpected is all right.

It’s all right.