Thursday, July 31, 2014

URGENT: BLOG MOVING

Be it known: I have renamed this blog, and it shall henceforth be found at agentyduck.blogspot.com, starting August 1st, 2014. Please update your various things that need updating accordingly.

Monday, July 28, 2014

Corrupted Hardware: Stuff I Learned From My Broken Brain

[Content note: This post discusses mental illness, depression, social anxiety, and suicidal thoughts.]

In a Facebook discussion, Brent Dill said, "In my personal experience, those of us Really Smart People with Severe Mental Issues often acquire a sort of 'rationality superpower' to compensate." I've been thinking about this, and I'm pretty sure something like it happened to me.

A foundational insight upon which any art of rationality must stand is an understanding that we run on "corrupted hardware". Our brains are kludgey meat sacks running spaghetti code just good enough to make more kludgey meat sacks. They aren't designed to optimize for our preferences. And it's not really enough to just understand that abstractly. One way or another, that knowledge has to fuse with your soul, or I don't think you can make much headway in rationality.

I've had seasonal affective disorder and social anxiety most of my life. Both got worse as I aged. My social anxiety is gone now, though I still fight with depression in the winter. But I've been much better for the past couple years, thanks to finally going to a doctor and getting a prescription for bupropion. Before that, for two winters particularly, things were very bad.

I notice that I benefited, though, from certain features of the struggle.

While depressed and socially anxious, looking out at the world from the inside, I was routinely ridiculously wrong about many things. I was wrong about how much I'd enjoy anticipated events or how horrible they'd be ("what, why would I want to go for a walk to get chocolate? what even is happiness I recall no such thing"). I was wrong about how other people perceived me ("everyone would be better off if I were dead, and they probably know that but are too nice to say so"). I was wrong about how much terror and pain I could endure before completely collapsing and/or killing myself (I could endure far more than expected). I was wrong about how long the darkness would last (not, in fact, forever, and probably not even through Spring). I was wrong about my capacity to grow ("I am weak and stupid and will be like this forever").

Functioning despite these constant errors required me to invent a limited version of reference class forecasting. I would look at the anticipated event (studying for finals, meeting with a professor, teaching a class, etc.), feel the sheer impossibility of it, remember that I'd done similar things before and survived, extrapolate that I'd probably survive this time as well, and then I'd resolve to do the impossible thing. Sometimes I couldn't pull off that reasoning by myself, and I'd ask a friend to explain to me why the opposite of what I believed was true. I'd talk to people who'd known me for a long time, and I'd try to trust their expertise.

(When I say "impossible", I mean the feeling you'd have if you stood before a sheer cliff face and considered whether you could make it to the top in a single leap. That's what scheduling a meeting is like when you're depressed and socially anxious. I am not exaggerating. It's the same experience, except that there may be terrible consequences to not scheduling the meeting. So it's more like you're at the bottom of the cliff considering whether you can jump to the top while a pack of rabid wolves closes in around you.) 

I really got that my internal prediction mechanisms were damaged, and that I needed special tools to compensate. I got it, the knowledge fused with my soul, because my errors were huge enough to stand out compared to the errors of the people around me, huge enough to prevent me from participating normally in human affairs. I felt that I was worthless and hated, while simultaneously recognizing that the people around me not only liked me but admired me and wished to model themselves after me. The evidence so totally contradicted my intuitions that I couldn't pretend my brain was working fine.

As I recovered, my habits stuck around. Many of those habits were very harmful and had to go. Relying on coffee and abandoning all hope of a regular sleep schedule, for instance. Or working until I literally couldn't stand upright because it was one of my only available distractions from the pain.

But some of the habits were useful, and stayed. One such habit was noticing that I might be wrong, especially when I thought I couldn't do something. Another was not giving up just because something seems impossible at first glance. A third was creating systems to automate as much of my life as possible, to conserve my memory, attention, and motivation. A fourth was choosing my friends very carefully, communicating openly about how I feel, and testing my models of them frequently.

My brain is better now, but only about as good as a normal human brain. And I still automatically expect many of the same errors. For instance, despite my abstract understanding, I notice that I usually don't empathize with people who live in the far distant future, leading strange lives in strange galaxies. And it feels a lot like it did to be depressed and not able to empathize with my best friend who's right in front of me. 

It's obvious to me, because I've seen it so many times before. I went through cycles of sanity and brokenness over and over again. If my brain isn't working correctly, to me, that just means I have to find a work-around. I think a lot of people just accept the limitation when they notice a major error like that, thinking, "Well, there's nothing I can do about that," and go about their lives. 

For a long time, I was trapped, encompassed by things I "couldn't do anything about", that I had to either deal with anyway, or die. Literally. It's amazing what I could find a way to do when "if it doesn't work out, I'll just kill myself" was sitting in the back of my mind reminding me that I have nothing to lose, so I might as well pull out all the stops.

Now, I have some idea of what I can do when there's nothing left in my way, when I decide to actually try. And in addition to the "corrupted hardware" insight, I have a deep intuition that I really can defeat death--because I've done it before. 

Often, it feels impossible to save all 2×10^58 (ish) people who will exist if I give them every star in my future light cone.

Damned if I'll let that stop me.

Saturday, July 12, 2014

Systems 1?

[These ideas were inspired by/stolen from Nate Soares, aka So8res, in an ongoing email conversation about the Dark Arts of Rationality.]

Summary: There's more than one thing we might mean by "System 1", and the different referents require different rationality techniques.
___________________________________________


I went skydiving once. On the way up, I was scared. Not as scared as I expected to be, but more scared than I thought I should have been. I believed at the time that there was about a 0.0007% chance of dying in a skydiving accident.* In other words, if I and around 150,000 other people all went skydiving, about one of us would die. And that's before taking into account that I was jumping with an expert.
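(The back-of-the-envelope conversion from that percentage to "about one in 150,000" can be checked in a couple of lines of Python, if you're the sort of person who likes to check:)

```python
# Convert the quoted 0.0007% fatality rate to a probability.
p = 0.0007 / 100  # 7 in a million

# Expected number of jumpers per single fatality, on average.
jumpers_per_death = 1 / p
print(round(jumpers_per_death))  # prints 142857 -- roughly the 150,000 figure above
```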

Part of me knew this. Otherwise, I wouldn't have gotten into the plane. But part of me didn't seem to know it, and I knew part of me didn't know it, because I was seriously wondering whether I'd have the guts to jump on my own.** I wanted all of me to understand, so I could experience the excitement without the fear.

So I tried picturing 150,000 people all jumping out of planes. (Dictionary of Numbers helpfully informs me that that's about the population of Guam.) It helped. Then I called to mind the people I knew who had been in car crashes, and remembered how willing I was to climb into a car anyway. My methods weren't as sophisticated then, but I was basically seeking what I've recently been calling a "System 1 handle" to arm System 2's abstract understanding with a clear emotional impact. It was enough, though, to calm my nerves.

We lined up. I was calm. The door opened, and the wind roared. I was calm. The pair in front of me jumped. I was calm. 

The floor disappeared from beneath me. It took me about two seconds to regain enough composure to scream.

Dual process theory is mostly just a quick-and-dirty way of framing cognitive processes. Speaking as though "System 1" and "System 2" are people in my head with very different personalities lets me apply a bunch of useful heuristics more intuitively. I've been fairly good about keeping track of the reality that they aren't *people*. I've been less good about guarding against a false dilemma.

The framing tracks something like "degrees of deliberation". But there's a lot more in that continuum than "very deliberative" and "very not deliberative". I think I've been treating everything below some point on the line as "System 1 processing", as though it were simply "everything that I don't think of as System 2".

There seem to be (at least) two natural clusters in the "not System 2" part of the spectrum that might call for different treatment. During skydiving, one cluster responded to vivid, concrete examples. The other cluster was too simple, too instinctual to get a grip on even that. The link between "ground disappears" and "freeze in terror" was too basic to manipulate with the kind of technique I was using. The "oh shit I'm falling" process is a different animal than the one responsible for "this is dangerous and therefore I'm going to die".

The "System 1 translation" techniques I've been writing about are meant to deal with the part-of-yourself-that-you-argue-with-when-you're-trying-to-convince-yourself-to-do-something-difficult, and the part-of-yourself-that-needs-to-remember-important-details-but-doesn't-care-about-numbers-or-other-abstractions. The part that's anxious about the jump and doesn't understand the odds.

But I'm not sure S1 translation does much of anything for the part that panics when you pull the ground out from under it. To deal with that part, I think you probably need tools more along the lines of exposure therapy.

When you're in a car driving on icy roads and you start to slide, the best way to regain control is to steer into the skid and accelerate slightly. But most people's instinctive reaction is to slam on the brakes. I've tried to understand why the steer-into-the-skid method works so I could translate that understanding into the language of System 1, and while I've not thought of anything I expect would work for most people, I've got something that makes sense to me: When I'm on icy roads, I can imagine that I'm in a Flintstones car, with my feet scrambling against the ice. If I were in a Flintstones car, my immediate reaction to sliding would be to run a little faster in the direction of the skid in order to gain control. I figure this works for me probably because I spent some of my childhood playing on frozen ponds, so I wouldn't suggest that translation to just anyone.

But I doubt it would work no matter how robust the translation. The part of my brain that panics and slams on the brakes is more basic than the part that's scared of the whole idea of skydiving, or that resists checking the balance of my bank account. I'm not sure it can be reasoned with, no matter what language I use.

To ensure I do the right thing when driving on icy roads, a much better plan would be to find some way to practice. Find an icy parking lot, and actually expose myself to the experience over and over, until the right response is automatic.

I'm not sure about this, but I'd be at least a little more surprised if S1 translation worked for ice driving than if it didn't. If I'm right, lumping together all the "System 1 techniques" and using them on anything that's "not System 2" can be dangerous. If this is a real distinction, it's an important one for applied rationality.

___________________________________________


*I still believe that, but with less confidence 'cause I'm better calibrated and recognize I haven't done enough research.
**With our setup, I was actually hanging from a harness attached to the expert by the time we were about to leave the plane, so my feet weren't on the floor and I didn't get to jump on my own. Still kinda pissed about that.