1.
I was struck by this passage from Jennifer Kahn's CFAR article.
One participant, Michael Gao — who claimed that, before he turned 18, he made $10 million running a Bitcoin mine but then lost it all in the Mt. Gox collapse — seemed appalled when I suggested that the experience might have led him to value things besides accomplishment, like happiness and human connection. The problem, he clarified, was not that he had been too ambitious but that he hadn’t been ambitious enough. "I want to augment the race," Gao told me earnestly, as we sat on the patio. "I want humanity to achieve great things. I want us to conquer death."
Descriptively, Jennifer's prediction is often right. Devoting a lot of resources to a goal and failing does often cause people to not just change tactics, to not just change goals, but to change (or at least re-prioritize) values.
The implications of changing values, whether on purpose or otherwise, have been on my mind a lot recently. It’s a creepy and fascinating phenomenon.
2.
I hazily remember a stretch in college when I was a straight A student. Not just “I had a 4.0”, but “I got an A on literally every graded paper, test, quiz, or assignment”.
The value of academic excellence, and especially of performing beyond the grading system's ability to measure, was a huge part of what I felt myself to be. Then came my first ever test in logic class.
I got a C.
My first reaction was devastation.
My second reaction was rationalization. Would it still count if I dropped the class? Of course it would. And P200 Introductory Logic is a requirement for a philosophy degree anyway.
I don’t have to get a philosophy degree…
What if logic counts as math? I already know I’m Bad At Math, and I don’t take math classes so I don’t have to fail math tests. Maybe this was a math test?
But I took this test anyway and thought I’d pass…
I was in a terribly uncomfortable state of cognitive dissonance for a couple of days. Academic excellence was nearly my ultimate criterion, the preference that won over any other preference in a trade-off. I sacrificed a lot of important things in service of it: my leisure time, socialization, sleep, mental health, actually learning things instead of just jumping through academic hoops…
And suddenly, my standard of academic excellence seemed forever out of reach.
From the perspective of past me, I had a central value that looked unsatisfiable. In past me’s mind, having an unsatisfiable central value was some sort of unstable state that had to be corrected; I had no choice in the matter, the way a spinning coin has no choice but to come to rest. Therefore, either my belief that I’d lost my straight A status was wrong, or my value was wrong.
I was too intellectually honest to delude myself about the grade itself, even then. So, I took my failure as a lesson that academic excellence wasn’t so important after all, and I should care more about other things.
3.
If you’re like me, that story makes you feel confused.
On the one hand, the sane thing to do - the policy recommended by my reflective equilibrium - was not to pursue academic perfection at the cost of all else. Some other balance of attempted value satisfaction would have yielded higher utility, predictably. So it shouldn’t surprise me that I escaped a local maximum once I stopped doing that.
On the other hand, it’s not the case that I thought, “actually, I can harvest more utils total by sacrificing academic excellence for success in other things”. What I thought, and what actually happened, was that I valued academic excellence less than I used to.
“I can harvest more utils total by sacrificing academic excellence for success in other things” is a thought past me was simply incapable of having. Why is that? I think it’s because it would require believing my central value would not be satisfied.
Provided I must believe my central values will be satisfied, isn’t adjusting my values until they’re satisfiable a wise policy?
And that’s essentially what Jennifer was recommending to Gao, I think. “Your values were ridiculously hard to satisfy; didn’t learning that cause you to adjust your values?”
But I’m glad he didn’t. I wouldn’t have met him at that CFAR workshop, for one thing. But in general, the worlds in which Gao stops valuing things that prove difficult to attain seem sadder to me.
4.
Kierkegaard explores this weird bit of value theory by postulating three kinds of people.
Imagine three peasant men who are hopelessly in love with a princess who will never return their affections, and each of them is fully aware that she’s unattainable.
The first man, recognizing his value cannot be satisfied, abandons his love for the princess. “Such a love is foolishness,” he says. “The rich brewer's widow is a match fully as good and respectable.” He stops valuing the love of the princess, and goes looking for a more easily satisfied value. Kierkegaard calls this person an “aesthete”. (Fair warning, there might be a couple different kinds of people he calls “aesthete”, but I’m only talking about this version here.)
The second man, recognizing his value cannot be satisfied, goes right on loving the princess as much as he always did, and also believes he will get the princess. He believes an outright contradiction: His value will be satisfied, and his value cannot be satisfied. Kierkegaard calls this person the “Knight of Faith”.
The third man, recognizing his value cannot be satisfied, goes right on loving the princess as much as he always did, all the while believing her love is unattainable. This person Kierkegaard calls the “Knight of Infinite Resignation”.
These seem to me to cover the possibility space. Either you stop loving the princess, you do some weird doublethink about the princess, or you truly believe in your own doom.
I’m at least a little concerned by every option here.
The Knight of Faith will have a bad problem if he wants to make accurate predictions about the world, since his epistemology is about as broken as I know how to make a thing. And maybe he doesn’t care about making accurate predictions in order to control the world. But, like, I do.
The aesthete’s perspective sounds sort of reasonable at first, but then I think it through to its necessary conclusion. If my policy says to adjust my values so I prefer the rich brewer’s widow over the unattainable princess, then my policy also says to adjust my values so I prefer dirt to the rich brewer’s widow - dirt being easier still to obtain.
Truly is the Way easy for those with tautological utility functions. As the saying goes.
But some people bite this bullet. Here’s a passage from Chapter Six of the Zhuangzi (the Ziporyn translation):
Ziyu said, “How great is the Creator of Things, making me all tangled up like this!” For his chin was tucked into his navel, [and a bunch of other stuff was going wrong with his body due to illness]. But his mind was relaxed and unbothered. He hobbled over to the well to get a look at his reflection. “Wow!” he said. “The Creator of Things has really gone and tangled me up!”
Zisi said, “Do you dislike it?”
Ziyu said, “Not at all. What is there to dislike? Perhaps he will transform my left arm into a rooster; thereby I’ll be announcing the dawn. Perhaps he will transform my right arm into a crossbow pellet; thereby I’ll be seeking out an owl to roast. Perhaps he will transform my ass into wheels and my spirit into a horse; thereby I’ll be riding along - will I need any other vehicle? Anyway, getting it is a matter of the time coming, and losing it is just something else to follow along with. Content in the time and finding one’s place in the process of following along, joy and sorrow are unable to seep in. (…) But it has long been the case that mere beings cannot overpower Heaven. What is there for me to dislike about it?”
In other words, as Sheryl Crow put it, “It’s not having what you want, it’s wanting what you’ve got.” And sometimes, what you’ve got is a chicken for a left arm.
By my reading, the Zhuangzi prescribes either constantly adjusting your values so that they’re always perfectly satisfied by the current state of the world, or not having any values at all, thereby achieving a similar outcome. Most of the practices it references seem to be aimed at accomplishing that.
(I make no claims about whether the Zhuangzi prescribes the opposite as well.)
It’s sort of like wireheading, but it sidesteps the problem wherein your values might involve states of the world instead of just experiences.
5.
I can’t quite tell whether I have a principled objection to this perspective on value policy, though I sure as hell have an unprincipled one.
When I imagine the world where everyone is a perfect Taoist sage, with preferences that perfectly adapt to the state of the world, I feel super not ok with that; it makes me even more uncomfortable than thinking about orgasmia.
In orgasmia, I’m clear on why things are non-awesome: People are ultra happy all the time, but their values haven’t necessarily changed, so anybody who values things besides happiness will never get what they want. And I value people getting what they want.
The Taoist sages, unlike wireheaders, aren’t even happy! A Taoist sage’s mental state is whatever her mental state happens to be -
- which is presumably “extreme suffering”, right up until she dies of starvation. I mean, why would she eat? When she got hungry, she’d value her hunger, never seeking to “overpower Heaven” by trying to change how her stomach felt. If I recall correctly, there’s even a point somewhere in the Zhuangzi where a student asks a teacher precisely this question - Why don’t the sages starve to death? - and the teacher… never really answers. shrug
But! The Taoist sages happen to value exactly whatever mental state they’re in at any moment, since their mental states are part of the world. And they value whatever state the world is in at the moment, even if that happens to be “my left arm is a chicken”, or “everyone’s starving to death”, or “there’s an asteroid headed toward Earth that will sterilize the entire planet”.
So at the very least, I feel like I can conclude this about the aesthete: Anyone who adjusts their values in response to finding them too hard to satisfy is only being reasonable if they want to be down with their left arm becoming a rooster.
6.
And that brings us to the Knight of Infinite Resignation.
(I’m probably with you about Continental philosophers overall, but you’ve got to admit, they have a flair for the dramatic.)
There was a recent post on LessWrong in which Anna and Duncan talked about wielding the power of “despair”.
“Despair can be a key that unlocks whole swaths of the territory. When you’re ‘up,’ your current strategy is often weirdly entangled with your overall sense of resolve and commitment — we sometimes have a hard time critically and objectively evaluating parts C, D, and J because flaws in C, D, and J would threaten the whole edifice. But when you’re ‘down,’ the obviousness of your impending doom means that you can look critically at your past assumptions without having to defend anything.”
Before Eliezer fixed my Seasonal Affective Disorder by replacing our entire apartment with light bulbs, I spent a lot of time depressed. When I was depressed, my beliefs about whether my values could ever be satisfied were often wrong. I often believed, for instance, that I’d never again feel happy.
But even though I was wrong, the ease with which those thoughts floated through my mind is notable.
If it had been true that I’d never again be happy, and I’d encountered strong evidence of that, I’d have experienced no resistance at all to updating toward the truth, even though I valued happiness highly (despite being unable to remember what it felt like).
The Knight of Infinite Resignation is not necessarily depressed, yet he can do a thing past me could not do when she got a C, and could only ever do from the depths of despair: He encounters evidence that his values cannot be satisfied, and he updates. Simple as that. No great spiritual battle, no rationalization, no resistance to seeing the world as it is. Just, “Oh, I guess I’m doomed, then.” And he goes on believing that, forever, unless contrary evidence convinces him otherwise.
The Knight of Infinite Resignation is epistemically stronger than most of us - that is, he has greater power to make accurate predictions that allow him to control the world. Maybe he feels despair in response to his revelation of doom - appropriate, I think - but he doesn’t need to be in despair to have the revelation in the first place.
Still, this Knight also disturbs me, in the limit.
Imagine you have exactly one value: the princess’s love. You find out she’ll never love you back no matter what. You don’t deceive yourself into believing she’ll somehow love you anyway, so you know your one and only value will never be satisfied.
Now do you want to be down with your left arm becoming a chicken?
7.
I strive to wield the power of despair without having to be depressed. I would like to be able to believe that I am doomed when I am doomed; otherwise, I’ll resist believing that I’m in danger even when believing it would let me prevent harm.
Also, I strive not to believe contradictions, or to rationalize, or to play other strange games with myself that let conflicting beliefs hide in separate corners of my mind.
Also also, I don’t want to be down with my left arm becoming a chicken, or with an asteroid destroying the Earth.
So, now what?
[ETA: Someone asked for clarification on my issue with the Knight of Infinite Resignation, since the Knight of IR seemed to them to be a correct thing to want to be. Here's an abstract summary: My issue with the Knight of IR is that if I built a person from scratch, I would not give them unsatisfiable values, from which I infer that I would prefer people not end up with unsatisfiable values. If I would prefer people not end up with unsatisfiable values, then (I think?) I must also prefer that people who end up with unsatisfiable values later end up without them. And if I'd prefer their values change by accident in that situation, I must also condone people changing their values on purpose if they develop an unsatisfiable value. But if I think people should change their values when they discover them to be unsatisfiable, then I think people should want to be Taoist sages. And I don't want people to be Taoist sages.]
[Edit edit: You know, I think I'm actually just wrong here, and people should be Potential Knights of Infinite Resignation. I guess a lot of the Sequences is basically a handbook on how to become a Potential Knight of Infinite Resignation. But I'm still confused about things involving changing values.]