Sunday, January 17, 2016

When Your Left Arm Becomes A Chicken

1.

I was struck by this passage from Jennifer Kahn's CFAR article.

One participant, Michael Gao — who claimed that, before he turned 18, he made $10 million running a Bitcoin mine but then lost it all in the Mt. Gox collapse — seemed appalled when I suggested that the experience might have led him to value things besides accomplishment, like happiness and human connection. The problem, he clarified, was not that he had been too ambitious but that he hadn’t been ambitious enough. "I want to augment the race," Gao told me earnestly, as we sat on the patio. "I want humanity to achieve great things. I want us to conquer death."

Descriptively, Jennifer's prediction is often right. Devoting a lot of resources to a goal and failing does often cause people to not just change tactics, to not just change goals, but to change (or at least re-prioritize) values.

The implications of changing values, whether on purpose or otherwise, have been on my mind a lot recently. It’s a creepy and fascinating phenomenon.

2.

I hazily remember a stretch in college when I was a straight A student. Not just “I had a 4.0”, but “I got an A on literally every graded paper, test, quiz, or assignment”.

The value of academic excellence, and especially of performing beyond the grading system's ability to measure, was a huge part of what I felt myself to be. Then came my first ever test in logic class.

I got a C.

My first reaction was devastation.

My second reaction was rationalization. Would it still count if I dropped the class? Of course it would. And P200 Introductory Logic is a requirement for a philosophy degree anyway.

I don’t have to get a philosophy degree…

What if logic counts as math? I already know I’m Bad At Math, and I don’t take math classes so I don’t have to fail math tests. Maybe this was a math test?

But I took this test anyway and thought I’d pass…

I was in a terribly uncomfortable state of cognitive dissonance for a couple days. Academic excellence was nearly my ultimate criterion, the preference that won over any other preference in a trade-off. I sacrificed a lot of important things in service of it: My leisure time, socialization, sleep, mental health, actually learning things instead of just jumping through academic hoops…

And suddenly, my standard of academic excellence seemed forever out of reach.

From the perspective of past me, I had a central value that looked unsatisfiable. In past me’s mind, having an unsatisfiable central value was some sort of unstable state that had to be corrected; I had no choice in the matter, the way a spinning coin has no choice but to come to rest. Therefore, either my belief that I’d lost my straight A status was wrong, or my value was wrong.

I was too intellectually honest to delude myself about the grade itself, even then. So, I took my failure as a lesson that academic excellence wasn’t so important after all, and I should care more about other things.

3.

If you’re like me, that story makes you feel confused.

On the one hand, the sane thing to do - the policy recommended by my reflective equilibrium - was not to pursue academic perfection at the cost of all else. Some other balance of attempted value satisfaction would have yielded higher utility, predictably. So it shouldn’t surprise me that I escaped a local maximum once I stopped doing that.

On the other hand, it’s not the case that I thought, “actually, I can harvest more utils total by sacrificing academic excellence for success in other things”. What I thought, and what actually happened, was that I valued academic excellence less than I used to.

“I can harvest more utils total by sacrificing academic excellence for success in other things” is a thought past me was simply incapable of having. Why is that? I think it’s because it would require believing my central value would not be satisfied.

Provided I must believe my central values will be satisfied, isn’t adjusting my values until they’re satisfiable a wise policy?

And that’s essentially what Jennifer was recommending to Gao, I think. “Your values were ridiculously hard to satisfy; didn’t learning that cause you to adjust your values?”

But I’m glad he didn’t. I wouldn’t have met him at that CFAR workshop, for one thing. But in general, the worlds in which Gao stops valuing things that prove difficult to attain seem sadder to me.

4.

Kierkegaard explores this weird bit of value theory by postulating three kinds of people.

Imagine three peasant men who are hopelessly in love with a princess who will never return their affections, and each of them is fully aware that she’s unattainable.

The first man, recognizing his value cannot be satisfied, abandons his love for the princess. “Such a love is foolishness,” he says. “The rich brewer's widow is a match fully as good and respectable.” He stops valuing the love of the princess, and goes looking for a more easily satisfied value. Kierkegaard calls this person an “aesthete”. (Fair warning, there might be a couple different kinds of people he calls “aesthete”, but I’m only talking about this version here.)

The second man, recognizing his value cannot be satisfied, goes right on loving the princess as much as he always did, and also believes he will get the princess. He believes an outright contradiction: His value will be satisfied, and his value cannot be satisfied. Kierkegaard calls this person the “Knight of Faith”.

The third man, recognizing his value cannot be satisfied, goes right on loving the princess as much as he always did, all the while believing her love is unattainable. This person Kierkegaard calls the “Knight of Infinite Resignation”.

These seem to me to cover the possibility space. Either you stop loving the princess, you do some weird doublethink about the princess, or you truly believe in your own doom.

I’m at least a little concerned by every option here.

The Knight of Faith will have a bad problem if he wants to make accurate predictions about the world, since his epistemology is about as broken as I know how to make a thing. And maybe he doesn’t care about making accurate predictions in order to control the world. But, like, I do.

The aesthete’s perspective sounds sort of reasonable at first, but then I think it through to its necessary conclusion. If my policy says to adjust my values so I prefer the rich brewer’s widow over the princess because the widow is easier to get, then my policy also says to adjust my values so I prefer dirt to the rich brewer’s widow, since dirt is easier still.

Truly is the Way easy for those with tautological utility functions. As the saying goes.

But some people bite this bullet. Here’s a passage from Chapter Six of the Zhuangzi (the Ziporyn translation):

Ziyu said, “How great is the Creator of Things, making me all tangled up like this!” For his chin was tucked into his navel, [and a bunch of other stuff was going wrong with his body due to illness]. But his mind was relaxed and unbothered. He hobbled over to the well to get a look at his reflection. “Wow!” he said. “The Creator of Things has really gone and tangled me up!”
Zisi said, “Do you dislike it?”
Ziyu said, “Not at all. What is there to dislike? Perhaps he will transform my left arm into a rooster; thereby I’ll be announcing the dawn. Perhaps he will transform my right arm into a crossbow pellet; thereby I’ll be seeking out an owl to roast. Perhaps he will transform my ass into wheels and my spirit into a horse; thereby I’ll be riding along - will I need any other vehicle? Anyway, getting it is a matter of the time coming, and losing it is just something else to follow along with. Content in the time and finding one’s place in the process of following along, joy and sorrow are unable to seep in. (…) But it has long been the case that mere beings cannot overpower Heaven. What is there for me to dislike about it?”

In other words, as Sheryl Crow put it, “It’s not having what you want, it’s wanting what you’ve got.” And sometimes, what you’ve got is a chicken for a left arm.

By my reading, the Zhuangzi prescribes either constantly adjusting your values so that they’re always perfectly satisfied by the current state of the world, or not having any values at all, thereby achieving a similar outcome. Most of the practices it references seem to be aimed at accomplishing that.

(I make no claims about whether the Zhuangzi prescribes the opposite as well.)

It’s sort of like wireheading, but it sidesteps the problem wherein your values might involve states of the world instead of just experiences.

5.

I can’t quite tell whether I have a principled objection to this perspective on value policy, though I sure as hell have an unprincipled one.

When I imagine the world where everyone is a perfect Taoist sage, with preferences that perfectly adapt to the state of the world, I feel super not ok with that; it makes me even more uncomfortable than thinking about orgasmia.

In orgasmia, I’m clear on why things are non-awesome: People are ultra happy all the time, but their values haven’t necessarily changed, so anybody who values things besides happiness will never get what they want. And I value people getting what they want.

The Taoist sages, unlike wireheaders, aren’t even happy! A Taoist sage’s mental state is whatever her mental state happens to be -

- which is presumably “extreme suffering”, right up until she dies of starvation. I mean, why would she eat? When she got hungry, she’d value her hunger, never seeking to “overpower Heaven” by trying to change how her stomach felt. If I recall correctly, there’s even a point somewhere in the Zhuangzi where a student asks a teacher precisely this question - Why don’t the sages starve to death? - and the teacher… never really answers. shrug

But! The Taoist sages happen to value exactly whatever mental state they’re in at any moment, since their mental states are part of the world. And they value whatever state the world is in at the moment, even if that happens to be “my left arm is a chicken”, or, “everyone’s starving to death”, or “there’s an asteroid headed toward Earth that will sterilize the entire planet”.

So at the very least, I feel like I can conclude this about the aesthete: Anyone who adjusts their values in response to finding them too hard to satisfy is only being reasonable if they want to be down with their left arm becoming a rooster.

6.

And that brings us to the Knight of Infinite Resignation.

(I’m probably with you about Continental philosophers overall, but you’ve got to admit, they have a flair for the dramatic.)

There was a recent post on LessWrong in which Anna and Duncan talked about wielding the power of “despair”.

“Despair can be a key that unlocks whole swaths of the territory. When you’re ‘up,’ your current strategy is often weirdly entangled with your overall sense of resolve and commitment — we sometimes have a hard time critically and objectively evaluating parts C, D, and J because flaws in C, D, and J would threaten the whole edifice. But when you’re ‘down,’ the obviousness of your impending doom means that you can look critically at your past assumptions without having to defend anything.”

Before Eliezer fixed my Seasonal Affective Disorder by replacing our entire apartment with light bulbs, I spent a lot of time depressed. When I was depressed, my beliefs about whether my values could ever be satisfied were often wrong. I often believed, for instance, that I’d never again feel happy.

But even though I was wrong, the ease with which those thoughts floated through my mind is notable.

If it were the case that I’d never again be happy, and I encountered strong evidence of that, I’d have experienced no resistance at all to updating toward the truth, even though I valued happiness highly (despite being unable to remember what it felt like).

The Knight of Infinite Resignation is not necessarily depressed, yet he can do a thing past me could not do when she got a C, and could only ever do from the depths of despair: He encounters evidence that his values cannot be satisfied, and he updates. Simple as that. No great spiritual battle, no rationalization, no resistance to seeing the world as it is. Just, “Oh, I guess I’m doomed, then.” And he goes on believing that, forever, unless contrary evidence convinces him otherwise.

The Knight of Infinite Resignation is epistemically stronger than most of us - that is, he has greater power to make accurate predictions that allow him to control the world. Maybe he feels despair in response to his revelation of doom - appropriate, I think - but he doesn’t need to be in despair to have the revelation in the first place.

Still, this Knight also disturbs me, in the limit.

Imagine you have exactly one value: the princess’s love. You find out she’ll never love you back no matter what. You don’t deceive yourself into believing she’ll somehow love you anyway, so you know your one and only value will never be satisfied.

Now do you want to be down with your left arm becoming a chicken?

7.

I strive to wield the power of despair without having to be depressed. I would like to be able to believe that I am doomed when I am doomed; otherwise I’ll resist believing that I am in danger even when that belief would let me prevent harm.

Also, I strive not to believe contradictions, or to rationalize, or to play other strange games with myself that let conflicting beliefs hide in separate corners of my mind.

Also also, I don’t want to be down with my left arm becoming a chicken, or with an asteroid destroying the Earth.

So, now what?

[ETA: Someone asked for clarification on my issue with the Knight of Infinite Resignation, since the Knight of IR seemed to them to be a correct thing to want to be. Here's an abstract summary: My issue with the Knight of IR is that if I built a person from scratch, I would not give them unsatisfiable values, from which I infer that I would prefer people not end up with unsatisfiable values. If I would prefer people not end up with unsatisfiable values, then (I think?) I must also prefer that people who end up with unsatisfiable values later end up without them. And if I'd prefer their values change by accident in that situation, I must also condone people changing their values on purpose if they develop an unsatisfiable value. But if I think people should change their values when they discover them to be unsatisfiable, then I think people should want to be Taoist sages. And I don't want people to be Taoist sages.]

[Edit edit: You know, I think I'm actually just wrong here, and people should be Potential Knights of Infinite Resignation. I guess a lot of the Sequences is basically a handbook on how to become a Potential Knight of Infinite Resignation. But I'm still confused about things involving changing values.]

6 comments:

entirelyuseless said...

I would advocate being a higher order Taoist sage: you are not so content with the world that you do not eat, but you are content that the world is such that you go and eat when you are hungry. And likewise, you are content that the world is such that you will not attain the princess.

edgewitch said...

I'm not entirely sure I can imagine having exactly one value, exactly one possible satisfactory outcome.

Wouldn't it be just as reasonable for the student, forced to acknowledge that academic perfection is no longer in the realm of possibility, to put some resources toward evaluating why they expected to do well and did not, instead of deciding to put their resources toward other ends? Or to treat failure not as a reason to change their goals, but as a reason to change their tactics?

Is there a difference, in principle, between abandoning a value and adding a new value? What about between abandoning a value and elevating the priority of a different value? What about between adjusting priority and deciding to pursue a less satisfactory (but more possible) value?

I'm not sure I see why the resigned peasant shouldn't acknowledge that his One True Love will never love him, and then go on to decide that he can still have a meaningful relationship with someone else, even if it will not be as meaningful as he imagines his relationship with the princess would be in the counterfactual reality where his affection was returned. I suppose this assumes that there is another value (desire for a meaningful relationship) motivating his desire for the princess to return his feelings, though.

It's worth making a distinction between impossible and difficult. It is difficult to do extraordinarily well in academic pursuits. It is impossible for the student, having done poorly on one test, to continue considering herself a "perfect straight A student". It is impossible (or perhaps merely unethical) for the peasant to change how the princess feels about him. It is not impossible (perhaps not even difficult) for the sage to eat something.

Hm. The idea of a single, central core value is still striking me as Very Weird. So it kind of makes sense that the idea of people having values that cannot be satisfied doesn't bother me as much as it seems to bother you, because I can't imagine not having other values that can be pursued and satisfied. My other intuitive response is to re-interpret the impossible value as something more general that might be maximally satisfied by the old goal, but can still be partially satisfied (meaningful relationships; extraordinary, but not perfect, academic excellence).

Can't shake the feeling that I've missed the point and am coming at your idea from the wrong level of abstraction.

Chrysophylax said...

A person with a single unsatisfiable value can't willingly change that value except as a way to get closer to their goal. If the only thing I value is the love of the princess, I can't decide to start valuing the brewer's widow instead, because my preferences say nothing about her. A person with a single unsatisfiable preference is just a conscious form of dolorium, and should be re-written or deleted immediately. (Remember that they don't have a preference for their own continued and unedited existence, except insofar as that leads to satisfying their sole value.)

There's a very important difference between changing your preferences and optimising given your preferences - but only when you consider absurd simplifications. If you actually have a complex structure of preferences and meta-preferences, it seems legitimate to say "I prefer X, but I prefer not to prefer X" or "I prefer X and ~X, so I choose (using some other preference or meta-preference) to resolve my conflict in favour of the one that's easier to achieve", or indeed "I prefer X and Y, but don't have the resources to get both, so I choose to get Y and self-modify to not prefer X, because I prefer to maximise my utility".

Having complex values is only important as an instrumental goal. If you wouldn't invent a thousand new kinds of suffering, each more torturous than the last, your real preference is for complex values that get satisfied all the time or in interesting patterns, not just complex values.

Greg Perkins said...

Perhaps it is not so much "changing" values as realizing that one's prior estimations of their values were inaccurate. Some Taoist sage (after reading Jung) might suggest that by letting apparently "external" circumstance "change" one's values, in reality one is coming to a more accurate understanding of the structure of their unconscious self. One needn't be too worried about failing to eat -- that appears to be a low probability occurrence. Sure, some mystic sages do wander off and sit under trees for weeks without eating and scare everyone, but it's pretty unlikely that any individual would end up in that position.

Another note is that it is perhaps impossible to *purely* exercise any of the three strategies you describe. So one is in a constant drift between them, and if one ever finds oneself too close to one strategy or another, a proper way of making progress would be to overcorrect towards a different strategy.

Alleged Wisdom said...

This is the kind of problem that is very hard if you are a deontologist and very easy if you are a consequentialist. 'Never change values' and 'always change values' are both very problematic. The correct solution is 'sometimes change values'.

For any value you might have, there is a cost of changing that value and a cost of satisfying that value. The value 'do not starve' is very expensive to change and very cheap to satisfy. The value 'have the princess love me' is very expensive to satisfy and usually pretty cheap to change. The optimal strategy is to do whatever is cheapest.
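A minimal sketch of that rule in Python (the costs here are made-up illustrations; in real life, estimating them is the hard part):

    # "Do whatever is cheapest": compare the cost of satisfying a value
    # against the cost of changing it, and pick the cheaper option.
    def best_response(value, cost_to_satisfy, cost_to_change):
        if cost_to_satisfy <= cost_to_change:
            return f"satisfy {value!r} (cost {cost_to_satisfy})"
        return f"change {value!r} (cost {cost_to_change})"

    print(best_response("do not starve", cost_to_satisfy=1, cost_to_change=1000))
    print(best_response("have the princess love me", cost_to_satisfy=10**6, cost_to_change=50))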

Or, as the saying goes:
"Grant me the serenity to accept the things I cannot change,
The courage to change the things I can,
And the wisdom to know the difference."

Unknown said...

Could it be that the skills one learns on the Taoist sage path are the tools one wields in order to survive life as the Knight of Infinite Resignation?

Non-sequitur:
The line "replacing our entire apartment with light bulbs" is fantastic. Do you now call it your Fortress of Lumens? :)