Wednesday, August 20, 2014

Small, Consistent Effort: Uncharted Waters In the Art of Rationality

Summary: I predict that there are powerful secrets yet to be uncovered in the area of rationality skills that are fairly easy but take a long time to learn.

I've been thinking about what sorts of things rationality skills are, and how they are gained. By "rationality skills", I mean patterns of thought and feeling that contribute to systematic improvement of the accuracy of beliefs, and of the satisfaction of values. 

The sort of categorization that most interests me is based on how the skills are acquired. I imagine a grid of rationality skill acquisition. It looks like this.

[Image: a two-by-two grid of rationality skill acquisition. The X axis runs from quick to learn (left) to slow to learn (right); the Y axis runs from easy (bottom) to difficult (top). The quadrants, as discussed below: epiphanies (quick and difficult), bug patches (quick and easy), tortoise skills (slow and easy), and Wizard skills (slow and difficult).]

Things farther to the left take less time to learn, while things farther to the right require some combination of processing time, many iterations, and long strings of dependencies on other skills that must be acquired serially. While "difficult" and "takes a long time to learn" may be highly correlated, I don't think they're the same thing.

It can take a child quite a while to learn long division. You generally need to learn addition in order to learn subtraction and multiplication, multiplication in order to learn division, and all of those in order to learn the final procedure that leads to the right answer, which depends on multiplication and subtraction (and division, if you want to be efficient). All together, that can take a long time.

But once you've got all the pieces of basic arithmetic, the final procedure is pretty easy. If you've got detailed instructions in front of you, it can even be carried out correctly on the very first try. And the pieces themselves are pretty straightforward, especially if you recognize that executing the algorithms will suffice and deep understanding isn't strictly necessary. It may be a long and complex process if you've never seen arithmetic before, but the greatest inferential gap is either between addition and multiplication or between multiplication and division. Those are leaps average grade-schoolers can make. No individual part is all that difficult to get your head around.
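
To see how little the final procedure adds on top of the pieces, here's a minimal sketch of digit-by-digit long division; the only real work in it is comparison, multiplication, and subtraction.

```python
def long_division(dividend, divisor):
    """Digit-by-digit long division for a non-negative dividend and positive divisor.

    Everything here is just comparison, multiplication, and subtraction.
    """
    quotient = 0
    remainder = 0
    for digit in str(dividend):                        # bring down the next digit
        remainder = remainder * 10 + int(digit)
        q_digit = 0
        while (q_digit + 1) * divisor <= remainder:    # multiply and compare
            q_digit += 1
        remainder -= q_digit * divisor                 # subtract
        quotient = quotient * 10 + q_digit
    return quotient, remainder

print(long_division(9876, 32))   # (308, 20), since 32 * 308 + 20 = 9876
```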

But consider the simplest problems in elementary algebra. In addition to the basic arithmetic operations, you need two more pieces: "doing the same thing to both sides of the equals sign", and "variable". "Doing the same thing to both sides of the equals sign" is even easier than "the procedure for long division".

But "variable" is fundamentally different. It requires a new kind of idea. It requires abstraction, which is not only new but inferentially distant. It may even be the greatest inferential gap a child must cross in traditional math education up to pre-calculus. It isn't a complex idea, though, and there's not really such a thing as "half-way understanding variable". You get it or you don't, and when you get it, elementary algebra suddenly makes sense. "Variable" is probably an epiphany. And it's a difficult enough epiphany that, according to Jo Boaler, a great many adults never do have it.

I think the Lesswrong Sequences are mostly good for a few epiphanies. They're largely boot-strapping sorts of epiphanies, which re-order your mind in ways that allow for further epiphanies. But they're still epiphanies. They're skills that are difficult to gain but happen all at once, in this case over the course of reading a blog post. They're mostly things of the form "understanding X" or "realizing that Y". And most of the potential lessons of the sequences are fairly difficult unless you happen to have a mind with exactly the right arrangement, which is part of why most people don't have their whole mind rearranged once per post. So the Sequences mostly exist in the upper left corner of the skill acquisition grid.

CFAR workshops occupy the whole left half of the grid. Most of what's taught in the actual classes falls in the bottom left--quick and easy--because the lessons are only fifty minutes, and they're mostly practical instead of conceptual. Rather than lecturing you for an hour, as though reading several Sequence posts aloud, they're more like, "Here is a procedure that is surprisingly domain-generally useful. Let's practice."

For example, CFAR teaches Trigger-Action Planning, known in the Cog Sci literature as "implementation intentions". It's got even more bang for the effortful buck than memory palaces: the effect size is similarly enormous, but it helps with anything that can be broken down into concrete triggers and concrete actions. And all it takes is learning to compose specific enough if-then statements, like so: "If I hear my alarm in the morning, then I will hop out of bed immediately." Other bug patches CFAR installs include Murphy Jitsu, Goal Factoring, Focused Grit, and Againstness. (Don't worry, I'll discuss exceptions to this in a minute.)
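
Since a TAP is literally an if-then pair, the format is easy to sketch; the second trigger below is just a hypothetical example I made up:

```python
# Trigger-action plans as literal if-then pairs: a concrete, sensory trigger
# on the left, a concrete action on the right. (The second entry is hypothetical.)
taps = [
    ("I hear my alarm in the morning", "hop out of bed immediately"),
    ("I put my hand on the front door handle", "check that I have my keys"),
]

for trigger, action in taps:
    print(f"If {trigger}, then I will {action}.")
```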

The rest of the CFAR experience, the socialization outside of classes, usually causes at least one epiphany. Participants have conversations with instructors and other participants, and since everybody there is carefully selected to be bright, curious, and interesting in diverse ways, there's always somebody saying, "Wow, I've never thought of that!"

CFAR teaches one lesson from the bottom right quadrant: Comfort Zone Expansion, or CoZE. CoZE is basically CFAR's take on exposure therapy. Exposure therapy can take a long time. Though you might see progress right away, you're usually not going to wipe out a deep fear or anxiety in a single go. It takes repeated exposure with a slow and steady increase in intensity.

But exposure therapy is fairly easy! Scary, though by design not too scary, and not difficult. The principle is not hard to understand, the procedure is straightforward, and there's just not much more to it than that. It takes time, is all. So CFAR devotes a lot more time to CoZE than to the other units. There's a standard fifty-minute CoZE prep class, and there's an entire evening devoted to the "CoZE outing", where everybody goes off for hours in search of repeated exposure to a feared stimulus. CoZE is a tortoise skill. "Slow and steady wins the race." It relies almost entirely on small, consistent efforts.

Some of CFAR's other lessons may be close to the middle of the X axis, but I don't think there are any others that must necessarily take many iterations to properly install.

There is one skillset CFAR attempts to impart in a class format that I think falls in the top right quadrant: Bayesian reasoning. It is not merely an epiphany, and if you want a version that works in real life, it is not a bug patch. When last I saw it (June 2014), the Bayes unit was not up to the same standard as the Bug Patch units or CoZE, and I think I may now understand a big chunk of why.

Bayesian reasoning depends on some pretty mind-twisty habits of thought. Not only are the skills difficult to attain, but they require a combination of long processing time, many iterations, and long strings of dependencies. It takes a couple epiphanies, a few bug patches, lots of habit installation, and the long and difficult process of weaving all of that together into fully Bayesian patterns of thought and feeling. A two-hour class is simply not the right format to get all of that done.
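
For a sense of the kind of calculation the unit is trying to turn into a reflex, here's a standard base-rate sketch with made-up numbers:

```python
# Bayes' rule on an invented screening example: a 1% base rate, a test with
# 90% sensitivity and a 9% false-positive rate. How likely is the condition
# given a positive result?
prior = 0.01
p_pos_given_condition = 0.90
p_pos_given_no_condition = 0.09

p_positive = prior * p_pos_given_condition + (1 - prior) * p_pos_given_no_condition
posterior = prior * p_pos_given_condition / p_positive

print(round(posterior, 3))   # 0.092 -- a positive result still leaves you under 10%
```

Running that arithmetic is the easy part; feeling the pull of the base rate before you've reached for a calculator is the part that takes the other three quadrants.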

[CFAR does offer six weeks of 1-on-1s for all participants, so there's more room for imparting Tortoise skills than the workshop itself allows. But those are extremely personalized, more like counseling than the usual sort of teaching, and it's hard for them to scale in the same way as the Sequences or the standard batch of CFAR units, so I'm not discussing those so much.]

Wizard skillsets like Bayesian reasoning are definitely possible to attain. I think almost all of that attainment, if not all of it, happens by acquiring components from the other three quadrants and weaving them together over time. If there are rationality skills that primitively require slow and difficult acquisition, I don't know what they are. Most of the really badass epistemic skills, I suspect, are Wizard skills. And so far, CFAR plus the Sequences seldom seem to be enough to get people there.

I've learned some hard things. I've learned to prove theorems of nonstandard mathematics that defy my most basic logical intuitions, for example. I've learned to interpret ancient, bizarre, abstract Indian philosophy. I've learned to follow Blues dance like nobody's business. And I can't think of a single skill I've gained that simply could not be broken down into quick and easy bug patches, getting-my-mind-around-it-ness, and boatloads of small, consistent efforts.

So maybe I'm wrong, and most of the Wizard skills worth having are primitively slow and difficult to attain. After all, that's one theory that explains why I lack Beisutsukai-level mastery. There's got to be something Anna Salamon and Eliezer Yudkowsky share that I lack, and maybe this is it.

But you know what Anna and Eliezer definitely have that I don't? Practice. Years and years of practice. I heard the word "rationalist" outside of Cartesian philosophy for the first time just two years ago. So maybe, while I've had most of the epiphanies I'm going to get from Lesswrong's material, and while I've installed most of CFAR's bug patches, there's a third class of easily attainable skills I must gain before I can weave all of it together and become far stronger as a rationalist.

If this is true, it's very good news! It means that if I can look at the Wizard skills I desire and break them down into the epiphanies and bug patches I already have, I may be able to ask myself, "What part of this puzzle is going to take small, consistent effort?" And I might well come up with a useful answer!

With a single exception, all of the skills I've gained directly from Eliezer while living with him over the past year confirm this hypothesis. (He gave me one all-or-nothing epiphany in person, which was "fail more".) All of the others followed more or less the same pattern:

  1. He emphasized the importance of something I already basically had my head around, both abstractly in principle and concretely in practice.
  2. I decided to practice CONSTANT VIGILANCE for a single failure mode associated with lack of the skill.
  3. I noticed the failure several times over the course of days or weeks until I could predict when I was about to experience the failure mode.
  4. I practiced CONSTANT VIGILANCE for times when I could feel that the failure mode was about to happen.
  5. I tested out a few ways of responding to the feeling that the failure mode was about to happen, to find out what overcoming the problem might feel like.
  6. I let the results of those tests process for a little while.
  7. Often, I ran my observations by Eliezer to get his feedback.
  8. I composed a trigger-action plan (though usually not in writing) with the trigger "I notice I'm about to experience the failure mode if I don't do anything to stop it", and an action I expected to avert the failure.
  9. I practiced the trigger-action until it felt like a background habit.
  10. I wove my understanding of the problem and its import into my practice.
Imagine a master rationalist does the exercise I described above, picking a Wizard skill and sorting its components into the other quadrants. And imagine she wants to teach that skill to me. She can say some things about what must be understood, hoping to cause the relevant quick but difficult epiphanies. She can give me some simple bug patches to install if quick and easy solutions are part of it. Then, for every slow but easy tortoise component, she could drastically speed up my skill acquisition by providing me with, or otherwise helping me uncover, the following information.

  1. What it feels like to notice the failure mode itself, or how to find out what it feels like.
  2. What it feels like to notice that the failure mode is about to happen, or some things it might feel like.
  3. What to do when I notice that feeling, or a few options for what to try.
A compilation of such advice on tortoises, especially if it were presented in a way that encouraged consistent check-ins and small efforts toward improvement, would be a new kind of rationality resource.

It would not, however, be unprecedented in other domains. Without even doing research, I am aware of books approximating this concept focusing on yoga and mindfulness, writing, and physics. I think we need one of these for the art of rationality.

Tuesday, August 12, 2014

Ways Nouns Verb Other Nouns

There's an incredibly important mnemonics exercise that I've somehow neglected to mention to anyone up to this point: Set a five minute timer and write down as many ways as possible for objects to interact with other objects. You might want to work with a particular example, such as "camera" and "watermelon". Or you might want to stick with ways people in particular can interact with things.
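
If you'd rather have the prompts handed to you, a throwaway script along these lines would do it (the noun list is just a placeholder to swap for your own):

```python
import random

# Placeholder nouns -- substitute whatever objects you actually want to practice with.
nouns = ["camera", "watermelon", "stapler", "lighthouse", "violin", "octopus"]

def prompt():
    """Pick two distinct nouns and ask how one could interact with the other."""
    a, b = random.sample(nouns, 2)
    return f"Five minute timer: list as many ways as you can for a {a} to interact with a {b}."

print(prompt())
```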

In mnemonics, you're constrained by how rigidly your brain insists on completing the usual pattern instead of doing something else. (It occurs to me that you could replace "mnemonics" with just about anything and preserve the truth value of the previous sentence. But it's especially clear-cut in mnemonics.) If you're trying to bind "camera" to "watermelon", it may be that the first thing that comes to mind is "camera takes a picture of the watermelon". It's natural to get stuck on that not-very-memorable image, going round and round with the query "camera watermelon?" and your brain's insistence upon the answer, "camera takes picture of watermelon". You say, "No brain, I need something else," and your brain is all, "Um, but that's what cameras do. How about... camera takes picture of watermelon?"

To reliably escape loops like that, it helps to have practiced the mental motion of trying out other possible interactions, and it helps to have a whole arsenal of them ready to go.

Here, I'll demonstrate. Camera and watermelon. Off the top of my head--really, I'm going to note the very first things that come to mind, like I would in real life:

  • Yes, the camera could take a picture of the watermelon, thanks brain, keep thinking. 
  • The camera could transform into the watermelon, or melt over it, or deform to surround it, or absorb it, or bounce off of it. (Those count as one because they're my standard, not-really-trying interaction collection.) 
  • The watermelon could eat the camera. 
  • It could tackle the camera. 
  • The camera could wrap its strap around the watermelon and strangle it, or make a noose and hang it, or drag it along on a leash. (All things to do with the strap, so that's one.)
  • The camera and the watermelon could attempt to have sex, in which case they might spend the whole time looking for some configuration that would allow such a thing to happen despite their apparently incompatible anatomy. 
  • The watermelon could be fired out of the barrel of the camera. 
  • The watermelon could fall on the camera, crushing it and splattering its guts everywhere. 
  • The watermelon and the camera could tango, or waltz, or charleston, or do jumping jacks facing each other, or skip while holding hands, or race. (Those are all physical partner activities, so that's one.) 
  • The watermelon could spit seeds at the camera while the camera frantically dodges. 
  • The watermelon could vomit the camera. 
  • The camera could have a human-shaped body and a superpower that lets it shoot watermelons out of its wrists the way Spider-Man does with webs. (That was overly complicated, brain, but I appreciate the... whatever it is you just did to make that.)
  • The camera could shit the watermelon. 
  • The watermelon could give birth to the camera. 
  • The watermelon could roll over the camera, squishing it and collecting it up like Katamari Damacy.
  • The camera could make out with the watermelon.
  • The camera could play tag with the watermelon.

Finding things like this is quick and easy once you're used to it. I couldn't type nearly fast enough to get these down as quickly as I thought of them. (To be clear, I'm trying to give you evidence of your own potential, not to show off.) I've been at this long enough that I didn't have to stop for breath to make that list, and it ended because I didn't want to waste your time or use up too many ideas you might have if you tried this exercise. The watermelon would be finding ways to sharpen the camera's mechanical parts into various weapons by the time I was actually done.

It's slow and effortful, though, if you try to do it in real life without having practiced. And it's essential that this become easy for you, if you're after order of magnitude improvements to your internal memory.

Binding is the foundation of all palace-style mnemonics. Once you have a basic two-place relationship that isn't the normal expected thing, you can just feed that to your inner simulator and it'll start filling in all kinds of unexpected, emotionally potent details all on its own as you let the story play out.* With only the expected relationship, you have to make a separate effort to insert every single little detail required to boost the memorability.

There's no way mnemonic techniques will work fast enough to actually be useful if every time you cast out for something besides the usual pattern, your net comes back empty. You'll be stuck with the ordinary, boring, expected pattern. And there's nothing memorable about that.

*Incidentally, the PAO system for number memorization is a systematized application of this principle. "PAO" stands for "Person, Action, Object". To each number between 0 and 99, you assign a person, an action, and an object. Suppose 23 is Jean-Luc Picard sipping a cup of Earl Grey tea, 45 is Captain Jack Harkness fucking a pterodactyl, and 83 is Barney the dinosaur eating a cake. To memorize any six-digit number, you have the person from the first two digits do the action from the second two digits to the object from the third two digits. And you end up with "234,583" being encoded as "Jean-Luc Picard fucking a cake". Now when you feed your brain a question like, "What does that sound like?" you don't have to do any extra work to come up with a memorable answer. Your inner simulator has something way outside of any of its usual patterns, and just about anything it could possibly supply for "the sound of Picard fucking a cake" is going to be highly memorable.
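
As a sketch of the mechanics, using only the three example entries above as a toy table:

```python
# PAO encoding sketch. A real system assigns a (person, action, object) triple
# to every number from 00 to 99; this toy table has only the three entries above.
pao = {
    23: ("Jean-Luc Picard", "sipping", "a cup of Earl Grey tea"),
    45: ("Captain Jack Harkness", "fucking", "a pterodactyl"),
    83: ("Barney the dinosaur", "eating", "a cake"),
}

def encode(number):
    """Turn a six-digit number into a single person-action-object image."""
    digits = f"{number:06d}"
    person = pao[int(digits[0:2])][0]   # person from the first two digits
    action = pao[int(digits[2:4])][1]   # action from the second two digits
    obj = pao[int(digits[4:6])][2]      # object from the third two digits
    return f"{person} {action} {obj}"

print(encode(234583))   # Jean-Luc Picard fucking a cake
```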

________________________________________________________________________

In other news, I've recently started offering private lessons in mnemonics, and it's going swimmingly so far. If you want to get good at this stuff super fast, I don't know of a better way than to work with me for an hour. Besides maybe working with me for three hours. I'm charging $100 to $200 an hour depending on the goal. You don't have to live in the Bay Area, because we all live in the future. Email me at strohl89@gmail.com if you're interested.

Sunday, August 3, 2014

Explaining Effective Altruism to System 1

[This obviously borrows heavily from the ideas of Eliezer Yudkowsky. In particular, much of it recaps and expands on his talk at the Effective Altruism Retreat of 2014, though I suspect my own ideas fed into that talk anyway. There are also SPOILERS up to chapter 55 of Harry Potter and the Methods of Rationality.]

I donated to the Humane Society once. There was this charismatic twenty-something holding a clipboard, and I hadn't yet learned to walk past such people on the street. So I stood there and listened, while they told me about lonely puppies raised in tiny, dirty, wire cages; sick and shivering puppies deprived of proper veterinary care, affection, and adequate food and water; frightened puppies, abused and exploited for their market value.

I like puppies. They're fluffy and have great big eyes. They make cute little noises when I play tug of war with them. And it makes me very sad when I imagine them hurting. Clipboard Person told me I could rescue a puppy by donating just ten dollars a month to the Humane Society. So I did. I couldn't help myself.*

The Humane Society of the United States is a nonprofit organization working to reduce animal suffering in the US. The Machine Intelligence Research Institute, another nonprofit, is working to ensure prosperity for the entire future of humanity. HSUS stops puppy mills and factory farming from hurting animals. MIRI stops artificial general intelligence from destroying the world.**

In 2012, HSUS supporters outdid MIRI supporters one hundred fold in donations.***

Look at this popup.

[Image: a donation popup from the HSUS website, showing puppies in the arms of their rescuer next to an appeal to become their hero by donating.]

In this popup--the first thing I see when I visit the HSUS website--I'm told I can be a hero. I'm shown these pink-pawed kissable baby dogs in the arms of their new loving owner [this might actually be a volunteer or police officer or something, whatever], and I'm implicitly led to imagine that if I don't donate, those puppies will suffer and die horribly. If I don't act, terrible things will happen to creatures I automatically care about, and I am personally responsible. This message is concrete, immediate, and heart-wrenching.

Animal advocacy activists have to do approximately zero work to speak to potential donors in the Language of System 1. Which means System 1 automatically gets the message. And guess who's primarily in charge of motivating such actions as pulling out your checkbook. (Hint: It's not System 2.)

This just isn't fair.
__________________________________________________

Wouldn't it be great if we could grok our support of such strange and abstract EA organizations as the Machine Intelligence Research Institute on the same automatic emotional level that we grok animal advocacy?

I think we can. It takes work. But I think it's possible, and I think I've got some ideas about how to do it. The basic idea is to translate "I should help MIRI" into a message that is similarly concrete, immediate, and heart-wrenching.

So what is the problem, exactly? Why is MIRI so hard for System 1 to understand?

I think the main problem is that the people MIRI's trying to save are difficult to empathize with. If my best friend were dying in front of me and there were a button beside him labeled "save best friend's life", I'd feel motivated to push it even if I had no idea how it worked. But even if I could give S1 an excellent understanding of how MIRI plans to accomplish its goal of saving everyone, it wouldn't change much unless my emotions were behind the goal itself.

VERY IMPORTANT: Do not employ these sorts of methods to get your emotions behind the goal before System 2 is quite certain it's a good idea. Otherwise, you might end up giving all your money to the Humane Society or something. 

Why are most of the people MIRI wants to save so hard to empathize with? I think my lack of empathy is overdetermined.

  1. There are too many of them (something on the order of 10^58), and System 1 can't get a handle on that. No matter how good S2 is at math, S1 thinks huge numbers aren't real, so huge numbers of people aren't real either.
  2. Most of them are really far away. Not only are they not right in front of me, but most of them aren't even on my planet, or in my galaxy. S1 is inclined to care only about the people in my immediate vicinity, and when I care about people who are far away, there's generally something else connecting us. S2 thinks this is bollocks, but isn't directly in charge of my emotions.
  3. They're also distant in time. Though S2 knows there's no sense in which people of the far distant future are any less real than the people who exist right now, S1 doesn't actually believe in them.
  4. They're very strange, and therefore hard to imagine concretely. Day-to-day life will change a lot over time, as it always has. People probably won't even be made of proteins for very much longer. The people I'm trying to empathize with are patterns of computations, and S1 completely fails to register that that's really what people are already. S1 doesn't know how such a thing would look, feel, taste, smell, or sound. It has no satisfying stories to tell itself about them.****
  5. I don't imagine myself as living in the future, and S1 is indifferent about things that don't directly involve me. [I feel this so strongly that the first version of 5 said "I don't live in the future," and it took several re-readings before I noticed how ridiculous that was.]
Note that most of these obstacles to S1 understanding apply to world poverty reduction and animal altruism as well. People in the developing world are numerous, distant, and tend to live lives very different from my own. This is true of most animals as well. The population of the far distant future is simply an extreme case.

So those are some S1 weaknesses. But S1 also has strengths to bring to bear on this problem. It's great at feeling empathy and motivation under certain circumstances.

  1. S1 can model individuals. It can imagine with solid emotional impact the experience of one single other person.
  2. It can handle things that are happening nearby.
  3. It can handle things that are happening right now.
  4. It feels lots of strong emotions about its "tribe", the people in its natural circle of concern (my family, friends, school, etc.)
  5. It cares especially about people with familiar experiences it can easily imagine in vivid sensory detail.
  6. It loves stories.
  7. It gets a better grip on ideas when things are exaggerated.
  8. It's self-centered, in the sense of caring much more about things that involve me directly.
To translate "I should help MIRI" (and relevant associated ideas) into the Language of System 1, you'd need to craft a message that plays to S1's strengths while making up for its weaknesses.

I did this myself, so I'll try to walk you through the process I used.
__________________________________________________

[HPMOR SPOILERS BEGIN HERE]

I started with the central idea and the associated emotion, which I decided is "saving people" or "protecting people". I searched my association network near "saving people" for something concrete I could modify and build on.

I quickly came across "Harry James Potter-Evans-Verres in Azkaban", which is further associated with "patronus", "dementors", and "the woman who cried out for his help when Professor Quirrell's quieting charm was gone". Yes, THERE is the emotion I want to work with. Now I'm getting somewhere.

Now to encode the relevant information in a modification of this story.

In my story, I'm the one walking the halls of Azkaban, rather than Harry. There are too many people in the future, so I'll focus on one person in one cell. And it will be someone close to me, a particular person I know well and care for deeply. One of my best friends.

My version of Azkaban will extend for a few miles in all directions--not far enough to truly represent reality, but just far enough to give me the emotional impression of "really far". The future doesn't feel real, so I'll populate my Azkaban with a bunch of those future people, and my representations of them exist right now in this brick-and-mortar building around me. Some of them are strange in maybe implausible but fairly specific ways--they're aquatic, or silicon crystals, or super-intelligent shades of the color blue, whatever. They're people, and the woman beside me is familiar.

The central message is "save them"--save them from what? From suffering, from death, and from nonexistence. Conveniently, canon dementors already represent those things.

And what's the "patronus"? That's easy too. In my mind, "effective altruism" is the muggle term for "expecto patronum".

Finally, with a broad outline in place, I begin the story and run my simulation in full detail.
__________________________________________________

I imagine Azkaban. Imagine myself there. A gray prison with cold, damp walls. There are countless cells--I'm not sure how many, but there are at least a dozen in this hall, and a dozen halls on this floor, and a dozen floors in this wing alone. And in every single cell is a person.

There could be animals here, too, if I wanted. Puppies, even. Because this isn't a prison where bad people are sent to be punished and kept from hurting others. This is a much more terrible place, where the innocent go, just for having been born too early, for having lived before anyone knew how to save them from death.

I imagine that I'm walking down the hallway, lined with cells on either side. I hear the eerie clicking of my shoes against the stone floor. Feel the fear of distant dementors washing over me. And as I walk by, on my left, a single person cries out.

I look through the bars, and I can see her. My friend, lying in shadow, whose weak voice I now recognize. She is old and wasting away. She is malnourished, and sickly, and she will die soon. The dementors have fed on her all these years, and she is calling to me with her last breaths. Just as everyone in Azkaban would do if they knew that I'm here, if they knew they were not alone.

I live in a time when things like this still happen to people. To everyone. Some of us live good lives for a while. Others exist always in quiet desperation. But eventually, the dementors become too much for us, and we waste away. It's not pretty. Our bodies fail, our minds fail. We drown in our own phlegm after forgetting the names of our children.

I imagine my friend crying in that cell, wishing to be healthy and happy again, to live one more day. She is just one, chosen out of all the prisoners of the present and the vast future who will die if I just watch, doing nothing. Only I can help. I am here, so she is mine to protect. Everyone in Azkaban is mine to protect. They have nobody else. And if I could be everywhere in Azkaban, the cries for help would echo off of every wall.

But it doesn't have to be like this. Azkaban is an evil place, and I do not have to stand for it. Death is not invincible. I can think, and choose my path, and act.

What is Effective Altruism, in the limit? It is healing every wound. Not praying that someone else will do it, but reaching out myself, with everything I have, to destroy every dementor. To tear down these walls. To carry every prisoner to safety, restore them to health, and ensure no one has to suffer and die like this ever again.

It is seeing this one suffering woman who needs my help and choosing to protect her from the darkness--and knowing that she is every person in the future extended before me.

Harry cast his patronus to protect the woman, in the original story. But then he stopped. Because it wasn't time. He didn't have the power. Like the altruists of two hundred years ago, he wasn't ready. There was only so much he could do.

But now the time has come. Today is the pivotal moment in all of human history when I have the power to intervene effectively. I can cast my patronus, and never let it stop until every dementor is destroyed, and every person has been protected.

A dementor approaches from one end of the hall, seeking its prey. I feel it, radiating emptiness and despair, and a woman whimpers, "help me, please". From the other end of the hall, others who share my goals race in to help me. They gather before the dementor. Leaders of EA organizations, others who have dedicated their lives to existential risk reduction. They draw their wands and prepare for battle.

I look at my friend in her cell, her eyes pleading desperately, as I draw my wand and move into the beginning stance for the patronus charm. "I will save you," I say to her.

Moving my thumb and forefinger just the right distance apart, I imagine her smiling, revived, prospering.

I flick my wand once, and promise she will be free. Twice, and promise to free all the prisoners in this wing. Thrice, and promise to free every prisoner in Azkaban. Four times, and promise no dementor will hurt another living person ever again.

We level our wands straight at the dementor, brandishing them to drive away the darkness. And with victory in our voices, together we shout,

"EFFECTIVE ALTRUISM!"

The thought explodes from my wand, blazing with the brilliance of the MOST good. It joins with the patronuses of all the other effective altruists. The light burns down the hallway, freeing every prisoner it passes from despair and death. It burns through the walls, and they crumble. It burns in every direction, and one after another, the dementors are reduced to little piles of ashen cloth. Healing the wounds in the world. The light continues to grow, enveloping the patch of pebbles that once was Azkaban, our whole world, our galaxy, our future light cone.

Saving our people. Everyone. Everywhere. Forever.

"Effective altruism" is the muggle term for "expecto patronum". It needn't be merely an abstract idea we force ourselves to act on while our emotions lag behind. It can be our battle cry against death.
__________________________________________________________________


*I'd never heard of effective altruism then, of course. In fact, I didn't consider myself an altruist of any sort. I'm not sure I'd donated to anything at that point besides maybe SETI. The HSUS pitch was just really good.
**"Converting the reachable universe into quality adjusted life years is also cute." --Eliezer Yudkowsky, Effective Altruism Summit 2013
***In their 990s, HSUS reported $112,833,027 in grants and contributions, while MIRI reported $1,066,055.
****The Tale of the Far Future Person: "Once upon a time, there will have been an entity. The entity will have been alive and sentient. It will have had various experiences and values. Never dying, it will have satisfied its preferences ever after. The end."