Saturday, December 15, 2012

Sunday, December 9, 2012

The Story of My Journey into the Secular Community

I was raised Catholic.  My mother has been a devout (liberal) Catholic as long as I've known her.  Dad's been an atheist most of his life, but I guess my parents agreed to let Mom raise me and my brothers in the Church.

When I was little, I loved being Catholic.  I went to a Catholic school in the Midwest, where religion classes were mandatory beginning in preschool.  I guess "age of reason" was a pretty accurate description in my case, because by second grade I was very serious about understanding theology.  I considered preparing for first communion a grave responsibility.  It was, after all, the first sacrament I'd take of my own choice.  I was dedicated to understanding transubstantiation, why it matters, and what sacraments are really all about.  I remember struggling with the idea of symbols; I was never satisfied by the explanations of them my mother and teachers would give.

I was told that symbols are "outward signs of inward grace", and that they are there to help our small mortal minds comprehend God's infinite love and wisdom at least enough to let ourselves be transformed by them.  I was skeptical, even then.  I was worried that symbols might actually be distractions, or, worse yet, artificial barriers designed by the Church to control my relationship with God.  Why are priests the only ones who can ask God to turn bread into the Body of Christ? I wondered. If God is infinitely wise, what does He care for the infinitesimal wisdom accumulated through seminary?  I felt fairly certain that the only reason priests could serve as special conduits of God's grace was that their hearts were pure and fully devoted to Him when they made the request.  It seemed implausible that the sacrament of Ordination, really just a collection of very fancy symbols, could grant you magic powers in virtue of its role within the thoroughly human structure of the Church.

I called bullshit.  I decided to become a priest.  "The Church doesn't let girls become priests," my second grade teacher informed me.  I told her I didn't really intend to ask permission.

My teachers had no idea what to do with me.  They weren't trained in theology.  We didn't have that sort of funding.  Besides, no one expected that an eight-year-old might singlehandedly attempt the Protestant Reformation.  But I knew nothing of the other sects of Christianity, and I was comfortable with my personal interpretation of Catholicism, so I took my first communion happily in a white dress like all the other little girls, and that was that.

I encountered even greater challenges to my faith in third grade.  One day, while sitting with my classmates in a circle for story time, my teacher said something deeply puzzling.  I don't recall what story she was reading to us or what led her to say this, but she said, "Of course, I'm sure all your parents are good Catholics, or at least Christians."  I raised my hand.

"Actually," I corrected her, "my dad's an atheist."

She gasped.  Then, with shock on her face, she responded, "Oh, I'm so sorry!"  As I write this, it occurs to me for the first time that she probably meant to apologize for expressing offhandedly to a fragile group of children her rude presumption.  I've always thought, as I did when it happened, that she felt sorry for me because I was in the awful position of having an atheist for a father.

I didn't understand her concern.  I'd never talked to either of my parents about Dad's atheism, or about whether there are other people who aren't Catholics.  I didn't know it was supposed to be a bad thing.  I just considered it one of the many ways in which he differed from the other people I knew, like his being a biology teacher or keeping lizards as pets.

I talked to Dad about this incident.  I don't remember the content of that conversation, but I know it resulted in his recommendation that I read The Demon-Haunted World by Carl Sagan.  He lent me a copy.  Over the next year I read that and several other Sagan books.  Needless to say, I became even more of a nuisance during religion class.

I think I was more upset that the adults in my life were satisfied with ignorance, that they understood my questions and criticisms but couldn't answer them, than I was by the discovery that God isn't real.  It caused me to lose respect for them.  I even lost respect for my mother, to some extent.

From mid-fourth grade on, school was a horribly painful experience for me.  I kept pretending to be Catholic.  What skepticism I couldn't contain during class and my feeling that no one else cared about what was true created enough of a rift between me and my peers, my mother, and my teachers that I was not about to give up plausible deniability, thereby formalizing my isolation and rendering it impenetrable.  I became deeply depressed.  I refused to turn in homework or study for tests.  I paid as little attention to class as possible, spending all of my time absorbed in science fiction, fantasy, and pop physics books.  I remember telling my mother that I wanted to drop out of school forever, that I'd make a living by playing my saxophone on street corners.  Fortunately, I discovered early in seventh grade that I could get straight A's with minimal effort, thereby keeping my teachers and my mom off my back, at least as long as I stayed quiet.

But I couldn't stay quiet in religion class, which, by this point, was being taught by a priest.  His name was Fr. McCarthy.  Fr. McCarthy was The Enemy.  Not only was he a particularly conservative Catholic who'd apparently slept through Vatican II, but he was the most wretched, underhanded debater I've encountered to this day.  He knew I disagreed with everything he taught, and he'd purposefully pick fights with me so the other students could watch him trample the heathen.

He never trampled me fairly, though, even when I was in fact wrong.  True, in eighth grade I was already a more advanced philosopher and theologian than he was, but I was still a kid and had most of my cognitive developing yet to do.  I was quite a bit more wrong then.  He often could have won fairly.  But he didn't.  Instead, he would use insults, snide and disparaging remarks, and often outright lies to undermine my credibility in the eyes of my classmates.  He could win merely by exploiting his authority.  Occasionally, I'd even catch him misrepresenting or outright misquoting scripture, the Catechism, or Aquinas.  But I'd catch him, of course, well after the fact while researching his more dubious claims.  By then it was always too late.

The school was very small--seventeen people in my graduating class--so everyone in every year got a play-by-play of these skirmishes.  Obviously, this did not help my social situation.  I was unbearably lonely.  I tried to defend myself by being arrogant, by thinking that no one was worthy of my friendship anyway, and that everyone else was, after all, boring.  It was a terribly dark saga.

One day during Mass, there was only one line for Communion.  Usually, there were two.  But this day, taking Eucharist from Fr. McCarthy was unavoidable.  I stood before him, holding out my hands to receive the now empty sacrament.  "Body of Christ," he said to me, raising the stale wafer in offering.

"Amen," I responded quietly.  But his hand didn't lower immediately.  He held still, staring at me quizzically.  There was a sickeningly long moment of tension, and then, quietly so that only I could hear, he said,


I was mortified.  Frozen.  I don't remember how I responded, but I know that soon after I ran from the church and hid from my teachers behind a bush, crying.  At some point I told my mom, who told the (far more liberal) main priest of our parish, who was furious.

I hear that Fr. McCarthy was harshly reprimanded.  But I would like to thank him.  If I'd not felt that moment of intense discomfort at my years of deception, I don't know how long it might have taken me to learn to be true to myself.  I don't know that I'd ever have found the courage to stand up, to speak out, and to be counted.  I certainly would not have found myself announcing to every other non-Christian in my brand new public school junior year, "You are not alone."

I was tired of hiding.  I wasn't any good at it anyway.  Mine had always been an awfully noisy closet, and people were listening.  I cared about the truth, and I was angry at the world for systematically neglecting it.  So I resolved one morning to give it a voice.

The school secretary was in charge of making announcements over the intercom at the beginning of each day, after which she led the school in the Pledge of Allegiance.  That morning--a Friday, I think--I skipped class for the first time.  I went to the secretary's office, introduced myself, and requested the honor of leading the Pledge.  She seemed delighted that a student was taking interest, and obliged me.

As she read the announcements for Friday morning, my heart pounded.  My hands trembled.  I was worried I might not be able to speak.  Then she handed me the intercom, and I became calm, focused, and clear.  I spoke:

I pledge allegiance to the flag of the United States of America, and to the Republic for which it stands, one Nation indivisible, with liberty and justice for all.

The static of a silent intercom hung in the air for several seconds.  I was trying to hand back the receiver, but no one in the office--not the assistants, not the principal, and not the secretary--was moving.  They all just stood petrified, staring at me, their mouths hanging open.  I turned off the intercom myself, and walked, confidently but as quickly as possible, to my first period class.

A few minutes later, I was called to the principal's office.  The secretary was sitting in a corner.  She'd clearly been crying.  The principal folded his arms and gave me a Very Stern Talking To.  "Do you understand the significance of what you've done?"

I felt awful.  I'd never wanted to hurt anyone.  I'd just wanted to defend the First Amendment and to try the truth on for size.  I certainly didn't want to make the friendly school secretary cry.  I apologized, but I did insist that including God in the pledge every morning is unconstitutional, and that it marginalizes people who don't believe in God.  He told me that I'd offended many more people than that, because I was probably the only non-Christian in the school.  I thought he was probably right.  I left his office in tears.

The truth is, I didn't understand the full significance of what I'd done.  And neither did he.

But someone did.  I don't know his name, but he was a small mousey freshman whose voice I'd never heard before, and he came to me as I was rummaging in my locker.  "Hey," he said.  His voice was shaking, and he spoke so quietly it was nearly a whisper.  "That was really cool, what you did.  I've always been too scared to tell anybody I'm an atheist.  I thought I was the only one.  It means a lot to me, what you said.  Or didn't say, I guess.  Thank you."  He ran off before I could even say you're welcome.

He wasn't the only one.  More people thanked me that day.  And more the next week.  And the next month.  And they weren't just telling me.  I overheard people talking constitutional philosophy in the halls, saying it's not fair to Hindus or Buddhists either, and saying they'd just found out some of their friends don't believe in God.  A few weeks later I found out someone had pulled the same stunt in a neighboring town, and then, to my great astonishment, that it had even happened at my old Catholic school.

I thought coming out as an atheist was mostly just about me.  I was wrong.

By coming out publicly, I did not further ostracize myself.  There was a lot of retaliation from those who felt threatened by the challenge to Christian authority, but I was not standing up to them alone.  None of us was.  In a matter of seconds, I founded a community that had been waiting the whole time and needed only to be given a voice.  They all just needed to see one person stand up and say, "It's ok to be an atheist."

"Get your ass out of that closet!"

James Croft talks to closeted atheists through the We Are Atheism campaign.

It's definitely time for me to make one of these.  You have my word it'll happen as soon as I run into a decent camera.  Stay tuned.

By the way, I'm about 90% sure that James recorded this at Skepticon 5.  WAA had a table, and they were recording interviews... in a tiny closet.  This probably amuses me more than is strictly warranted.

Sunday, November 18, 2012

Hexaflexagation: Holiday Edition

Step one: learn to make a six-sided snowflake.  Step two: learn to make a hexaflexagon.  Step three: paper kaleidosnowflake.

Friday, November 9, 2012

Hexaflexagation Visualization

If you've never heard of a hexaflexagon, watch this.  In brief, hexaflexagons are cool because they have really weird geometry.  They have too many sides.  If you've never seen a hexahexaflexagon, watch this.  Hexahexaflexagons have even more too many sides.  If you find yourself hexaflexagonally inclined, make one yourself and play with it for a while.  Then, once you've accidentally sunk way too much time into trying to understand your new toy and you're really frustrated that the sides keep disappearing and new ones keep appearing out of nowhere, watch this:

For the more set-theoretically inclined:

Node set: {1o, 1*, 1#, 2o, 2*, 2#, 3o, 3*, 3#, 4o, 4*, 5o, 5*, 6o, 6*}

("o" stands for "circle" (a circle drawn around the center of the hexagon), "*" is "star" (which looks more like a snowflake), and "#" is "box", which is actually a small hexagon drawn around the center.)

"Open" for top side: {(1*, 5*), (1*, 3o), (1o, 3o), (2*, 1*), (2*, 4*), (2o, 1*), (3*, 2*), (3o, 6o), (3o, 2*), (4*, 3*), (5*, 2o), (6*, 1o)}

"Open" for bottom side: {(1#, 6o), (1#, 2#), (1o, 2#), (2#, 3#), (2#, 5o), (2o, 3#), (3*, 1#), (3#, 4o), (4o, 2o), (5o, 1o), (6o, 3*)}

"Flip" for top side: {(1*, 2#), (1o, 6o), (2*, 3#), (2o, 5o), (3*, 4o), (3o, 1#), (4*, 2o), (5*, 1o), (6*, 3*)}

"Flip" for bottom side: {(1#, 3o), (1o, 5*), (2#, 1*), (2o, 4*), (3*, 6*), (3#, 2*), (4o, 3*), (5o, 2o), (6o, 1o)}
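If you'd rather explore the graph in code, here's a minimal Python sketch of the top-side data above.  One caveat: reading each pair (a, b) as "this move takes state a to state b" is my interpretation, not something the diagram guarantees.

```python
# The top-side "open" and "flip" transitions from the sets above,
# encoded as (from_state, to_state) pairs.
open_top = {("1*", "5*"), ("1*", "3o"), ("1o", "3o"), ("2*", "1*"),
            ("2*", "4*"), ("2o", "1*"), ("3*", "2*"), ("3o", "6o"),
            ("3o", "2*"), ("4*", "3*"), ("5*", "2o"), ("6*", "1o")}
flip_top = {("1*", "2#"), ("1o", "6o"), ("2*", "3#"), ("2o", "5o"),
            ("3*", "4o"), ("3o", "1#"), ("4*", "2o"), ("5*", "1o"),
            ("6*", "3*")}

# Build one adjacency map: state -> every state a single move can reach.
adjacency = {}
for src, dst in open_top | flip_top:
    adjacency.setdefault(src, set()).add(dst)

print(sorted(adjacency["3o"]))  # everywhere you can get from 3o in one move
```

From here it's a short breadth-first search to check which faces you can actually reach from any given starting state, which is exactly the "where did that side go?" question the toy keeps raising.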

But it's way more fun on a balloon, right? 

Friday, October 5, 2012

The Trouble with Quibbles

I'd like to call your attention to a wonderful little essay by Jesse Galef that Hemant posted at The Friendly Atheist today.  Jesse reminds us that spending too much time on the internal tensions and dramas of a movement can undermine its overall mission.  He's talking specifically about the secular movement, but what he says goes for any situation in which many free-thinking people with strong critical faculties try to accomplish something together.

I thought I'd tack on a technique I employ quite frequently to ensure that my critical comments on blogs and forums are actually instrumentally rational with respect to reason-mongering. 

We all know what it feels like when someone is wrong on the internet, especially about something that matters to us.  Even if we think they've got it mostly right, there's a gut reaction prompting us to call out errors and present perfected versions of arguments.  But it's not to our personal benefit, nor to the benefit of whatever cause we support, to require perfection in every comment, post, and article we read.  Often, drawing attention to relatively minor errors or disagreements means drawing attention away from the main point.  If the main point is one we support, and if the author more or less accomplishes her goal of making it well and spreading the news, it probably makes more sense to point out the best parts rather than the worst.

The irrational tendency to pounce on the mistakes of others despite one's own best interest is a side effect of the extremely useful family of heuristics we employ to maximize rationality, a family comprising such skills as skepticism, hypothetical reasoning, and sensitivity to common fallacies in arguments.  Applying these tools to the claims of others protects us from believing willy-nilly whatever we happen to read, and encourages the adoption of only the most strongly justified beliefs.  They're important skills, and without them the secular movement wouldn't have much going for it.  But for every heuristic, there is a bias.

Quibble addiction is a cognitive bias, one we can learn to counteract as we would any other obstacle to lucid thought.

Just to be clear, I'm not suggesting that criticism itself is bad.  Obviously, it's tremendously useful.  It's essential to practice skeptically evaluating arguments from Your Side just as you would those from The Other Side.  And when it isn't trivial, criticism is an invaluable way to improve on your allies' message.  I just want to highlight that not all criticism is in fact productive.  Criticism is a tool for accomplishing other goals; when it functions as its own end, we risk losing sight of our deeper values.

So here's a start on how to kick the quibble habit.  Whenever I feel the urge to analyze and expose the shortcomings of an author I basically agree with, I ask myself the following questions.  They often reveal that I'm indulging my quibble addiction.  Subsequently, I'm able to devote my limited resources to something more important -- at a minimum, to someone who's wronger on the Internet.

  • What goal did the author have in mind when she wrote this in the first place?
  • Do I support that goal?
  • Does the article/post/comment still work despite the problems I see?
  • Is there something more effective I could do with the time it would take me to point out the problems?
  • Are my criticisms best made in public, or in a private message?

Sunday, September 30, 2012

How to want to want to change your mind

Back in February, Julia Galef posted a wonderful video full of tips on how to want to change your mind.  When I first watched it, I was excited to gain so many useful tools to share with others, but I must admit I harbored some doubts as to whether I really needed them myself.  I've put so much work, I thought, into learning to be rational.  Surely I already have such a basic skill down pat.

Not so!  Over the past several months, I've paid much closer attention to the phenomenology of disagreement and what it's like from the inside to change my mind.  I've found that even after having been raised by scientists, earning a degree in philosophy, and putting Julia's tricks to frequent use, it really is incredibly difficult.

Along the way, I adopted the following mantra.  Just because I'm more rational than those around me does not mean I am in fact rational.

It's a little tricky to discover instances of my own irrationality for a stubbornly obvious reason: If I were fully aware of being wrong, I'd already be right!  But it's not impossible once you get used to trying.  It's just a matter of recognizing what self-deception feels like.  At first, though, it's easiest to catch irrationality in retrospect, so here's a little exercise that taught me to be on the lookout for resistance to learning the truth.

Exercise One

Next time you change your mind about something, make a study of what led you to do it.   
  1. Get a piece of paper and fold it in half.  In big numbers in the top left, write down an estimate of how certain you were of your belief before you started the process of changing your mind.  Beneath that, write out, in as much detail as possible, why you held the false belief in the first place.  If there were several pieces of evidence, make a list.
  2. On the other side, write down all the evidence you collected or considered that ultimately led you to abandon your former hypothesis.  Circle the one that finally did the trick.   
  3. Then, distance yourself from the situation.  Pretend that it's a story about someone else entirely.  Consider each piece of weakening evidence individually, and estimate how much less certain it would make a fully rational Bayesian reasoner on its own and in conjunction with the other pieces of evidence you already had when you started considering this new one.  If you want to be really fancy about it, plug it into Bayes' theorem and run the numbers.  Write those estimates in a column.
  4. Finally, in another column, estimate how much each piece of evidence really did decrease your certainty of your false belief, and compare those numbers to those in the first column.
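If you do want to be really fancy about step 3, the ideal reasoner's column can be sketched in a few lines of Python.  All the numbers below are made-up placeholders standing in for your own estimates; swap in yours.

```python
# Sketch of step 3: how an ideal Bayesian's certainty in the false
# belief would fall with each piece of weakening evidence.
# All numbers here are hypothetical placeholders; use your own.

prior = 0.9  # how certain you were before you started changing your mind

# Each entry is (P(evidence | belief true), P(evidence | belief false)).
evidence = [
    (0.5, 0.9),
    (0.3, 0.8),
    (0.2, 0.9),
]

p = prior
column = []
for p_e_true, p_e_false in evidence:
    # Bayes' theorem, one update per piece of evidence
    p = (p_e_true * p) / (p_e_true * p + p_e_false * (1 - p))
    column.append(round(p, 3))

print(column)  # the ideal reasoner's certainty after each update
```

With these placeholder numbers, certainty falls to roughly 29% by the third update.  That's the column you compare your own felt certainty against in step 4.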

Now, perhaps you’re a whole lot more rational than I am.  But here’s what I find almost every time.  What actually happens is that my certainty barely changes at all until the final piece of evidence, even though the Bayesian reasoner’s certainty about the false hypothesis falls way below 50% long before that.  

This is what it means to cling to a belief, and it's all the more difficult to overcome in the course of a debate.  Even the most rational among us have human brains full of cognitive biases; defending yourself against them takes serious effort, no matter how far you've come as a rationalist.

But you don't have to take my word for it!  Go do science to it.  I'll see you next time.  ;-)

Tuesday, September 18, 2012

Science is better with Bayes.

My goal here is to explain how to approach science in every-day life through Bayes' theorem.  I promise it'll be fun.

(Made you look.)
One of the (several) problems with falsificationism (Popper's approach to science, which I laid out in a previous post) is that it doesn't give a useful account of degrees of certainty.  It encourages this idea that either you know a thing is true, you know it's false, or you're completely in the dark about it and can't make any rational decisions based on it.  In reality, if you're 90% certain about something, you should mostly act as though it's true, but give yourself a little wiggle room in case you turn out to be wrong.  We're almost never 100% certain about things, and that's perfectly fine.  We can still do good science and make rational decisions while working with probabilities, especially if we take a little advice from Bayes.

Remember back to when you were a little kid and you were just starting to doubt the existence of the tooth fairy.  It was a difficult question, because if there's no tooth fairy then your parents are liars.  And that's bad.  But you can't shake the feeling that this tooth fairy business doesn't quite match up with your understanding of the way the world works.  So you say to the world, "Stand back.  I'm going to try science."

You start with a question.  You want to know how it is that money appears under your pillow whenever you lose a tooth.  The theory you want to test is that the tooth fairy flies into your room, carefully reaches under your pillow, takes the tooth, and leaves money.  So your theory seems to predict that you ought to be able to catch her on camera.  Your test consists of leaving your freshly liberated tooth under your pillow, pointing your webcam at your bed, setting it to record all night, going to sleep, and watching the video the next day.  Your hypothesis is that there will be a fairy somewhere in the video.  Good old capital "S", capital "M" Scientific Method, as usual.

Suppose you get exactly the result you hypothesized.  Sure enough, three hours into the video you see a light from outside, the window opens, and a small shiny woman with wings floats in.  She reaches under your pillow for the tooth, replaces it with money, and then leaves.  The intuitive response to this result is to become wholeheartedly certain that the tooth fairy exists.  Popper's falsificationism tells us it's going to take a whole lot more tests before we should be really certain that the tooth fairy exists, because even though this is a legitimately scientific theory, confirmation isn't nearly as strong as falsification.  But it doesn't tell us how sure we *should* be.  Just that we shouldn't be completely sure.  Should we be 20% sure?  50% sure?  90% sure?

How we should act when we're 20% sure vs. 90% sure is very different indeed.  If you're only 20% sure the tooth fairy exists even though your parents insist she does, you should probably have an important talk with them about honesty, whether they themselves actually believe in her, and maybe skepticism if they really do.  If you're 90% sure, you might want to set up your computer to sound an alarm when it registers a certain amount of light so you can wake up and ask her to let you visit fairyland.  So how do you know how much certainty is rational?

Have no fear.  Bayes is here.

First, you're going to have to guesstimate your certainty about a few things.  You should definitely do this before you even run the experiment.  If you want to be really hardcore about it, convince other people, and generally run things with the rigor of a professional scientist, guesstimating isn't quite going to do the trick.  But every-day science like this is necessarily messy, and that doesn't mean you shouldn't do it.  It's perfectly fine and useful to be somewhere in the ballpark of correct.  So here are the numbers you need.
  • How certain are you that you really will catch her on film if she exists?  You reason that she probably is visible.  Otherwise she wouldn't have to come at night.  And fairies are supposed to glow or something, right?  You can't be invisible if you glow.  On the other hand, you don't really know how magic things interact with the rest of the world, so maybe she's like a vampire and simply can't be caught on film.  Let's call it 80% certainty, or 0.8.
  • How certain are you about the existence of the tooth fairy in the first place, before the experiment?  Since you were definitely becoming a tooth fairy doubter, but still thought it was pretty up-in-the-air, you figure you were about 40% certain that there's a tooth fairy.  You can express that as the decimal 0.4.
  • How likely is it that you'll see a fairy on the recording even if the tooth fairy doesn't actually exist?  It seems really unlikely.  But you can imagine other things that would cause this.  You mentioned to your older brother earlier that you were doubting the tooth fairy, so maybe he'll find out about your plan and play a prank with his film school buddies.  Or maybe there will be some fluke that causes damage to the file so it looks like there's a glowy person shaped thing in the recording that really is only in the recording.  So it's imaginable, but unlikely.  Let's say 5% sure something like that could happen.  0.05.
  • Finally, how likely is it that there's no tooth fairy?  Well this one's easy.  You already decided you're 40% sure there's a tooth fairy, so you must be 60% sure there isn't one.  0.6.
Bayes' theorem is all about finding out how much the evidence should change your beliefs, and whether it should change them at all.  It weighs all those factors we just estimated against each other and comes up with a degree of certainty that actually makes sense when you put them together.  Human brains are really bad at weighing probabilities rationally.  They just aren't built to do it.  But that's ok, because we have powerful statistical tools like this to help us out--provided we know how to use them.

If you want to know the nitty-gritty of what's really going on inside Bayes' theorem, check out Eliezer Yudkowsky's "excruciatingly gentle introduction to Bayes' theorem".  He's already got that covered (beautifully).  I just want to show you how it ends up working in real life.  So let's run the numbers.

We're looking for the probability that there's a tooth fairy after accounting for having (apparently) caught her on camera.  That's P(A|B), read "probability of A given B", where A is "there's a tooth fairy" and B is "she's in the recording", so "probability that there's a tooth fairy given that she's in the recording".  Written out, Bayes' theorem looks like this:

P(A|B) = P(B|A) P(A) / [P(B|A) P(A) + P(B|~A) P(~A)]

In the numerator, we start with P(B|A), which is how likely it is that we really will see her on camera if she exists--probability "she's in the recording" given "there's a tooth fairy".  And that's 0.8.  Next, we multiply that by how sure we were that there's a tooth fairy before we caught her on film, simply probability "there's a tooth fairy".  And that's 0.4, for a total of 0.32 on top.

For the denominator, we start with a value we already have.  "P(B|A) P(A)" is what we just worked out to be 0.32.  So that's on one side of the addition sign.  Next, we want the probability that we'd see the tooth fairy in the recording even if the tooth fairy didn't actually exist.  The squiggly ~ symbol means "not"; P(B|~A) is probability "she's in the recording" given "she doesn't exist".  And that's 0.05.  Then we multiply that by P(~A), the probability that there isn't a tooth fairy, which is 0.6, for a total of 0.03 on the other side of the addition sign.  Add that up, and it's 0.35 on the bottom.

Finally, divide the top by the bottom: 0.32 divided by 0.35 equals 0.914ish.  What does that mean?  It means that if you started out thinking it's a bit less likely that there's a tooth fairy than that there isn't one, and then you caught her on camera, you should change your beliefs so that you're just a little over 90% certain that there's a tooth fairy.
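In case you want to check the arithmetic, or swap in your own estimates, here's the whole calculation in a few lines of Python:

```python
# The tooth fairy calculation, step by step:
# P(A|B) = P(B|A) P(A) / (P(B|A) P(A) + P(B|~A) P(~A))

p_b_given_a = 0.8       # she'll show up on film if she exists
p_a = 0.4               # prior certainty that the tooth fairy exists
p_b_given_not_a = 0.05  # a fairy in the video anyway (prank, file glitch)
p_not_a = 1 - p_a       # 0.6

numerator = p_b_given_a * p_a                        # 0.32
denominator = numerator + p_b_given_not_a * p_not_a  # 0.32 + 0.03 = 0.35
posterior = numerator / denominator

print(round(posterior, 3))  # 0.914
```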

In other words, you're growing up into an excellent rationalist who just made a groundbreaking discovery.  Go show the world your tooth fairy video, and see about having tea with the faeries.

Everything's better with science, and science is better with Bayes.


Problem Set: No, really, run the numbers.

1) Your power is out. It's storming. Use Bayes' theorem to decide how sure you are that a line is down.

2) A person you're attracted to smiles at you. Are they into you too?

3) (For this one, intuit the answer first. Make your best guess before applying the theorem, and WRITE IT DOWN. It's ok if you're way off. Just about all of us are. That's the point. Human brains aren't built for this kind of problem. I just don't want you falling prey to hindsight bias.) 1% of women at age forty who participate in routine screening have breast cancer. 80% of women with breast cancer will get positive mammographies. 9.6% of women without breast cancer will also get positive mammographies. A woman in this age group had a positive mammography in a routine screening. What is the probability that she actually has breast cancer?
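If you'd rather not push the numbers around by hand, here's a small helper (my own wrapper, not part of the problem set) for checking your answers once your guesses are written down:

```python
def bayes_posterior(p_b_given_a, p_a, p_b_given_not_a):
    """P(A|B) by Bayes' theorem, taking P(~A) = 1 - P(A)."""
    numerator = p_b_given_a * p_a
    return numerator / (numerator + p_b_given_not_a * (1 - p_a))

# Sanity check against the tooth fairy example:
print(round(bayes_posterior(0.8, 0.4, 0.05), 3))  # 0.914
```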

Monday, September 17, 2012

"Science as Falsification" by Karl Popper: a simple English rendition (with a bit of artistic license)

Just to be clear, I'm not endorsing anything the author is saying.  I'm just trying to make a paper that was highly influential in academia accessible to everybody else too.  The paper of which the following is a rendition was originally published in 1963 in Conjectures and Refutations.  You can read the original version here.

Karl Popper, possibly in need of some simple English.
For the past year or so, I've been worried about the question, "What makes a theory count as scientific?"  I'm not worried about what makes something true or acceptable, just what makes it scientific as opposed to unscientific.  Science often gets things wrong, and people often stumble on things that are right without the help of science, so this can't be just about truth. 

Lots of people think that what makes something count as science is the fact that it came from observation and testing.  But I don't buy that.  Plenty of stuff that doesn't count as science is all about observation.  People believe in astrology, for instance, because they observe that astrologers make predictions that turn out to be true.  So why isn't astrology science?  How is the theory of astrology different from, say, Einstein's theory of general relativity?

The difference is that Einstein's theory might turn out to be wrong, and if it is, we'll eventually know.  We'll know because one day we'll make observations about the world that aren't in line with his theory.  What makes theories like astrology, Freudian analysis, and other sorts of pseudo-science unscientific is that they can explain everything.  Usually, when we see that a theory is confirmed over and over again, we believe in it even more.  But if there's no way at all, even in principle, to make an observation that isn't in line with the theory, then all those confirmations don't actually mean anything.  Theories like that would be in line with all the same observations even if the theories were false--so if the theory is false, there's no way to find that out.

General relativity, evolution, Newtonian mechanics, and Mendelian genetics are all scientific theories not because there's lots of evidence confirming them, but because they make falsifiable predictions.  They predict certain things about the world, and the predictions are risky because we can check to see if the world really is that way.  If the world doesn't turn out to be the way the theory predicts, then we know the theory is false.  For pseudo-science, we get all the same predictions whether the theory is true or not.  There's no observation we could make to find out whether the theory's false.  Unscientific theories are unfalsifiable, unable to be shown false.

Observations that support a theory only really count as support if the theory makes risky predictions.  If a theory is scientific, you should be able to make a test so that if you get one result, you can continue believing the theory just as much as you did before--but if you get another result, you have to conclude that the theory is false.  Pseudoscience doesn't let you make these kinds of tests, because there's never any result you could possibly get that would make you change your mind and stop believing the theory.
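The asymmetry described in the last few paragraphs can be sketched in a few lines of Python.  Everything here--the function names, the numbers, the tolerance--is invented purely for illustration; the point is only the shape of the two kinds of "theory":

```python
# Toy illustration: a falsifiable theory vs. an unfalsifiable one.
# A "theory" here is just a function from an observation to a verdict:
#   True  -> the observation is consistent with the theory
#   False -> the observation refutes the theory

def newtonian_prediction(observed_fall_time, expected_fall_time, tolerance=0.01):
    """Falsifiable: commits to a specific value, so some observations refute it."""
    return abs(observed_fall_time - expected_fall_time) <= tolerance

def astrological_prediction(observation):
    """Unfalsifiable: every possible observation counts as a 'hit'."""
    return True  # no observation can ever come out the 'wrong' way

# The falsifiable theory can lose:
assert newtonian_prediction(1.43, 1.43)            # consistent
assert not newtonian_prediction(2.80, 1.43)        # refuted!

# The unfalsifiable one can't lose, so its 'confirmations' carry no information:
for anything in [1.43, 2.80, "Mars in retrograde", None]:
    assert astrological_prediction(anything)
```

The risky theory pays for its confirmations by exposing itself to refutation; the safe one confirms itself for free, which is why its confirmations are worthless.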

Sometimes people have theories that really are testable, but when the test results don't come out the way they want, they either find some excuse to throw those results away, or they change their theory to match the results so it looks like they were right all along.  That's not science either, because it's impossible to find out that the theory is false when you do things that way, too.

This philosophy of science is called falsificationism, and I came up with it because it draws a line between what is science and what isn't.

Thursday, September 13, 2012

Rationalism Precludes Theism

I just had a long Facebook discussion about what it would take for a rationalist to believe in god.  I raised the question because the better we know exactly what sort of evidence would be required for rational theism, the more justified we are in not being theists.  It turned out to be very difficult to imagine what evidence would suffice.  In the end, I was able to prove that there are no conditions under which it would be rational to believe in god.  This surprised me, so I thought I’d share my argument.

I'll start with bunnies. One person said they’d believe in god given fossil evidence of Cambrian rabbits.  That seemed pretty weak to me at first, but I thought I should at least think it through.  I'm imagining that tomorrow morning I wake up to coffee and NPR, and find that the main story of the day is a claim that paleontologists uncovered rabbit fossils from the Cambrian. My first thought is, "Simple mistake. Someone misrepresented information, got confused, fabricated evidence, etc." I do some research. It probably is a simple mistake. But suppose it isn't. Next, I think, "Earthquake anomaly." That seems pretty likely. More research. Along these lines, I entertain increasingly unlikely hypotheses (in careful order). "God did it" is nowhere near the beginning of the list. Part of that is because I'm not sure what it means, but I'll get back to that. I'd be getting near the neighborhood of god territory about the time I started hypothesizing that Earth is an alien science fair project and the rabbit fossil is left over from a test run that got a little messy and wasn't cleaned up all the way. That would indeed involve an intelligent creator of the human race, but it's quite a long way from, say, omnipotence, omniscience, omnipresence, and omnibenevolence.

The first problem with imagining sufficient evidence for belief in god is this: There are a whole lot of things we could mean when we say "god exists".  Not all of them are equally likely. Nor does one kind of evidence justify belief in all of them. "God" is fuzzy. Much like bunnies. It's semantically ambiguous and vague.  So if we want to know what it would take to reasonably believe in god, we’re going to have to figure out what it would take to reasonably believe in a pretty diverse range of entities individually.

That's one of the most frustrating things about talking with theists; they're quick to tell you what they don't mean once they've determined you're arguing for a god in whom they don't believe either, but they usually aren't so quick to pin down what they really do mean. When you try to reason with a theist, therefore, it’s a good idea to ask them explicitly what they mean by god even before you tell them that he doesn’t exist.  With many you get the impression that they themselves don't know what they mean. You'll talk with them for a long while, thinking you're getting somewhere, and then when you bring them to a conclusion they don't like but can't avoid, they say, "Well sure, but that's not what I mean by 'god'. What if god is really x?"

Legend has it that Paul Spade was once teaching a seminar on the philosophy of theology when someone pulled one of these. Another student gave an exasperated sigh, turned to the first student, and remarked, "Look, what if god is a garage in New Jersey?"

This succinctly expresses a rationalist’s frustrations with fuzzy notions of god, but let’s see what happens when we take the question seriously.  If god is a garage in New Jersey, convincing me of his existence is a fairly simple matter. I already have an awful lot of good reasons to think that there are garages in New Jersey, so showing me a picture of the particular one you're talking about would be plenty.  But this form of theism is neither interesting nor useful.  I really hope conceptions of god never get so boring as to be confined to garages in New Jersey.

So now let’s look at the somewhat more serious kinds of gods who are merely responsible for purposefully creating humans.  In light of the many observations about the universe we've so far made and systematically evaluated through science, it is tremendously unlikely that the human race was intelligently created.  Finding rabbit fossils would indeed be evidence for intelligent creation, because the probability of intelligent creation would be slightly higher after throwing large chunks of our model of biology into doubt.  But it's horribly weak evidence, especially relative to its strength for alternative hypotheses that are far more in line with the vast majority of what we've so far observed. It would be utterly irrational to believe even in the very weak meanings of god on the basis of Cambrian rabbits.  (Obviously, this isn’t evidence at all for garage-gods, since garages are equally likely to exist whether or not there were rabbits in the Cambrian.)

If god is simply any conscious thing that purposefully created the human race, then here is an example of what would convince me. A very long-lived alien could land on Earth, show us the blueprints, and explain how it did it and why. Well, that wouldn't quite be enough, because the alien could be lying. (I mean, come on, you're a brilliant alien who's run into an extremely credulous species that likes to worship even evil gods. Honesty, or godhood? I could see lying.) But if we took those blueprints, showed that they account for all pre-existing observations, and made some predictions based on them whose truth would be in direct contradiction with our current model, then we could test those predictions and the right results would convince me that we were in fact created intelligently by this alien. Which, by that definition, would mean I'd become a theist.

But for meanings of god that are bigger than this (for instance, a being that is omnipotent), I run into the following problem. It is much, much more likely that there exists a being who is capable of causing me to experience whatever it chooses, regardless of what's actually going on outside of my head, than it is that there's a being who really does possess such properties as omnipotence and omniscience. Why?  Because of conjunction. 

For any events x and y, the probability that x and y both happen cannot be greater than the probability that x happens on its own.  If x and y are independent, you figure out the probability that both happen by multiplying the probability of x by the probability of y.  Probabilities are expressed as percentages or fractions, so you’re multiplying something no greater than one by something no greater than one, which makes the product no larger than either factor.
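A quick numerical illustration of the conjunction rule.  The probabilities below are made up; the only assumption is that the events are independent, so their probabilities multiply:

```python
# Conjunction rule with made-up probabilities: for independent events,
# P(x and y) = P(x) * P(y), which can never exceed either factor,
# since both factors are between 0 and 1.

p_x = 0.9   # probability of x (hypothetical)
p_y = 0.8   # probability of y (hypothetical)

p_both = p_x * p_y   # 0.72 -- smaller than either probability alone
assert p_both <= p_x and p_both <= p_y

# Piling on conjuncts only drives the probability further down:
p_omni = 1.0
for p_property in [0.9, 0.8, 0.7, 0.6]:  # one made-up probability per property
    p_omni *= p_property
# p_omni is now about 0.30, smaller than any single factor
assert p_omni < min(0.9, 0.8, 0.7, 0.6)
```

This is why each additional "omni-" a god is required to have makes that god's existence strictly less probable than the existence of a being with only some of those properties.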

It would take a definite, finite amount of power and/or knowledge to appear infinitely powerful or knowledgeable.  There’s a certain set of things you’d have to know or be able to do in order, say, to run a computer simulation of a lifetime’s worth of human experience.  There is probably a very large number of things you’d have to do, and many of them may be awfully improbable, but because the set isn’t infinite, the probability isn’t infinitesimal (provided the set is well founded—that is, no item on the list requires that you be able to do all the things on the list).

A being with those powers could cause me to experience what I would ordinarily take to be evidence of extraordinary things. There is a certain degree of extraordinariness beyond which it becomes less likely that the thing I’m experiencing is actually happening than that someone is purposefully monkeying with my subjectivity. For instance, perhaps I am actually a program running on the hard drive of some human’s computer from the future.  Perhaps the future human is amused by the game of creating consciousnesses solely for the purpose of messing with them. That would have to be sort of an evil person, but I must admit it's exactly my kind of evil.

But is a creature with the power to create such a simulation rightly called a god? If so, then any experience (or group of experiences) beyond the subjectivity-monkeying threshold would make me a theist. But this god is infinitely less powerful than an omnipotent god, so again, that's a long way from the god most theists seem to believe in.  They want a god who can do anything.

I'd planned to claim next that only an a priori proof would do for any god less likely than the monkeying version, but it now occurs to me that even that would be insufficient.  With a slight modification, the monkeying-god becomes Descartes's evil demon.

Descartes described a demon whose only purpose in life is to make us miscount the number of sides on a triangle.  It could be that there are not actually three sides to a triangle, provided that every time we try to count the sides of a triangle, we make a mistake.  This problem is bigger than triangles.  If the monkeying god can control every aspect of my subjectivity by changing lines of computer code, he could cause me to reason incorrectly about even an apparently iron-clad mathematical proof.  And this, too, would be much more likely than anything even close to the god(s) of the theists.

Note, by the way, that even the first version of the monkeying god isn't necessary for experiences of direct revelation. If an experience could possibly be caused by a malfunctioning (or strangely functioning) human brain, it's not sufficient evidence for theism. Simple hallucination happens all the time. I came up with the monkeying god to account for experience that couldn't be pathological. Here's an example of the kind of experience I'm talking about (adapted from a splendid scene by Eliezer Yudkowsky in Harry Potter and the Methods of Rationality).

You hand a very large list of prime numbers to a friend and tell him to select two four digit prime numbers (without telling you what they are) and write down their product. He returns a paper on which is written "16285467". You walk outside directly afterward, grab a shovel, pick a random chunk of ground, and start digging. Five feet down, you hit a rock. Upon examining the rock, you find that it contains fossilized crinoid stems on the surface (and may or may not contain a rabbit in the middle, presumably from the Paleozoic this time). On one side, the crinoid stems are configured to write out "2213". On the other side, the crinoid stems say "7359".  Actually imagine that this has happened, and imagine how you would react.  “I must be hallucinating” probably wouldn’t satisfy you, for you lack the ability to factor eight digit numbers in your head.

Now, this isn't a perfect example, because it wouldn't be impossible to hallucinate this of your own accord.  But it would indeed be incredibly unlikely (literally), far more so than anything people experience when they claim to communicate directly with god.  I'm not sure whether it would be more likely that an external agent is messing with your mind than that you happened to hallucinate it accidentally.  Or that you're actually that damn good at prime factorization.  Or that you multiplied every pair of four-digit numbers with one member less than the square root of 16285467 without noticeably aging and then promptly forgot about it.  But if it happened several times in a row, or many similar things happened, at some point the pathology position becomes untenable and it's time for the monkeying god hypothesis to step in.
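Incidentally, the brute-force search that's hopeless for a head is trivial for a machine.  Here's a minimal sketch, using the numbers from the story, that finds every pair of four-digit factors of the number on the paper:

```python
# Brute-force search for pairs of four-digit factors of the story's number.
# Exactly the computation you couldn't do in your head, done in milliseconds.

def four_digit_factor_pairs(n):
    pairs = []
    for a in range(1000, 10000):
        if a * a > n:          # past the square root, pairs start repeating
            break
        if n % a == 0 and 1000 <= n // a <= 9999:
            pairs.append((a, n // a))
    return pairs

pairs = four_digit_factor_pairs(16285467)
assert (2213, 7359) in pairs       # the crinoid numbers check out
assert 2213 * 7359 == 16285467
```

So the arithmetic in the story holds up: the numbers on the two sides of the rock really do multiply to the number your friend wrote down.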

Therefore, it's never rational to believe in an Allah or New-Testament-style god, because whatever your reason for suspecting that god is responsible, it’s more likely that one of the less powerful versions of a god is the cause.

I'd originally intended to figure out exactly what it would take to convince me of the existence of something like the Catholic god, but it appears this really is a special case.  Even if god does exist, there simply are no conditions under which it's rational to believe in him (unless you're willing to give the name god to something more like a garage in New Jersey).

Monday, August 27, 2012

Testing Is Bad

My father is a high school science teacher in the US.  Today he was feeling a bit overwhelmed by work, so I helped him grade tests from Bio 1, an introductory biology course for kids in 9th or 10th grade.  It’s early in the course, so they haven’t moved on yet from attempts to make sure everyone’s familiar with very basic and central notions that they should have learned in earlier years but likely didn’t.  I only graded fill-in-the-blank and definition type questions—no essays or short answers that would require significant interpretation.  I’ve never met the kids in the class, and since I only graded page two of the tests I didn’t even see their names.  Neither is this a common occurrence: I believe I’ve helped Dad with grading one other time in my whole life.  Just in case some readers wanted those disclaimers.  Anyway.

It was an enlightening experience.  It was very clear that the vast majority of the students were far more focused on exploiting the system during class and homework than on understanding the material at hand.  They were trying to learn what they needed to pass the test, and didn't see the test as merely evidence of what they'd so far understood or failed to understand.  Their whole purpose as students is to pass the test.

This is how I ended up grading one test on which the student defined "element" as "part of an atom which makes up an element".  I thought for a long time about what would have to happen in a child’s head for him to give an answer like this on a test.  When I was in high school myself, I never thought very hard about the minds of other students, and assumed people did poorly in school because they are stupid and lazy.  But now, I see that something else is going on here, something caused not by the stupidity or laziness of individual students but by a grave systemic flaw in US education.

There are two correct answers in this context to "define element".  The first is something along the lines of, "something without parts" (Dad often teaches via the history of science, so this would come from the ancient Greek notion), and the second is, "a substance made up entirely of one kind of atom".  I took Dad’s intro chemistry class way back when, and I remember his wording.

Here is the real problem.  It is threefold.  First, the students don't understand the goal of their lessons—they don’t know how to know what the teacher wants them to understand.  Second, they don’t know how to assess the content and level of their current understanding—they don't know how to know what they don't understand.  These combine to create the third part of the problem: they cannot identify the gap between what they know and what they’re meant to know, so they can’t focus their academic efforts on closing it.

As it stands, high school students know what tests tend to look like and how to streamline the process of passing them.  They are rewarded for good performance and punished for poor performance, and no one has ever tried to explain to them the internal mechanisms of learning beyond that.  The reason they run into such huge problems with Dad's classes in particular is that his tests require a great deal more understanding as a prerequisite for good performance than do the tests they’ve encountered previously.  

The kind of test you write if you don’t want to spend much time grading—that is, understanding the minds of individual students—is the same kind of test you pass by knowing how to take tests.  An expert at test taking can pass a test over very difficult material without actually understanding the material provided the test is written in a way that allows them to exercise their expertise.  This is how I got a B+ on a college level psychology final last year without ever going to class or studying.  There were many things on that test whose answers I didn't really know, and sometimes I didn’t fully comprehend the question itself, but I could deduce what would be counted as correct in most cases because I know how to take tests.  Multiple choice, for instance, hardly ever requires understanding in most contexts.  It only requires memorization of associated sets of terms.  It’s a skillset that takes a long time to develop, but nine or ten years is plenty long.

So the poor kid did exactly what he’d been conditioned over the course of a decade to do.  He threw together "part", "made up", "atom", and "element" into a grammatically well-formed sentence, and didn’t even notice that it was totally nonsensical.  It didn't occur to him to actually try to understand what "element" means.  

And why would he?  Imagine that you aren’t simply trying to be efficient so you can spend your time on other things that are more obviously worthwhile, which is itself understandable.  Imagine that experience has shown that you aren’t smart enough to understand complicated things even when you try.  This is a pet theory you pulled together after failing tests repeatedly early on.  It makes a lot of sense to spend what cognitive resources you know you do have on exploiting the rules of the system, getting by without anyone suspecting that you’re failing to learn (including yourself) and without being punished for your failure.  Your teachers believe that a good grade means you’ve learned, and you believe your teachers.  Because you’ve been doing this for as long as you can remember, you don’t even recognize anymore that there’s another way.  

This kid gave an answer that evidenced an almost total lack of understanding of anything that had happened in his biology class up to that point, but it's not because he’s dumb, and it's not because he isn't trying to succeed.  He definitely would have had to have studied to give that particular answer.  He's failing to learn because tests have taught him not to learn.

What if instead of doling out rewards and punishments in the forms of grades for being able to answer correctly on tests, we taught kids how to assess their own understanding?  What if we taught them that the first priority is to figure out what it is the teacher wants you to understand, the second is to figure out in what way and to what extent you currently understand it, and finally that the entire purpose of all of this class time and work and testing is to figure out how to close the gap between those two things?  There's no way anyone would be content with a nonsensical answer.  They’d have written what they did understand about the meaning of "element".

A few students seem to have done something similar: they defined element as something like, "all the things on the periodic table".  This is what I'd expect from kids who didn't know how to know what they were meant to understand, but did know what they understood.  They knew that they knew that the things on the periodic table are called "elements".  They knew that they knew what a definition is.  They failed to give a correct definition because they didn't know what they were meant to understand.

Here is an answer I would expect from someone who knows how to know what he's meant to understand, knows how to know what he currently understands, but hasn't quite completed the process of closing the gap.  "An element is a very tiny thing that builds bigger things and takes part in chemical reactions."  A kid who answered this way would have genuinely been learning about atoms, but wouldn't have finished refining his notion: he'd have yet to precisify his understanding enough to distinguish between elements and molecules.  

Not a single student gave this kind of answer.  In fact, I don't think anyone gave this kind of answer to any of the questions.  This suggests that even the kids who are getting the answers right probably don't actually understand the things the test is meant to assess.

People like me, people who love learning so much that they aspire to be professional academics, learn in spite of tests.  In most cases, we grew up believing ourselves to be so much smarter than everyone around us that we were always confident that if someone else was meant to understand, we sure as hell were going to understand as well.  We had confidence in our ability to learn better and faster than required, expected, or maybe imagined.  When faced with the prospect of a test that presented any sort of challenge, we stepped up our efforts, because we knew it would pay off.  By contrast, many students have little confidence not because of low ability but because of learned helplessness.  We did learn to exploit the system because often we just weren't interested in the material, but we never had to deal with a feeling of doubt about our abilities or intellectual worth. 

I think that not only have most people never been taught to apply what intelligence they possess, but they've been taught specifically to behave less intelligently than they would if left to their own devices.  They've thrown in the towel, they're flying blind, and the best they can do is to try to exploit the system, and to pray.

For a boatload of unequivocal empirical evidence that conventional testing is harmful, check out this ginormous meta-study by Paul Black and Dylan Wiliam.  If you’re convinced and want to know what to do about it, I suggest reading up on formative assessment, a good overview of which by D. Sadler can be found here.