rational debate

When I restarted the local Less Wrong meetup, putting it on meetup.com (an effort which has since perished), I wanted it to be a rational debating club. I wanted it to treat debating as a cooperative game.

I haven’t ever engaged in what I’d regard as an ideal rationalist debate, and it’s hard to think of examples involving other people either, so there’s a good chance this concept is impossible or doesn’t make sense for some reason.

But it makes sense for me.

The Center for Applied Rationality occasionally comes up with new tricks, and here’s one that it introduced after I attended (and which, therefore, I might not be describing exactly the way they do).

It’s called the Double Crux.

Imagine I think that human-level AI is going to happen soon, and you don’t. We’re aware that we have different viewpoints, just because we seem to say things about the issue that the other person doesn’t really agree with.

This is the first stage: identifying that there’s actually a disagreement about something. This stage is pretty easy.

The next stage is to formulate what you disagree on more precisely. Ideally this should be some statement that one person thinks is true and the other thinks is false. It can instead be one where the two people merely give different probabilities (though if it’s like 30% vs. 40% then you’re probably going to struggle with the rest of the steps because in general we’re nowhere near that good at estimating percentages anyway).

Suppose I think there’s a 50% chance that human-level AI turns up in the next 50 years, and you think there’s a 50% chance that human-level AI turns up in the next 1000 years (with a correspondingly tiny 3%ish chance that it’ll be in the next 50).
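As a quick aside, here’s roughly where a figure like “3%ish” could come from, assuming – and this is just my assumption for illustration, not a model anyone in the example has committed to – a constant per-year chance of arrival:

```python
# Quick check of that "3%ish" figure, assuming a constant per-year chance of
# arrival (an illustrative assumption, not necessarily the intended model).
p_within_1000 = 0.5                            # your 50% over 1000 years
p_within_50 = 1 - (1 - p_within_1000) ** (50 / 1000)
print(f"{p_within_50:.1%}")                    # prints 3.4%
```

Under that toy model, a 50% chance over 1000 years works out to roughly a 3.4% chance over the first 50.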

That’s a clear and precise disagreement about something factual. Awesome. We should maybe have a brief discussion to make sure we’re clear on what “human-level AI” means, but we shouldn’t get bogged down in it – as long as the details don’t substantially alter our estimates, we’ve cleared stage 2 of the debate. We know what it is we’re actually debating.

I’m not sure what proportion of debates make it to stage 2. Even if you’re not double cruxing it seems pretty fundamental, but whatever, I digress.

We also assume we’re doing this cooperatively, i.e. we both actually want to arrive at the truth instead of merely showing off to our friends how wrong we can make the other person look. This is kind of a stage zero in the sense that you shouldn’t even really consider engaging in a double crux debate unless you share this kind of culture with the other person.

At this point though, we need some guidance on how to proceed. We know that if we were to look inside the other person’s head, we would see some vastly different pictures and intuitions as to how the future will unfold, which influence our different opinions on the main question. If we could step into the other person’s mind for just a moment then it might clear up the whole misunderstanding. We can’t do that. So what can we do instead?

There are a ton of related questions we could ask each other. How does intelligence work? Which particular things do you think AI will be capable of doing within the next 20 years? Which experts do you trust? Things involving lead and agriculture and IQ and the schooling system?

To avoid the debate getting side-tracked, we need to stay focussed and pick out the single most important question from all of those. This question should have four properties:

  • It should somehow seem easier to answer than the original question
  • We disagree substantially on the answer to this question, too
  • If I turn out to be wrong about it then I might change my mind on the original question
  • If you turn out to be wrong about it then you might change your mind on the original question

I say “might” here, rather than “will”. I think this was in the original formulation of double cruxing but I’m not sure. It seems like if you commit to definitely changing your mind, it’s going to make it a lot harder to find a question which can act as a double crux. Instead you might find “oh, I changed my mind on this smaller thing, but I still seem to hold the same opinion on the original question. Maybe I don’t understand my own reasons for believing it”.

This is actually a good thing.

The basic idea, as you can guess, is to follow this logic and see where it goes. First you endeavour to answer the simpler question. This might involve recursive descent, so good working memory – preferably augmented with some mutually agreed-upon diagram of the structure of the debate – is vital here. You hope to bottom out at something which can just be looked up: maybe “experts are always predicting human-level AI 50 years in the future” turns out to be a double crux, and you look up the answer to see who’s wrong. Or maybe you bottom out at something you can resolve with some more informal reasoning, or one of you realises that, faced with someone who believes the opposite, they don’t really believe that thing with conviction any more. It’s not the big top-level thing you decided to debate, though, just some little detail of it that has turned out to be crucial.
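If it helps to picture the kind of diagram I mean, here’s a toy sketch – entirely my own illustration, not anything CFAR prescribes, with made-up credences – of what a shared map of the debate’s structure might keep track of:

```python
# A toy sketch (my own illustration, not anything CFAR prescribes) of what a
# mutually agreed-upon diagram of the debate's structure might keep track of.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Claim:
    statement: str                 # the precise thing being disagreed about
    my_credence: float             # my probability that the statement is true
    your_credence: float           # your probability that it's true
    cruxes: list[Claim] = field(default_factory=list)  # candidate sub-cruxes

# The top-level disagreement from the example above.
root = Claim("Human-level AI arrives within 50 years",
             my_credence=0.5, your_credence=0.03)

# A candidate double crux: easier to check, we disagree about it, and being
# wrong about it might change either of our minds about the root claim.
root.cruxes.append(
    Claim("Experts have always predicted human-level AI ~50 years out",
          my_credence=0.2, your_credence=0.9))
```

The point of keeping something like this is just to make the chain of cruxes explicit, so you can walk back up it afterwards.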

Then you follow the logic back up. Whoever has changed their mind will, if they follow their reasoning upwards, find that they change their mind about the original topic too. Or they might be blocked by an “actually, that consideration turned out not to be so crucial after all”, which is still a valuable update. If that happens, and you’re still feeling like debating, you can look for a new double crux.

This should lead to either one person shifting their views, or to both people gaining a better understanding of the topic and why people have such divergent views on it.

Want.

Want, want, want, want, want.

CFAR promises to offer a cornucopia of rationality goodies. For the most part I have blocked off my enthusiasm for any of their techniques, and the reason is kind of sad. I had my psychotic episode shortly after attending their workshop, and basically anything I changed my mind about that happened around that time – all the crazy psychotic stuff but also the CFAR stuff that came just before – I’ve put a big red X through in my mind. I just don’t believe it any more. And the beautiful experience I had at CFAR – that I had the potential to become a powerful force for good, and had powerful people to help me – has turned sour.

Double cruxing doesn’t suffer from this problem, since I didn’t actually learn about it at CFAR. It’s also a more precise and elegant formulation of how I thought rationalists should debate things anyway.

I haven’t managed to put it into practice though. What would a good topic be? “Yay CFAR?”
