Oh yeah, how could I forget Robin Hanson!

He’s one of the most basically-right-about-everything-except-for-one-or-two-crucial-details people I know of.

I’ll talk about him and Holden Karnofsky here, since including these two examples in the 4000ish-word thing wouldn’t have added anything to the general point I was trying to make.

Trigger warning: fairly crass descriptions of transhumanist concepts

Robin Hanson believes the world is going to be taken over by digital emulations of human brains, and that these will be the driving economic force for a sufficiently long time to be described as an “age”, before some poorly specified subsequent transition happens.

He believes this with some qualifications, but presumably he believes it with enough probability for it to be worth writing a whole book about.

So far so good. Ems at least have some vague sense of plausibility about them, for those of us who believe the brain works basically like a weird computer anyway. His argument is that emulations can be brought about by taking a recently dead person’s brain out of their skull and then throwing a lot of technological grunt at it. This grunt includes the ability to slice things real thin without them going all floppy and crinkly, and subsequently imaging them at a lot of resolution and making enough sense of what you’re seeing for fully functioning and correctly connected neurons to appear on your computer screen. Also a lot of understanding of how the neurons themselves work, which I guess you can get by taking some out of a mouse and putting them in a dish and poking them to see what they do.

Regular artificial intelligence on the other hand requires some unknown number of insights as to what the brain is actually trying to do. Hanson’s argument is that this means ems will probably happen first.

I’m not sure I really buy that. I can’t see anyone sensible putting the relative probability of ems vs. AI at much higher than, say, 65%/35%, given how little we know about either, and that AI is already driving cars and playing Go while ems aren’t even at the nematode level. But whatever Robin thinks the probabilities are, even if they’re quite close to 50/50, I can’t really expect him to write another book saying “ok what if I’m wrong and AI comes first?”, since ems are what he likes researching and there are other people writing about AI.

OK fine.

The first problem I have with em world, though, is that most of us won’t get to enjoy it. It’s a world populated by valedictorians, by people with the exceptional skillsets necessary to be worth making lots of copies of. It’s possible that there’s a sense in which people of at least my level of ability are the best in the world at some very, very specialized thing, and hence our own clan could survive in that particular economic niche. Or maybe not. Even if that’s so, though, you’d have to give up the lifestyle of being an actual meaty humanoid to make it in this world.

But Hanson thinks no! The ems will look after the original humans in the same way that we look after people in our society who are too old or otherwise incapable of contributing economically. This seems like wishful thinking to me. Firstly, there are segments of society we treat like shit anyway, despite their being the exact same species as us. Secondly, the reasons why we have a social safety net are somewhat complex and don’t seem to be a human universal: they don’t extend across species barriers, and only partially across national ones. It’s not clear that these reasons would extend to ems looking after us. We wouldn’t even look like people – we’d barely be moving, and if the ems mostly lived in virtual worlds we’d be invisible too.

At some point some people must have told Robin about this, because when I asked him about it at a Singularity Summit, he seemed visibly irritated, as if he’d been asked the same question a dozen times that day and probably had a pithy answer already on his blog. But still… you really think that, Robin?

The other problem with em world is the lobotomy problem. We may not know the details of how the brain really works right now, but as soon as ems enter the picture that’s going to change real fast. A lot of the pressure to get brain emulations working is the idea that they could be used for medical research, ethics be damned. Once a brain is on the computer, you’re limited only by your imagination (and by having a suitably large army of technicians, and your ethics) in what you can do with it. Wanna try removing a piece and see if the rest still works? Go ahead, just don’t tell the ethics committee you did that. Wanna put the person on some drug that improves mental performance but has annoying side effects once it reaches a certain part of the brain? Just tell the computer not to let it enter that part of the brain. Wanna tweak the behaviour of neurons in other ways, where we don’t have an actual chemical that can do that thing? No problem. Wanna try making the brain gradually larger, stuffing in extra neurons as you go? It probably wouldn’t do what you wanted, but maybe if you tweaked around with the parameters you could get it to. Want to try connecting together parts of the brain in ways that wouldn’t even make sense in Euclidean geometry?

Our brains evolved to deal with all kinds of constraints, like having to have sex, navigating complex social hierarchies, having to obey the laws of physics and chemistry, and having to fit through the birth canal. Those kinds of things cease to be relevant once you’re digital and your reproductive success is determined purely by your ability to serve business needs.

So I anticipate that the “ems” that actually drive the economy will cease to be faithful copies of human brains extremely quickly. If there are economic benefits to dicking with a brain – either by removing unneeded parts to reduce computational costs, or by adding weird tricks to augment intelligence and performance – then funding the programmers, technicians and neuroscientists needed to carry out the work (who would themselves be ems) should be straightforward. Since this would be a new field of inquiry, the initial breakthroughs should come thick and fast.

These are “breakthroughs” in the Moloch sense of driving forward businesses that exist purely to compete with and outgrow other businesses, not in the sense of things that will improve overall welfare.

And pretty soon there might be so many of them that you’d barely recognize that what you have used to be human. It doesn’t look like a human in terms of the structure of its brain, and it doesn’t act like one. Is it still conscious? Is it having a good time? Who really knows – it’s just very, very good at some specialized task in the economy.

So that’s that.

In cases like this, either the person concerned (Hanson in this case) is just being wilfully obtuse (which seems unlikely), or there’s some underlying point of disagreement which no-one involved in the discussion actually manages to bring to the surface. A good debate should have a moment of “you believe THAT? I didn’t know you believed that, or even necessarily had a concept that this was a thing people could have different beliefs over”. I don’t think this has happened yet with Hanson, or with the people I mentioned in the previous post. Eliezer Yudkowsky certainly tried to find one of those moments in the Foom Debate, but from what I remember he and Hanson were mostly talking past each other the whole time, and it didn’t really seem like a model of how self-identified rationalists should conduct debates at all.

But what of an example where somebody did change their mind?

I’m going to pick on Holden Karnofsky (2012). He didn’t like SIAI (now called MIRI) very much.

There were two main reasons for this. Firstly, the organization itself seemed somewhat dysfunctional. Most people agree he had a point there – the extent to which he was ever wrong about that is merely a matter of degree. And most people seem to agree they’ve got better over time, too.

The second reason is the interesting one. Karnofsky had noticed that most people you’d regard as experts – the artificial intelligence research community – did not seem to be taking SIAI very seriously. He assumed this was because there were some arguments against the whole intelligence-explosion or unfriendly-AI concept, or against the value of trying to counteract it with research into the foundations of friendly AI. He had a few of his own ideas about why this might be, but was aware of his own lack of expertise in the area, so the lack of enthusiasm from experts seemed like the bigger factor.

This is interesting to me because it’s analogous to my own experiences with Big Philanthropy, but seen from the opposite side.

My own experience with Big Phil or Big Aid is (or was) that there was this big community of people with lots of domain-relevant knowledge, and they seemed to have very different intuitions from this upstart community of effective altruists. I assumed they knew something important that the EAs didn’t, and had a few guesses as to what it might have been.

Karnofsky saw this big community of machine learning researchers who had lots of domain-relevant knowledge, and seemed to have very different intuitions from the upstart community of SIAI and their groupies on Less Wrong. He assumed that the machine learning researchers knew something important that Yudkowsky didn’t (or at least was choosing to ignore).

I don’t think I’m cherry-picking similarities between the setups. I think they actually are kind of analogous. The twist is that Karnofsky knew a lot about aid, having been doing GiveWelly things for several years. And while I’m not an AI researcher, my career has been in software engineering and I have some comp sci, which should at least push my intuitions in the right direction.

The moment when I started taking both SIAI and GiveWell seriously was basically the moment I realized that the apparent experts couldn’t always be trusted.

This really does seem to be true – the best example is economics, where experts not only can’t agree on things that sound fairly basic, they also seem totally convinced that they’re right, and don’t present arguments from the other side. It’s all very fishy, and points to some kind of problem that seems to extend into philanthropy, machine learning and maybe lots of other fields too.

It’s really hard to get my head around, though. Culturally, we don’t have a narrative for the experts being wrong – or rather, there are plenty of countercultures that will tell you that, but it’s tied into some other narrative (such as a conspiracy) which is also probably wrong. More generally, our concept of people saying things that are wrong is basically oversimplified into either actively lying about the truth or just being stupid. What seems to be happening is way more subtle than that. Arriving at the actual truth is so, so hard, and you can go wrong in many different places, and sometimes there are actually pressures to be wrong instead of right; if any of that happens, you end up with lots of people being wrong without any active cover-ups or wilful stupidity.

I won’t go as far as to say that Karnofsky (2012) didn’t know that. But he seemed to have a slightly different concept of it than I did, which is especially interesting given that I arrived at this picture in some part because of him and his work with GiveWell.

From “Some Key Ways In Which I’ve Changed My Mind Over the Last Several Years” by Karnofsky:

I felt that MIRI’s lack of impressive endorsements from people with relevant-seeming expertise was the most important data point about it, and indicated that there were likely strong arguments – even if I did not know what they were – against putting resources into the kinds of concerns it indicated.

I’m sure I follow that kind of reasoning all the time. People are saying silly stuff constantly, and for some portion of it we can’t offer immediate counterarguments. Take Flat Earth: you know there’s some compelling argument that the earth is round, even if you haven’t bothered to look up what it actually is. And occasionally I, and everyone else, will miss something really important because it sounds like part of the crazy noise.

Once the noise about one particular issue reaches a certain level, and you become conscious that you’re following this particular line of reasoning, wouldn’t you at least go and look up what those knock-down arguments are supposed to be?

Well, Karnofsky did. He didn’t find one, and in an interesting move decided to try to come up with his own instead (the tool vs. agent thing). Finally he got a chance to speak with a lot of AI researchers in person, found that they didn’t really seem to know what they were talking about on the issue after all, and changed his mind.

Holden Karnofsky may be unusually good at changing his mind for the right reasons. If this is so then yeah, I’m going to need to be patient with people.

And maybe myself.

There are people who are locked into a belief and wouldn’t want to change their mind even if they were wrong. That’s obviously going to be a problem.

But there’s a large community of intelligent truth-seekers, of which the Less Wrong-style rationalists or effective altruists are a mere wart on the side. It’s not clear that this subsubculture with which I identify actually has better rationality skills on average. (I remember LW making attempts to determine this, with somewhat positive results, but I may have got that wrong.)

It’s possible that the typical member of the community doesn’t have better rationality skills at all but still ends up with more correct beliefs by virtue of taking their beliefs from the right people.

It’s possible that they have a particular knack for finding the right people to listen to, or it’s possible they were lucky to fall in with the right crowd.

In either case it doesn’t necessarily extend to a general factor of rationality. Listening to someone who is basically right about everything will get you some way, but to grow and develop beyond that you need all kinds of extra skills. You need to critically evaluate things, get the right answer somehow, and then change your mind.

It’s also possible we’re all just wrong, that the optimal thing to do is to give a whole bunch of money to Oxfam or to the Gates Foundation if they even let you give them money, or to question the capitalist system that gave us the concept of money in the first place, or try and live some lifestyle where you have minimal impact on anything, or some other widely accepted strategy.

I know I keep coming back to that – what if I’m the one who’s wrong after all – but that’s because it keeps bothering me. And it’s a pretty important part of any essay on why other people seem to be wrong about so many important things.
