cloud computing

My day job involves “the cloud”, offering services using rented virtual computing provided by large datacentres. This post is not about that.

This post is about the other kind of cloud.

More generally it is about how people can be wrong – or at least have different intuitions from my own – about one specific (but often hard to nail down) thing, and then build elaborate theories on top of this apparent mistake.

It’s important because I can’t automatically fact check my entire brain, and so a large chunk of what I believe might be predicated on some weird fact or intuition that turns out to be wrong. This isn’t something I can study directly, but I can study it in other people under the ongoing assumption that, just sometimes, it’ll be the other person who ends up being right.

Generally speaking, the go-to example of a smart person being very wrong is Roger Penrose.

Penrose contributed a ton of stuff to our knowledge of physics. I won’t pretend to understand any of it myself, but it seems to be valuable, well respected and ultimately uncontroversial. In other words, this is a smart guy, not a crackpot, and presumably capable of understanding logical arguments.

His Big Idea though, the one which pisses so many people off, has to do with physics and consciousness. There are two sides to the argument. One is a mechanism – which is something to do with microtubules. The mechanism sounds a lot fancier than what could realistically be deduced from experimental evidence, but I’m not an expert so maybe that isn’t the case. The other side, which seems to me to be the driving motivation, is the Gödel stuff.

Gödel’s incompleteness theorems state, roughly, that there are three kinds of formal system:

  • those which are unable to prove their own consistency
  • those which are too boring and rubbish to even formulate the question of their own consistency
  • those which are inconsistent.

An example of the sort of system we’re talking about is first-order logic plus Peano arithmetic. It can be described completely as a formal system – whether or not you can prove something from something else is just a question of whether you can get there by following some very mechanical and precisely defined set of rules. As far as we know it’s consistent, and if it weren’t then that would be a huge surprise. It certainly has enough expressive power to formulate its own consistency – you just write down formulas as big numbers and express the rules of logic as annoyingly convoluted arithmetic operations. And so Gödel’s second incompleteness theorem says that it can’t be used to prove its own consistency. If you want to do that then you need a stronger system to do the proving from the outside – a system which, necessarily, you have less confidence is actually valid.
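Just to make the “formulas as big numbers” trick slightly more concrete, here is a toy sketch in Python (my own made-up alphabet and prime-power coding; the real arithmetization, which also has to encode proofs and a provability predicate, is vastly more involved):

```python
# Toy Gödel numbering: turn a "formula" (a string of symbols) into one big
# integer by prime-power coding, and decode it again. This is only the
# "formulas as big numbers" step; real arithmetization also encodes proofs
# and a provability predicate, and is vastly more involved.

def primes():
    """Yield 2, 3, 5, ... by trial division (fine for toy sizes)."""
    n = 2
    while True:
        if all(n % p for p in range(2, int(n ** 0.5) + 1)):
            yield n
        n += 1

SYMBOLS = "0S+*=()~&|>Axyz"  # a made-up alphabet, not any standard one

def encode(formula: str) -> int:
    g = 1
    for p, ch in zip(primes(), formula):
        g *= p ** (SYMBOLS.index(ch) + 1)  # exponent = 1-based symbol code
    return g

def decode(g: int) -> str:
    out = []
    for p in primes():
        if g == 1:
            break
        e = 0
        while g % p == 0:
            g //= p
            e += 1
        out.append(SYMBOLS[e - 1])
    return "".join(out)

if __name__ == "__main__":
    n = encode("S0+S0=SS0")  # "1 + 1 = 2" in a Peano-ish notation
    print(n)                 # one very big number
    print(decode(n))         # back to S0+S0=SS0
```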

The proof is something like:

  • Take the statement “this statement can’t be proved”.
  • It can’t be proved, can it? Because if it could, it would have to be true – and what it says is precisely that it can’t be proved, which contradicts the fact that we just proved it. So it can’t be proved. <– Gödel’s First Incompleteness Theorem
  • But this sounds a lot like a proof that it can’t be proved! Which equals a proof that it’s true! Which is the opposite of what we just said.
  • The only way out of this is to notice there’s an implicit assumption that we know our own system of logic is valid. Once you make that assumption explicit there’s not actually a contradiction, but in the process we have to give up any hope of proving our own system valid (or more precisely, proving it consistent). <– Gödel’s Second Incompleteness Theorem

Obviously the actual statements and proofs of the theorems are a bit less crass than that, and a lot more technical. But they’re still something that someone trained in mathematics can understand.
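For reference, the standard statements run roughly like this (my paraphrase; the exact hypotheses and notation vary from textbook to textbook):

```latex
% Rough statements only: the precise hypotheses (effective axiomatization,
% the choice of provability predicate, Rosser's trick for the first theorem)
% are glossed over here.
\paragraph{First theorem.}
If $T$ is a consistent, recursively axiomatized theory containing enough
arithmetic, then there is a sentence $G_T$ with $T \nvdash G_T$
(and, with a little more work, $T \nvdash \lnot G_T$).

\paragraph{Second theorem.}
Under the same hypotheses, $T \nvdash \mathrm{Con}(T)$, where
\[
  \mathrm{Con}(T) \;:=\; \lnot \exists p \;\mathrm{Prf}_T\bigl(p,\ \ulcorner 0 = 1 \urcorner\bigr).
\]
```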

To see where Penrose is coming from you have to imagine an environment where everybody totally understands Gödel’s incompleteness theorems, or at least thinks that they do.

Now imagine that one of these people’s minds can be exactly described by a formal system. Such a person would look at the formal description of their own mind, somehow recognize that it’s themselves, and say “oh yeah that’s totally consistent”. This is a contradiction – because we’ve said formal systems can’t do that – hence our original assumption is wrong.

This is total bs of course, since in no way is any human mind consistent (in the logical sense) in terms of what claims it can come up with. And neither would we reliably recognize a schema – which in practice would be a ton of garbage stretching into the terabytes and beyond – as being a faithful description of how our own mind works. And people have explained this to Penrose, and he listened, and yet continued not to change his mind. What was going on?

The next example is Stephen Wolfram, the creator of Mathematica and Wolfram Alpha. He has some cute ideas about how our universe is a big cellular automaton, which have a lot of appeal but seem slightly suspect once you notice that our universe has a lot of symmetry to it – you can turn an object through some arbitrary angle, or push it so it starts travelling with some arbitrary relative velocity, and it still behaves the same way. This is basically never true for systems made up of a huge grid of little cubes. This might fall into the category of mistakes that I’m talking about, but I want to focus on something else.

Try downloading a cellular automaton simulator such as Golly and playing with the rule set. By default it will be set to Conway’s Game of Life – a system with unusually pleasing and almost obsessively well-studied behaviour. A live cell with two or three living neighbours survives to the next generation, a dead cell with exactly three living neighbours comes alive on the next generation, and every other cell ends up dead. But from within the software you can play around with those rules – what if instead of 23/3 you choose 34/34? Or 123/123? Or 345678/45678?
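If you’d rather poke at this in code than in Golly, here is a minimal sketch of one generation of such an outer-totalistic rule in plain Python (the survive and birth sets are the two halves of a rule string like 23/3; none of this is Golly’s own API, just a toy):

```python
# Minimal sketch of one generation of an outer-totalistic ("survival/birth")
# cellular automaton. Conway's Life is survive={2,3}, birth={3}; swap in
# {3,4}/{3,4} or the other rule strings above to experiment.
from collections import Counter

def step(live, survive=frozenset({2, 3}), birth=frozenset({3})):
    """Advance a set of live (x, y) cells by one generation."""
    # Count live neighbours of every cell adjacent to a live cell. (Live
    # cells with zero live neighbours are dropped, which is fine for
    # Life-like rules where a neighbour count of 0 never survives.)
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbour_counts.items()
        if n in (survive if cell in live else birth)
    }

if __name__ == "__main__":
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    pattern = glider
    for _ in range(4):           # after four generations a glider has moved
        pattern = step(pattern)  # exactly one cell diagonally
    print(sorted(pattern))
```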

Generally you’ll discover that rules fall into a few basic classes. There are rules where everything dies out, with patterns either disappearing entirely or quickly disintegrating into some stable but boring turds. In other rules, every pattern explodes until your entire screen is filled with garbage, usually with very little in the way of stable structures inside. Some rules let patterns dilly-dally for a bit before entering one or other of those modes (for example I remember 34/34 as exploding pretty reliably, but some small patterns would oscillate or die out instead).

In the middle there are the rules with the most interesting behaviour, of which Conway’s rules are the shining gold star. Patterns won’t die out immediately, but neither will they explode – instead you get patches of active slime separated by some more stable boulders, and if you’re lucky some extra goodies like little oscillators or even spaceships (stable repeating patterns which travel across the screen). The active slime will behave unpredictably, sometimes growing in extent, sometimes dying out completely, sometimes shifting this way or that. With enough experience of a particular rule, you notice that certain patterns – stable ones, but also patterns within the active slime – keep reoccurring. Exactly what those patterns are adds a lot to the character of that particular rule.

In the case of Conway’s Life, a lot of its erratic activity is due to a small number of frequently reoccurring patterns, given names like the Pi or the B heptomino. In the end, Conway’s Life is a stabilizing rule, with all but a few extremely carefully engineered patterns eventually dying out into collections of up to around ten common still lifes and small oscillators, plus maybe an exotic still life or two and of course flotillas of the iconic “glider” escaping the scene along each of the four diagonals.

That’s all very romantic but there was some point to this and it involved Stephen Wolfram.

From what I remember, in his Principle of Computational Equivalence, cellular automata (generalized to other kinds of system, but I’ll stick with them for now) are put into just two categories. There are the ones that are obviously boring and can’t be used to build any kind of computational system, for fairly straightforward reasons (like a CA where each cell inherits the state of the cell one square to the right: patterns would drift leftwards over time but could never actually do anything). And then there are the ones where it looks like, with enough effort, you could imagine building some kind of Turing machine.

The first part of the thesis states that for most of those kinds of rules, you can in fact build a Turing machine. That seems pretty interesting to me. It’s hard to formalize “rules which look sort of like this are almost always Turing complete”, but it’s a nontrivial, testable prediction which, if true, potentially says something mathematically deep.

The second part of the thesis is the more confusing part. Remember that proving something Turing complete means building a universal Turing machine within that particular system. But this can be a very precise engineering effort. You can certainly build Turing machines within Conway’s life, but if you get one pixel wrong the entire thing can disintegrate. They don’t occur “naturally” from patterns of random dots, except in the uninteresting statistical sense that if you have a 10^big sized random thing, you’ll find instances of pretty much whatever you like in there somewhere.

Now imagine that you ignore all of that and say that, because Conway’s Life (and the other Turing complete rules) are capable of simulating Turing machines, therefore anything that goes on in there is essentially “the same kind of computation”. Including consciousness.

Chaotic cellular automata are certainly unpredictable, and have a kind of irreducible complexity to them. If you want to know the state of a particular pixel in 1000 generations’ time, you basically just have to simulate the entire pattern unfolding for 1000 generations and then look and see what happened. But to my intuitions, a boiling sea of random stuff – in a cellular automaton or a weather system or an ocean – even if it’s following deterministic rules, really is just random at a higher level of abstraction. It’s not secretly calculating prime numbers or working out the meaning of life. For that you need two things. The first is walls between processes, so that one calculation doesn’t automatically interfere with another (from what I remember this was the main barrier to doing anything useful with the Malbolge joke programming language, though in that case, as with many cellular automata, it could be worked around with a sufficiently precise and disciplined engineering effort).

The other thing you need is some guiding process that selects for things doing interesting things. This could be a human programmer designing regular software or a hobbyist creating some weird marvel out of well-studied building blocks in Conway’s Life. It could be evolution creating us living things. But it doesn’t just happen all day in all things, like in the soup that you’re stirring. Wolfram seems to think that it does, so either I’ve misunderstood him or we disagree.

Another thing smart people seem to be wrong about, and in this case I’m not singling anyone out in particular, has to do with estimating the code size of a human-level artificial intelligence.

The argument is this: a reasonable upper bound can be given by the size of the human genome, which is less than a gigabyte.
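For what it’s worth, the arithmetic behind “less than a gigabyte” is just this back-of-the-envelope sketch (roughly three billion base pairs at two bits per base, before any compression):

```python
# Back-of-the-envelope size of the raw human genome.
base_pairs = 3.1e9          # roughly; published estimates vary a little
bits_per_base = 2           # four possible bases: A, C, G, T
size_gb = base_pairs * bits_per_base / 8 / 1e9
print(f"{size_gb:.2f} GB")  # about 0.78 GB, comfortably under a gigabyte
```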

What I’m arguing against is not this claim (which in fact I agree with) but the counterargument which says no, in fact it does not work like that.

It’s important to note what this argument is not saying. It’s not constructive, in the sense that it isn’t saying that the way you would go about building a human-level AI is to try and simulate an actual human, with most of the data involved in that project being the genome itself.

That version of the argument does indeed seem wrong. You would also need to simulate the cell that the DNA is put into, and running a search through different kinds of cell biology until you find one where this particular DNA sequence is a kind of fixed point seems pretty ridiculous. So you’d need to put into your program some description of how your initial sperm and egg are structured, which might itself run to several gigabytes. You’d also need to program in the laws of physics (which are probably pretty small, but we don’t actually know how they all work at a theory-of-everything level, so a working approximation of them may be rather large). And you’d need to program in a lot of hacks about what exactly you can get away with ignoring in order to simulate the whole system at anything like reasonable speed and memory usage.

The fact that we’re not saying this, but are saying something that sounds similar, may be one reason why it’s hard to communicate this intuition. Like, if you don’t want to change your mind about something, the fact that the alternative is near to some obviously ridiculous strawman version is helpful in ignoring it. But still, people advocating for the <1GB AI hypothesis carefully explain that that isn’t what they’re saying, and still don’t get through.

Imagine the following program, in a hypothetical programming language:

  • ai( level = 'human' );

That’s around 22 bytes long, and satisfies our requirement of describing a human-level AI. But it’s also clearly cheating. Such a program would only work if the language already had a built-in library that does exactly the thing we need it to do. In practice no language will have this: instead it will contain a bunch of general-purpose arithmetic and data structures, and anything more complex has to be built up, with a great degree of precision, from those basic blocks.

So our embryo’s DNA works by telling its cells how to build a brain and other supporting infrastructure like a heart and digestive system. The cells are complex to begin with, yes, but there doesn’t seem to be any sense in which that complexity helps with designing a brain. They don’t each contain a secret library of algorithms which can be implanted into the brain via some kind of magic – any such algorithms are coded, albeit extremely indirectly, into the DNA.

Also, what makes a brain smart is not just its initial programming but what it learns from interaction with an environment. An artificial intelligence would have access to an environment too though – my specification didn’t say it has to be at human level as soon as you hit run; an initial training period is ok. So I don’t see this presenting any kind of a problem.

What do these examples all have in common?

Firstly, I’m not able to explain the counterarguments particularly well. If I were to get into an actual argument with a person about any of these, I could quickly be rebuffed by someone taking the argument into the technical weeds until I no longer knew what I was talking about. But they seem like the sort of thing where you should be able to say “no, you can’t win this argument just by knowing more about the subject than I do”. Like in each case I have an actual point, and in each case, if I’m wrong, I should be able to understand that without needing to learn a whole bunch of only indirectly related things.

Not everything is like that.

An example where it wasn’t would be Eliezer Yudkowsky being surprised when the Higgs boson turned up. His thinking had been: based on various clues, these quantum people seem overconfident, so any specific prediction – even one which is mostly agreed upon within the community – is probably wrong. In this case they were right, and the reason seems to be that if you understood quantum field theory as well as they did, then the Higgs pretty much had to be there.

I’m still holding out hope that it’s possible to explain the basics of quantum field theory in a way that would make sense to people who don’t have that exact specialization, but nobody seems to have managed it yet. It seems to be an area of enquiry where there’s a huge amount of difficult math you need to learn, and once you do you can start saying sensible things about the subject. This is annoying, but it’s at least useful to be able to correctly classify a discipline that way.

It does not seem to be the correct way to classify the study of cellular automata, or the computational complexity of AI, or the mathematical physics of consciousness. In all of those cases there is a certain amount of knowledge or experience that helps with making sensible statements and predictions. In the Penrose case it’s an understanding of Gödel’s incompleteness theorems, which can probably be explained by a patient expert to a patient non-specialized smart person in the course of an afternoon. In the Wolfram case it’s about having played with some cellular automata to see what they do, and maybe gained an understanding of how the various constructs in Conway’s life fit together to the point where you could imagine trying to build a Turing machine with them. In the 1GB DNA case, I’m not really sure.

Poking around with things that look interesting, such as cellular automata, can certainly train your intuition. But it’s not the same as a scientific discipline. Stephen Wolfram has been poking around with cellular automata a lot more than I have, but if this involved learning a bunch of mathematical theorems then I’ve never heard about it. If Penrose’s expertise in physics leads him to a better understanding of what the hell Gödel’s incompleteness theorems have to do with consciousness, then I don’t really see how. If knowing more than I do about how cells or neurons or childhood development work also leads you to estimate the code complexity of an AI as being 100 times greater – and being right about that – then I’m confused as to why.

So the disagreement seems to be happening on something of a meta level. It’s not just “you’re saying something that sounds wrong”. An example of that object-level disagreement with an expert would be learning about quantum mechanics for the first time. It just doesn’t seem to make sense that whatever’s going on with particles couldn’t be explained by some local hidden variable theory. But the people who study it say there’s some particular math and some particular experiments and once you understand those you’ll see that your classically-trained intuitions really are just wrong. And having read a lot of poorly written stuff about it on Wikipedia, I think they’re right and I think I understand why.

But even when I didn’t, I could understand that there was some expertise gap that needed to be crossed, with maybe a secret pot of knowledge on the other side. In the case of the examples I mentioned here, I don’t even get that sense. It’s like I’d need to gain a sense of “why you’re the sort of person who would be right about this thing that seems like it should be wrong” before I could take that person seriously.

To make it worse, people for the most part don’t play fair when debating. I want to write more about what I think that means later. Also making it worse is when a rather large number of equally well qualified people agree with me and vocally disagree with whoever’s the target of my particular gripe.

It’s so easy to just pick a side and assume the other side is talking nonsense. For some people it’s also all too easy to go for total relativism, and say maybe there’s a sense in which everyone’s right? Or go the enforced ignorance route where you say there’s no way we can even guess as to the answer to this one. Out of all those traps, I think I’m most likely to fall into the “My people are right and your people are wrong” trap.

It gets really freaky when you actually do start being right more often than chance would dictate. I’ve seen little glimpses of that effect, but I don’t think I’m at that level yet.

So how does this relate to EA?

I’ve traced out a classification scheme for disciplines. There are the ones where experts know what they’re talking about and everyone else doesn’t. And then there are the ones where you can at least start meaningful discussions from a position of dabbler.

On the face of it, the study of how best to help people should be in the first category. There are people who devote their entire lives to either studying or carrying out activities you might describe as altruism or aid. And from the outside it all looks really complex, and to make any real kind of a difference you somehow need to become an expert in all these different things.

Then the effective altruists come along and say: hey, that all sounds really hard, why don’t we just leave it to a small set of researchers, who can then tell us where we should give our money? And then we’ll give our money there, and not really have to think about the problem much, except keeping an occasional eye on things to make sure our appointed experts don’t become corrupt or useless.

This was a blow to my original worldview, and it came from an unexpected direction. What I would have expected is that attempting such a system would be pointless, because all the high-value giving opportunities would become fully funded as soon as anyone noticed them. All that would be left for us non-experts would be sifting through a large number of much-of-a-muchness opportunities, deciding which ones square best with our value systems and our beliefs about how the world works.

I would have expected the criticism of the Against Malaria Foundation to be “no way can they save a life for $3500. The market value of a life is at least $1m”.

So it’s a little unsteadying that critics of effective altruism haven’t come out and said that. Of course if they did, they’d quickly run into trouble, because the research into AMF’s effectiveness is pretty thorough. It’s also concerning (and maybe exciting too, because it suggests an opportunity?) that charities’ annual reports don’t really tend to say anything about how effective their programs actually are. In my original model, where everyone was striving for effectiveness all the time, that should be a staple of every charity’s boasting. Here in reality it’s not so.

My tentative conclusion is that the effective altruists have got it right. It’s a tough one though, because there are only a few thousand of us out of a target population of hundreds of millions of reasonably well off people. Thinking that you’ve got something right that 99.99% of the population have got wrong is a really bold claim. I don’t have any outside view reason for expecting to be that correct. You might be able to come up with some measure of intelligence that would put me in the top 1%, but that still doesn’t give you enough nines.

Of course EA is growing, and logically some people have to be the early adopters of any new philosophical or social movement. It’s possible that EA will turn out to be basically right and also hugely popular. It’s possible that it will turn out to be wrong for some reason that still has me confused.
