I’m a straight white guy with a math degree and akrasia. In the most powerful narratives of the world, people like me are not the protagonists.
Change any of those five factors and the story writes itself.
People like me certainly feature in those stories, but as secondary, two-dimensional characters.
What if I’m not the hero of my own story? Or worse, what if I’m not even a fully developed character?
To a good approximation, unless you suffer from a severe learning disability, your mind is roughly as complex as everybody else’s.
When we think of somebody as complex, that isn’t quite what we mean. We mean they have some rich emotional life, or they are able to navigate a complex and difficult social setup, or just that we like them. These things don’t really apply to me.
But it’s important not to confuse this with actual mind complexity, in the sense of writing a program which emulates how that person thinks to a certain degree of fidelity, gzipping it, and seeing how large it sits on your hard drive. I think at some fairly deep level I have these concepts confused.
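The hard-drive intuition can be made literal with a toy sketch. Everything below is illustrative – gzip-compressed length is only a crude stand-in for the algorithmic complexity being gestured at, and the two "models" are made-up strings:

```python
import gzip

def compressed_size(text: str) -> int:
    """Length in bytes of the gzip-compressed UTF-8 encoding of `text`."""
    return len(gzip.compress(text.encode("utf-8")))

# A judgemental model is highly repetitive, so it compresses to almost nothing.
simple_model = "is a shit -> doesn't want to do stuff; " * 25

# A model with more distinct psychological detail compresses far less well,
# i.e. it "sits larger on the hard drive".
richer_model = (
    "tired after long days; no structure to come home to; aversion to "
    "ambiguous tasks; fear of failure; occasional bursts of activity "
    "triggered by writing fresh lists; status anxiety around hobbies"
)

print(compressed_size(simple_model), compressed_size(richer_model))
```

The exact numbers don't matter; the point is that a repetitive explanation carries very little information – the repeated judgement compresses down to a few dozen bytes, while the varied description stays close to its raw length.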
Meaning something like this: why am I not doing the things on the to-do list that’s sitting right there on the left, some of which have very low numbers written next to them? My alief is something like “because I’m lazy, because I have no stamina for annoying tasks, because I’ll fail because I’m a failure”.
The model looks something like:
- Stuff to do → Is a shit → Doesn’t want to do stuff → Stuff doesn’t get done
The model is fundamentally judgemental in nature, and also very simplistic. It generates the correct prediction of “stuff doesn’t get done”, but there’s no sense in which the model could get refined into a better predictor of when things do and don’t get done. Three weeks ago when I wrote the first version of this list, it was followed by a burst of activity. Why did that happen? The judgey model is silent on such matters.
But still I cling to it.
Why not some other model, like this:
- Stuff to do → a ton of psychological stuff → Stuff doesn’t get done
This model, as presented here, is equally simplistic in its predictions. But it removes any element of judgement, and it opens itself up to further refinement – out of which useful predictions may, or may not, pop out the other end. And if they do, that opens the door to strategies.
One of the main things I’m trying to accomplish with this writing is to try and unpack that “ton of psychological stuff”. I need to be gentle about it, and sometimes even oblique, because that self-judging is still happening and some of the elements may be uncomfortable for other reasons.
But I’m a programmer and I like understanding how systems work. My mind is such a system. It doesn’t open itself up to most of the methods of debugging that computer programming supports, but that’s just a challenge; it’s not some yucky thing I don’t even want to approach. Taking the original model, and using it to generate some strategy like “stop being a shit and start punishing yourself by doing things you hate so you can get something done with your life, you shit”, is squarely in the yucky territory.
I am well aware that any successful strategy is at some point going to involve doing things that are scary, aversive and yucky, and that excessive analysis can be a strategy for avoiding this. But hear this.
One problem I’ve had over large parts of this year is staying too late at work. It hasn’t been essential for meeting any deadline, it hasn’t helped advance my career, and on the face of it, it seems pretty dumb. It also doesn’t fit the predictions of “procrastinators are just lazy”.
I brought this issue up at a local rationalist meeting recently, and reflecting on it made me realise that part of the problem is not the work itself, but the fact that I have nothing to come home to. Sometimes I’ll have some activity scheduled in the evening, but usually I won’t, and I don’t have a girlfriend to come back to, or really any idea of what I should be doing with my time when I get home. This is a big problem for maintaining work/life balance, because I can just stay later and later at work and there doesn’t seem to be any compelling reason to stop and go home.
So I need things to do.
This is the opposite of the way I’ve been telling myself it should work. A good rationalist – especially an effective altruist – shouldn’t be looking for things to do. There should be such an enormous pile of stuff that needs doing that it’s just a matter of prioritizing. Somehow I need to fit the reality – where there’s no affordance to start working on any particular task – into this picture.
Life is about doing a bunch of things, pretty much none of which are optimal.
This is trivially true – at any point in time we can’t really know what the optimal thing is, and various other factors constrain us into making compromise decisions. But it’s something I haven’t fully taken on board.
Because whatever I’m going to fill my time with when I return from work, it’s not going to be the EA-optimal thing. It may not obviously relate to EA at all. The main criteria for such a task are that it be more productive than sitting there refreshing Facebook, and that it be something I actually expect to do.
The to-do list, as it currently stands, fulfils the value criterion but not the actually-going-to-happen one. And so for this purpose it’s inadequate. Writing things on this blog, however, seems to satisfy both.
Anything which satisfies them even better would of course be a step up, especially as this blog has existed for a few weeks and yet I haven’t felt the urge when I come home to start writing on it. Is it because I haven’t officially sanctioned it somehow? Is it because writing a blog that nobody reads is such a low status thing? May be worth some further thought.
Oh yeah – well, it’s a secret blog, so I can’t tell people I’m doing it; that certainly contributes. So maybe – and I just realised I’m goal factoring here – there’s an additional criterion of “something I can talk about and is socially acceptable”.
This doesn’t seem like it should be difficult to satisfy.
- My tactic of going for long walks after work satisfied all three, but runs into problems now that it’s getting dark earlier.
- Meetups seem like an excellent choice. The effective altruist meetup runs into some can-talk-about-it issues, but if I just talk about the rationalist movement and kind of skim over the fact that we’re actually trying to use it to help people then that’s easier. It seems worth looking into why I don’t more actively explore other meetups though.
- Writing this. Fails “can talk about it”, as discussed earlier.
- Tasks from my to-do list. At least some of them can be made to happen with some pushing, but it’s a rather erratic supply of stuff. I can’t guarantee the list is always going to supply something which I’m in the mood to do and which isn’t blocked by someone else. Recently it’s been my (inaccurate) answer to “what do you plan to do this weekend”, which I kind of wish people would stop asking as if it’s some kind of lite small talk – like haven’t they noticed I’ve answered “nothing” three times in a row? – but I suppose they can’t be blamed.
- Learning Kotlin (a programming language). There’s definitely a sense in which I know more programming languages than I need to, but it still seems to fulfil the three criteria listed here.
- Also programming related, writing some very simple Facebook app which I suppose I shouldn’t describe as it might let you identify me.
So I’m going to go home from a hard day’s programming… to do more programming? Something seems wrong about that. I don’t think it’s “actually going to happen” that’s the problem. While work does drain my enthusiasm for programming, it doesn’t remove it entirely, and I could carry on with fun projects in the evening if I felt it was the right thing to do. But it feels like there’s some other objection here.

The first objection which comes to mind – and I don’t know yet whether it’s the true objection – is that stacking more programming on top of a very heavily programming-related lifestyle is not going to help round out my personality. There’s something which seems fake about this objection – after all, the motivator behind this was not to round out my personality as such, but to spend less time at work – and there’s a sense in which, if I’m doing fun projects instead of work projects in the evening, I’m not doing any more programming overall.
The second objection is a striving-for-optimality one. Even if learning Kotlin isn’t going to make me more of a nerd than I would have been in the status quo scenario, it doesn’t add anything useful to my CV or skillset. My model of “other people” in this context would be them urging me to take up a totally new interest, like knitting or creative writing.
Knitting totally fails the “can talk about it” test. The obvious question people will ask if you tell them you are knitting is “what are you knitting?”. This surpasses even “what are your plans for the weekend” on the unhelpful small talk scale. People just do not accept an honest answer of “a small and extremely unattractive square that can serve no practical purpose”, and I’m saying this from experience. You can just lie and say you’re knitting a scarf, in the sense that a square could eventually turn into a scarf if you carry on adding to it long enough, but it’s still a lie.
So fuck knitting. Creative writing – which is really just another meetup, since I’m imagining here a particular group where we gather and write things and then read them to each other – seems better. It still doesn’t seem to count as a new interest, since I would know that what I’m writing is drivel and would continue to be so without a more structured lesson environment. It does slightly better on the optimality front, given that a lot of EA career ideas would involve some written communication and that’s something you can build up just by writing lots of things. It’s tenuous, sure, but that’s ok at this point.
But underlying all this is a conflict. The conflict may be between the desire to be normal and fit in, and the desire to accomplish goals in a more systematic way.
“Become a more rounded person”, “take up a new interest”, “be able to come up with stuff when people ask what you’re doing over the weekend” have to do with fitting in. The cherry at the top of this big mound of signalling is the VOLUNTEER, who hangs around an organization with some worthy purpose and does useful things. I guess EA organizations sometimes use volunteers, but they’re likely to ask annoying questions like “do you know what you’re doing or are you just going to get in the way?” The rest of the world is a bit more lax about these things, and generally if you tell people you’re volunteering somewhere they would be impressed by that, as far as I know. But volunteering for a regular organization is like crapping on your own identity as an EA. See something the org is doing which seems inefficient or doesn’t make sense? Better stay quiet about that. (This may be wrong, by the way – I’m not saying it from experience).
So there seems to be a sense in which trying to fit in leads in a direction I don’t want to be led in, even if it’s hard to articulate the details. But there’s also a sense in which it’s really attractive. Being a shallow person is something I judge myself about a lot, and neither Less Wrong nor effective altruism has changed that. A lot of this path of fitting in is about giving off visible signals of depth. I’m not just a programmer – see – I also knit!
If it was just about impressing other people then it might be easy to shake off. But it isn’t. There really is a sense in which the other people seem to be right about this one, that focussing too narrowly on one thing can be stressful and so on.
The other side of this battle – the optimizer – comes with its own problems. Whatever seems like the optimal path is going to tend to be one that I’m not actually going to follow. If you were to create an optimal path just for me, it would probably start off in a rather roundabout fashion as I tackled all the issues that were blocking me from doing useful stuff, and then I could actually go ahead and start doing the useful stuff. At least that’s how I imagine it.
So I need to tell myself things like “I know it seems like doing this thing isn’t going to help, and the exact same behaviour could be explained away with some bullshit motivation, but I am in fact doing it because I think it’s the right thing to do; it’s just that the reason is somewhat complex” – and be right about that, and also believe it, at the same time.
That seems like a big ask for the optimizer.
The fitter-in does not have that problem, but there’s still a conflict even when the actions suggested by the fitter-in agree with the actions suggested by the roundabout-optimizer. The optimizer is really scared of the fitter-in winning – that I might end up losing any effective altruist motivations and just be sucked into doing whatever is most socially convenient. At the same time, writing that, I’m not sure I really believe it – whatever caused me to be interested in EA has survived a variety of insults already and still seems to be intact.
It’s like when I hear about goal factoring and people say: don’t worry, you can goal factor as deep as you like; you won’t ever evaporate your desire to help people by discovering you’re really motivated by purely selfish concerns. You actually do want to help people, otherwise you wouldn’t be worried about that happening in the first place. As far as I know this seems to be true empirically, but deep down I don’t believe it.
And so the chart I posted above feels slightly fake. It seems to be glossing over these conflicting clusters of motivations – which can probably be aligned with each other but it’ll take a lot more than three boxes.