Let’s take a look at the being-late-for-work thing.
Revealed preference would say that I want to be late to work for some reason, since it consistently happens and I’m broadly in control of the relevant factors. I’ll stick with the single-agent revealed-preference model for now: there’s a variant in which another agent speaks in words and is generally weaker but able to make some clever interventions, but that doesn’t come up here.
It’s still not clear why I’d want to be late for work though. Let’s explore.
The mechanism for being late is well understood. At the core of it is something I would call snooze chaining: each time my alarm goes off, I decide that I don’t want to deal with whatever the day has to offer just yet and press the snooze button. The first few times this will happen with near certainty. After a while the urgency of having to get to work will dominate and I will actually get up, but usually not with quite enough time to spare.
The obvious explanation for this behaviour would be that I don’t know exactly how long it takes to get up and go to work. But I don’t think this is true at all: it’s happened enough times that I do know.
The next explanation is that first thing in the morning I’m feeling groggy and not making decisions correctly.
Taking that one at face value, there’s a strategy for dealing with it. I talked about trigger action plans before, so maybe I mentioned it? The idea is to “practice” the initial portion of your morning routine as if the alarm has just gone off, except you do the rehearsal at some other time of day rather than in the actual morning. When the real morning rolls around you might feel some slight resistance, but you also find yourself starting to spring out of bed, just because that’s what you trained yourself to do.
I tried it before and it seemed to work for a while.
Three things to note about this though. The first is that it’s a “hack”, not “engineering” – this kind of approach doesn’t generalize very well, and whatever was making me so conflicted about starting the day is presumably still in full effect.
The second and third points are closely related: there are things I could be doing to address this problem that I’m not. The second is to actually do the rehearsal ritual as I described. The third is to handle bedtime better, since staying up too late was the primary reason the system broke down when I tried implementing it before.
So this isn’t just a problem with groggy-me; it also concerns rest-of-the-time-me not implementing strategies that might help. Implementing the snooze-alarm training is something I could just do, again, but this is definitely “hacking” not “engineering”, and the previous result suggests it might not work long term unless I at least fix the other thing, the one that causes me to stay up late.
It’s possible, of course, that my late-night behaviour is also an artifact of groggy-me, and that I could similarly train myself to respond to a go-to-bed alarm the way I do to a getting-up alarm. But I did warn you that I don’t think this particular technique generalizes, and subjectively at least, my decisionmaking late at night feels similar to my decisionmaking the rest of the time.
So I’ve discussed:
- what happened
- a possible fix or non-fix for it
But I still haven’t gone into why. It’s not really apparent that I want to be late to work. It’s not like wanting ice cream where you reveal a preference for it just because it’s enjoyable. Turning up late and making everybody grumpy doesn’t feel good at all.
Let’s think of another model. What I need to explain is why I usually end up only slightly late.
Suppose that utility as a function of lateness looks like this:
For negative t, the function is some constant. For 0 < t < k it plateaus at some lower constant. For t > k it declines fairly rapidly.
The heuristic behind this is that if I’m on time or early, everything is fine. As soon as I’m slightly late then people get irritated. Once I’m more than a certain amount late then it becomes ridiculous and I have to warn people in advance over Slack and so on that it’s going to happen.
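That shape can be sketched in a few lines of code. All the numbers here are invented for illustration – the constants, the slope, and the cutoff k are arbitrary; only the shape matters:

```python
def utility(t, k=30):
    """Toy utility as a function of lateness t (in minutes).

    Illustrative shape only: a constant for on-time-or-early,
    a step down to a lower plateau for slightly late, then a
    fairly rapid decline once t passes k.
    """
    if t <= 0:
        return 0.0           # on time or early: everything is fine
    if t <= k:
        return -1.0          # slightly late: people are irritated, but it's flat
    return -1.0 - 0.5 * (t - k)  # seriously late: declines fairly rapidly


# Early is no better than just on time:
assert utility(-10) == utility(-1)
# The plateau between 0 and k is flat:
assert utility(5) == utility(25)
# Past k, later keeps getting worse:
assert utility(40) < utility(31) < utility(30)
```

The interesting feature is that there are two steps a decision-maker could be optimizing against: the one at t = 0 and the one at t = k.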
The hypothesis, then, is that the step I’m optimizing to get in ahead of is not the t = 0 step but the t = k step: I’m optimizing to avoid being ridiculously late, not to avoid being late at all. In a sense I am leaving some utility on the table, but it’s only utility according to one part of my mind. According to the part which is in control in this situation – which in the revealed-preference model would correspond to my actual desires – this extra bonus utility is pretty small, i.e. irritated people are more or less equivalent to happy people. But all parts of the decisionmaking process agree that being seriously late will have consequences and needs to be avoided if possible.
Here there are two graphs with separate units on the y axis, though I’ve shown them closely tracking each other once t gets past a certain point. The “revealed utility” (in the sense of revealed preference) is the one I’ll make my decisions around. This is completely flat around t = 0; it really doesn’t care. The “how good it feels” is just that, and is what I feel I ought to be making decisions around, since I don’t want to reliably do things that make me miserable.
There’s a gap between them that needs explaining.
In a sense both graphs are about managing shame. The bad feeling I get when I’m late – and the worse feeling I get when I’m really late – is shame-like. My hypothesis then is that I have two shame pathways: “shame-1” contributes to the red graph, and “shame-2” explains the discrepancy between the graphs.
(Remember that shame drags things downwards in this picture because upwards means yay. How one graph is positioned vertically relative to the other is arbitrary).
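The decomposition can be made concrete with the same kind of toy sketch as before. Again, every shape and number here is invented for illustration, and the vertical offset between the two curves is arbitrary:

```python
def shame1(t, k=30):
    """Consequence-tracking shame: negligible until seriously late."""
    return 0.0 if t <= k else 0.5 * (t - k)

def shame2(t):
    """Self-image shame: a flat penalty for any lateness at all."""
    return 0.0 if t <= 0 else 1.0

def revealed_utility(t):
    """What my decisions actually optimize: only shame-1 counts."""
    return -shame1(t)

def felt_utility(t):
    """How good it actually feels: both pathways drag it down."""
    return -shame1(t) - shame2(t)


# The gap between the two graphs is exactly the shame-2 contribution:
for t in (-5, 10, 45):
    assert revealed_utility(t) - felt_utility(t) == shame2(t)

# Revealed utility is completely flat around t = 0...
assert revealed_utility(-5) == revealed_utility(15)
# ...while felt utility steps down as soon as I'm slightly late.
assert felt_utility(15) < felt_utility(-5)
```

In this toy version the two curves track each other past t = k (both dominated by shame-1) and diverge on the plateau, which is where the discrepancy that needs explaining lives.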
What would shame-1 and shame-2 be about?
My guess is that shame-1 is the stuff that has consequences. If I’m ridiculously late too many times, people will start to take action around that in a way that isn’t good for me.
A good example of how shame-1 works is breaking builds on the evening before demo day. If I commit some dodgy code on Wednesday afternoon and it breaks the build, then no-one’s allowed to demo their work the next day, everyone notices and consequences are pretty real. I won’t say that’s never happened, but I will say that I take reasonable precautions to make sure that it doesn’t. If I ever do break things, at a critical time or otherwise, then I learn from that and I’m less likely to make that kind of mistake in the future. I don’t have to apply some additional layer of making myself not want to do that thing or anything crazy like that.
Shame-2, by contrast, can keep on happening and I won’t learn.
I’m guessing that our brains have got pretty good at calculating what we can get away with in terms of social misdemeanours, and that this is encoded as shame-1. I won’t claim to be good at social skills; that isn’t actually necessary for this model. It’s more important for the model that shame-1 is well-calibrated than that it’s clued into all the details.
Anything left over would be shame-2, and if shame-1 is unbiased then shame-2 does not relate to social consequences in any obvious way. The question then is why I would feel anything at all – it seems to be something about my self-image?
They do actually feel sort of different. Shame-2 is sort of “I hate myself”. Shame-1 is more “oh crap” – there’s a feeling of having made an actual mistake, as opposed to an ongoing dismalness.
The question then is, what else would this model predict? Is there anything shame-related that it doesn’t predict correctly, which would suggest that it needs to be thrown out, or that it could be refined into something more useful?
It doesn’t immediately suggest any solutions to things. Anything with a lot of shame-2 attached will be a problem because I won’t learn from my negative experiences and will carry on doing the thing that makes me feel bad. Better models will be needed to debug that properly. But it at least feels like I’ve figured something out that I hadn’t before.