Tutorial 12b: Too far may not be far enough
February 17, 2011
(This is sort of a riff on the recent post, Tutorial 12: How to find problems to work on, which you might want to read first if you haven’t already.)
Something that has been much on my mind lately is the idea that if you don’t go too far, you don’t know how far you should have gone.
I first encountered this idea in a quote from concept artist Iain McCaig, in The Art of Star Wars: Episode I (p. 195):
People ask when I know when to stop scribbling, and decide a work is finished. I say you have to go too far and destroy it, because then you know when you should have stopped and can go back. If you don’t, you leave untold riches out there.
I’m sure McCaig wasn’t the first to formulate the idea, he’s just the medium through which I first learned of it, back in 1999.
Scott Aaronson calls brief statements of this sort “Umeshisms”, after his advisor, Umesh Vazirani, who said,
If you’ve never missed a flight, you’re spending too much time in airports.
A follow-up post, reporting the results of an Umeshism contest among his readers, also has some gems.
For the sake of completeness, I should note the very economical general formulation of this idea from Tagore Smith, in a comment on Mike’s blog that Mike later promoted into a stand-alone post (the full comment, not just this excerpt):
I haven’t gone too far yet so I am not sure if I have gone far enough.
Here’s the larger lesson Aaronson drew from his advisor’s airport quip:
In a single sentence, Umesh was communicating an entire philosophy of life: concentrate on the high-order bits. The squash player who runs back and forth to attempt every shot, the student who’s never late with an assignment, the researcher who stalks an unimportant problem like Captain Ahab: all have succumbed to the tyranny of the low-order bit. They need to realize that, as in a randomized algorithm, occasional failures are the inevitable byproduct of a successful strategy. If you always win, then you’re probably doing something wrong.
One of the reasons this is so much on my mind is that I did an editing pass on a manuscript Mike is working on, and he took some of my suggestions, but not all of them. And I realized that that is probably a good thing; if he’d taken all of my suggestions, it would mean that I had not edited hard enough. And it occurred to me that the Umeshism philosophy can probably be more effectively implemented by two people than by one. One can’t be an iconoclast all the time and still be productive; you have to settle down sometime. Also, two sets of eyes are going to see more ways to push the edge of the envelope.
“The researcher who stalks an unimportant problem like Captain Ahab” is also worth thinking about–specifically, to wonder which among my many concurrently developing projects are high-order bits, and which are not. Mike and I refer to our lists of works-in-progress as POOP, or Prioritized Ordering Of Projects, but we (or at least I) tend to slip into using “priority” to mean, “what am I working on next”, and not, “what should I be working on next”. I have let many projects slip into limbo while pursuing others, and it would be worthwhile to periodically reassess whether I’ve let the right ones slip. I strongly suspect that I haven’t always. I just wrote in Tutorial 12 that any productive researcher is going to die with a mountain of intended work left undone. It is probably not too early for any of us to look at our array of projects and ask, “Which among these most needs rescuing from that mountain?”
The long Aaronson quote above also raises the specter of the costs of catching mistakes, which Paul Graham and Mike have both written essays about. Both basically boil down to “safety is expensive”. And that is sort of what I meant in Tutorial 12 when I wrote, “If you’re not feeling stupid, you’re too comfortable, and it might be time to do an audit and see if you’re actually contributing to science at all.” Feeling stupid–in the scientifically productive way–is a symptom of being out of your safety zone, where you are more likely to learn valuable things and have new ideas. I also argued that you have to sift through a lot of facts and ideas to hit on the handful that might meaningfully become part of your research. Most of the stuff you encounter will not be relevant to whatever it is you’re trying to do, but that’s okay. If all of your ideas seem like good ones, either you’re playing it very safe (and therefore the ideas aren’t actually that good), or you’re having delusions of grandeur.
I remember seeing somewhere–irritatingly, at this point I have no idea where–some guy arguing, maybe half-seriously, that any project is plagued both by errors that one knows about and by other errors or biases of which one is ignorant, and that he therefore always tried to make sure that his known errors were bigger, because that way he was in control (the original was more cleverly and economically phrased).
All of this interests me, because so many forces in our lives conspire to make us afraid of making mistakes, and often even more afraid of admitting to them once they’ve been made. But we all make mistakes, all the time. So what are we going to do about it?
Right now my Gmail sig quote is a line from Clay Shirky:
To put yourself forward as someone good enough to do interesting things is, by definition, to expose yourself to all kinds of negative judgments, and as far as I can tell, the fact that other people get to decide what they think of your behavior leaves only two strategies for not suffering from those judgments: not doing anything, or not caring about the reaction.
The hardest reaction to not care about is your own. Doing good work demands the capacity to take mistakes in stride and keep moving forward. Doing great work might require another level of perspective, in which some kinds of mistakes are just indicators that you’re on the right path.