
One of these days I’m going to figure this out


If something is outside your grasp, it’s hard to know just how far outside it is.

Many times I’ve intended to sit down and understand something thoroughly, and I’ve put it off for years. Maybe it’s a programming language that I just use a few features of, or a book I keep seeing references to. Maybe it’s a theorem that keeps coming up in applications. It’s something I understand enough to get by, but I feel like I’m missing something.

I’ll eventually block off some time to dive into whatever it is, to get to the bottom of things. Then, in a fraction of the time I’ve allocated, I do get to the bottom and find out that I wasn’t that far away. It feels like swimming in water that’s just over your head. Your feet don’t touch bottom, and you don’t try to touch bottom because you don’t know how far away it is, when in fact it was only inches away.

A few years ago I wrote about John Conway’s experience along these lines. He made a schedule for the time he’d spend each week working on an open problem in group theory, and then he solved it the first day. More on his story here. I suspect that having allocated a large amount of time to the problem put him in a mindset where he didn’t need a large amount of time.

I’ve written about this before in the context of simplicity and stress reduction: a little simplicity goes a long way. Making something just a little bit simpler can make an enormous difference. Maybe you only reduce the objective complexity by 10%, but you feel like you’ve reduced it by 50%. Just as you can’t tell how far away you are from understanding something when you’re almost there, you also can’t tell how complicated something really is when you’re overwhelmed. If you can simplify things enough to go from being overwhelmed to not being overwhelmed, that makes all the difference.


A Day in the Life at a Bell Labs Datacenter in the Late 60s


Larry Luckham was a manager at a Bell Labs data center in Oakland in the late 60s and early 70s. One day, he captured daily life at the company with his camera.

[Photos: Bell Labs, 69-70]

Note how many of his coworkers were women, including women of color. From The Secret History of Women in Coding:

A good programmer was concise and elegant and never wasted a word. They were poets of bits. “It was like working logic puzzles — big, complicated logic puzzles,” Wilkes says. “I still have a very picky, precise mind, to a fault. I notice pictures that are crooked on the wall.”

What sort of person possesses that kind of mentality? Back then, it was assumed to be women. They had already played a foundational role in the prehistory of computing: During World War II, women operated some of the first computational machines used for code-breaking at Bletchley Park in Britain. In the United States, by 1960, according to government statistics, more than one in four programmers were women. At M.I.T.’s Lincoln Labs in the 1960s, where Wilkes worked, she recalls that most of those the government categorized as “career programmers” were female. It wasn’t high-status work — yet.

2 public comments

cjheinz: I worked in computing from the mid-70s until retirement in 2012. Particularly in the 80s, it seemed like there were a lot of talented women working in computing. 10-20 years later, not so much :-(

ScottInPDX: High tech, pre-bro culture. Amazing how hard we have to work now to achieve the same level of diversity...

Saturday Morning Breakfast Cereal - Dogs




Hovertext:
I am prepared to be the first investor in a form of social media where you're only allowed to make dog noises at each other.


Today's News:
Read the whole story
superlopuh
248 days ago
reply
Share this story
Delete

Putting Mental Models to Practice Part 6: A Personal Epistemology of Practice


This is the final part of a series of posts on putting mental models to practice. In Part 1 I described my problems with Munger’s prescription to ‘build a latticework of mental models’ in service of better decision making, and then dove into the decision making literature from Parts 2 to 4.

We learnt that there is a difference between judgment and decision making, and that the field of decision making itself is split into classical decision theory and naturalistic methods. Both approaches fundamentally agree that decision-making is a form of ‘search’; however, classical approaches emphasise utility optimisation and rational analysis, whereas naturalistic approaches think satisficing is enough and basically just tell you to go get expertise. In Part 5, we looked at some approaches for building expertise, in particular eliciting the tacit mental models from expert practitioners around you.

Part 6 brings this series to a close with some personal reflections on the epistemology of practice. I promised to write such a treatment in the first part of this series, and here it is. It begins with a simple question and ends with a fairly complicated answer.

When someone gives you advice, how do you evaluate that advice before putting the ideas to practice?

Clearly some evaluation process exists in your head — you don’t accept all the advice you’re given, after all. If your friend Bob tells you to do something about your persistent cough, you’re more likely to listen if he’s an accomplished doctor than if he’s a car mechanic.

Choosing between a doctor and a mechanic is easy, however. In reality, we’re frequently called to evaluate claims to truth from various sources — some of them from well-meaning people, others from consultants we’re willing to pay, and still others from third-hand accounts, or from books written by practitioners wanting to leave some of their knowledge behind.

Could we come up with better rules for when we should listen to advice, and when we should not? The default for many people, after all, is to just go “I trust my gut that she’s correct.” A slightly better answer would be to say “just look at what the science says!”, or “look at the weight of evidence!” but this throws you into a thorny pit of other problems, problems like ‘are you willing to do the work to evaluate a whole body of academic literature?’ and ‘how likely is it that the scientists involved with this subfield were p-hacking their way to tenure with fake results?’ and ‘is this area of science affected by the replication crisis?’ or ‘are contemporary research methods able to measure the effects I’m interested in applying to my life, without confounding variables?’ and ‘does science have anything to say at all with regard to this area of self-improvement?’

These are not easy questions to answer, as we’ll see in a bit. But it does seem like it is worth it to sit down and think through the question; the default of ‘I’ll go with my gut’ doesn’t appear to be very good.

Why epistemology?

First things first, though. I think it’s reasonable to ask what an essay on epistemology is doing in a series about putting mental models to practice. We’ve spent parts 1 to 5 in the pursuit of practical prescriptive models, so why the sudden switch to epistemology — something so theoretical and abstract?

There are two reasons I'm including this piece in the series. First, coming up with a standard for truth keeps me intellectually honest. I wrote in Part 1 that I would provide an epistemic basis for this framework by the time we were done, so that you can hold me to the ideas I have presented here. This is that essay, as promised.

Second, an epistemic basis is important because it separates this framework from mere argumentation. What do I mean by this? Well, just because something sounds convincing doesn’t make it true. As with most things, what makes something true is not the persuasiveness of the argument, or even the plausibility of it, but whether that thing maps to reality. In our context — the context of a framework of practice — what makes this framework true is if it works for you.

This seems a little trite to say, but I think it underpins a pretty important idea. The idea is that rhetoric is powerful, so you can't simply trust the things that you read.

I wonder if many of you have had the experience where you read some non-fiction book and came away absolutely convinced of the argument presented within, only to read some critique of the book years later and come to believe that that critique was absolutely true.

Or perhaps you've read some book and are absolutely convinced, but then realise, months later, that there was a ridiculously big hole in the author’s argument, and good god why hadn't you seen that earlier? I'm not sure if this happens to you, but it happens to me all the time, and it's frequent enough that I'm beginning to think that I'm not good enough to be appropriately critical while reading.

My point is that sufficiently smart authors with sufficiently good rhetorical skills can be pretty damned convincing, and it pays to have some explicitly thought-out epistemology as a mental defence.

You can perhaps see the sort of direction I’m taking already — I’ve spent a lot of time in this series saying variants of ‘let reality be the teacher’. The naive interpretation of this doesn’t work for all scenarios — for instance, it’s unclear that ‘let reality be the teacher’ would work when it comes to matters of economic policy, despite claims to the contrary (you can’t personally test the claims of each faction, and though all sides cite research papers and statistics to back up their positions, you should realise that divining the truth from a set of studies is a lot more difficult than you think.)

But ‘let reality be the teacher’ works pretty well when it comes to practical matters — like self-improvement, say, or when you’re trying to evaluate a framework for putting mental models to practice. It works because you have a much simpler test available to you: you try it to see if it works.

The Epistemology of Scientific Knowledge

Before we dive into the details of putting practical advice to the test, let’s take a look at the ‘gold standard’ for knowledge in our world. For many people the gold standard is scientific knowledge: that is, the things we know about reality that have been tested through the scientific method. The philosophy of science is the branch of academia most concerned with the nature of truth in the scientific method.

(“Oh no,” I hear you think, “He’s going to talk about philosophy — this isn’t going to be very useful, is it?”)

Well, I’ll give you the really short version. There are two big ideas in scientific epistemology that I think are useful to us.

The first idea is about falsification. You’ve probably heard of the story of the black swan: for the longest time, people thought that all swans were white. Then one day Willem de Vlamingh and his expedition went to Australia, found black swans on the shore of the Swan River, and suddenly people realised that this wasn’t true at all.

The philosopher David Hume thus observed: “No amount of observations of white swans can allow the inference that all swans are white, but the observation of a single black swan is sufficient to refute that conclusion.” Great guy, this Hume, who then went on to say that we could never really identify causal relationships in reality, and so what was the point of science anyway?

But what Hume was getting at with that quote was the asymmetry between confirmation and disconfirmation. No amount of confirmation may allow you to conclude that your hypothesis is correct, but a single disconfirmation can disprove your hypothesis with remarkable ease. Therefore, the scientific tradition after Hume, Kant, and Popper has made disconfirmation its core focus; the thinking now goes that you may only attempt to falsify hypotheses, never to prove them.

Of course, in practice we do act as if we have proven certain things. With every failed falsification, certain results become more ‘true’ over time. This is what we mean when we say that all truths in the scientific tradition are ‘conditionally true’ — that is, we regard things as true until we find some remarkable, disconfirming evidence that appears to disprove them. In the meantime, the idea is that scientists get to try their darndest to disprove things.

Let’s take a moment here to restate the preceding paragraph in terms of probabilities. Saying that something is ‘conditionally true’ is to say that we can never be 100% sure of something. Instead, we attempt to disprove our hypotheses, and with each failed disconfirmation, we take our belief in the hypothesis — perhaps we start with 0.6 — and increase it in increments. Eventually, after failing to find disconfirming evidence repeatedly, we say “all swans are either black or white” but our confidence in the statement never hits 1; it hovers, perhaps, around 0.95. If we find a group of orange swans, that belief drops to near 0.

(Hah! I’m kidding, that’s not what happens at all; what happens is that scientists who have invested their entire careers in black and white swans would take to the opinion pages of Nature and argue that the orange birds are not swans and can you believe the young whippersnappers who published this paper!? Science is neat only in theory, rarely in practice).
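Joking aside, the updating story above can be made concrete with a toy sketch in Python, using Bayes’ rule in odds form. The likelihood ratios are invented purely for illustration, but they capture the asymmetry between confirmation and disconfirmation:

    # Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
    # The likelihood ratios below are made-up numbers for illustration only.

    def update(prior, likelihood_ratio):
        """Return the posterior probability after one observation."""
        prior_odds = prior / (1 - prior)
        posterior_odds = prior_odds * likelihood_ratio
        return posterior_odds / (1 + posterior_odds)

    belief = 0.6  # starting confidence in "all swans are black or white"

    # Each survey that fails to turn up a counterexample nudges the belief up.
    for _ in range(20):
        belief = update(belief, likelihood_ratio=1.2)
    print(round(belief, 2))  # ~0.98 -- creeps upward, never reaches 1

    # A flock of orange swans is wildly improbable if the hypothesis is true,
    # so a single disconfirming observation sends the belief crashing down.
    belief = update(belief, likelihood_ratio=0.001)
    print(round(belief, 2))  # ~0.05

The exact numbers don’t matter; the shape does: slow accumulation upward with each failed disconfirmation, and a sudden collapse the moment a genuine counterexample shows up.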

This incomplete sketch of scientific epistemology is sufficient to understand the second idea: that is, the notion that not all scientific studies are created equal. Even a naive understanding of falsifiability will tell us that no single study is indicative of truth; it is only the broad trend, taken over many studies, that can tell us if some given hypothesis is ‘true’.

This notion is most commonly called ‘the hierarchy of evidence’, and it is usually presented in the following form:

[Image: the hierarchy of evidence pyramid]

The pyramid of evidence above is most commonly taught to healthcare professionals, and it ranks the types of studies we may use when we need to make a judgment on some statement — for instance, “does smoking cause lung cancer?” or “is wearing sunscreen bad?” From bottom to top:

  • Editorials and expert opinion.
  • Mechanistic studies (efforts to uncover a mechanism).
  • Case reports and case studies (basically anecdotes).
  • Cross-sectional studies and surveys (“are there patterns in this population?”).
  • Case-control studies (“let’s take two populations and examine patterns retrospectively”).
  • Cohort studies (“let’s take two populations that differ in some potential cause and observe them going forward to see if bad things happen”).
  • Randomised controlled trials (“let’s have a control group and an intervention group and see what happens after we intervene”).
  • And, at the tippy-top, systematic reviews and meta-analyses, which study the results of many studies.

So: ‘is sunscreen harmful?’ If you’re lazy, the answer is to dive into the academic literature for a systematic review, or better yet: a meta-analysis. Meta-analyses in particular are the gold standard for truth in science; these studies are essentially a study of studies — they summarise the results from a broad selection of papers, and weight the evidence by their statistical power. It is possible to do a bad meta-analysis, of course, for the same reasons that null-hypothesis statistical tests may be misused to prop up crap research. But by-and-large, the scientific system is the best method we have for finding the truth.
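As an aside, ‘weight the evidence by their statistical power’ can be made concrete with a toy example. The simplest pooling scheme is fixed-effect, inverse-variance weighting; the effect sizes and standard errors below are invented, and a real meta-analysis would also have to grapple with heterogeneity and publication bias:

    # Toy fixed-effect meta-analysis: pool study estimates, weighting each by
    # the inverse of its variance, so more precise studies count for more.
    # All numbers are invented for illustration.

    studies = [
        # (effect_size, standard_error)
        (0.30, 0.15),   # small study, wide error bars
        (0.22, 0.05),   # large study, tight error bars
        (0.10, 0.20),   # very small study
    ]

    weights = [1 / se ** 2 for _, se in studies]
    pooled = sum(w * effect for (effect, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5

    print(f"pooled effect = {pooled:.2f} +/- {1.96 * pooled_se:.2f}")
    # pooled effect = 0.22 +/- 0.09  -- dominated by the large, precise study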

It’s unfortunate, then, that large bits of it aren’t terribly useful for personal practice.

The Problems with Scientific Research as Applied to Practice

Let’s say that you’re trying to create a hiring program, and you decide to look into the academic literature for pointers. One of the most well replicated results in psychology is the notion that conscientiousness and IQ are good predictors of job performance. So, the answer to your hiring problems is to create an interview program that filters for high conscientiousness and high IQ, right?

Well … no.

This is a pretty terrible idea — and I should know: thanks to my lack of statistical sophistication, I’ve tried. The objections to this are two-fold. First, statistical predictors don’t predict well for a given individual. Second, science often cannot lead us to the best practices for our specific situation, because it is only concerned with effect sizes that are large enough to be detected in a sizeable population.

These two objections are specific to the scenario of hiring, but can also be made more generally when you are applying scientific research to your life. I’ve found that they share certain similarities with the challenge of evaluating expert advice … so let’s examine them in order.

The first problem with applying scientific research to practice is the nature of statistical predictors. Let's talk about IQ. We know that IQ correlates with job performance in a band between 0.45 and 0.58, with the effect being stronger in jobs with higher complexity. And this is IQ we’re talking about, one of the strongest results we can find in all of applied psychology; literally thousands of studies have been done on IQ over the past four decades, with dozens of meta-analyses to pick from.

Can we trust the correlations? Yes. Can we use them to predict individual job performance given an IQ score? No.

Why is this the case? In Against Individual IQ Worries, Scott Alexander explains with a comparison to income inequality:

Consider something like income inequality: kids from rich families are at an advantage in life; kids from poor families are at a disadvantage.

From a research point of view, it’s really important to understand this is true. A scientific establishment in denial that having wealthy parents gave you a leg up in life would be an intellectual disgrace. Knowing that wealth runs in families is vital for even a minimal understanding of society, and anybody forced to deny that for political reasons would end up so hopelessly confused that they might as well just give up on having a coherent world-view.

From a personal point of view, coming from a poor family probably isn’t great but shouldn’t be infinitely discouraging. It doesn’t suggest that some kid should think to herself “I come from a family that only makes $30,000 per year, guess that means I’m doomed to be a failure forever, might as well not even try”. A poor kid is certainly at a disadvantage relative to a rich kid, but probably she knew that already long before any scientist came around to tell her. If she took the scientific study of intergenerational income transmission as something more official and final than her general sense that life was hard – if she obsessively recorded every raise and bonus her parents got on the grounds that it determined her own hope for the future – she would be giving the science more weight than it deserves.

And this is actually a really pragmatic thing to do.

I have a friend in AI research who reacts to IQ studies in exactly this manner; he recognises, intellectually, that IQ is a real thing with real consequences, but he rejects all such studies at a personal level. When it comes to his research work, he assumes that everyone is equally smart and that scientific insight is developed through hard work and skill. And there are all sorts of practical benefits to this: I believe this framing protects him from crippling self-doubt, and it has the added benefit of denying him trite explanations like “oh, of course Frank managed to write that paper, he’s smarter than me.”

The point here is that for many types of questions, science is often interested in what is true in the large — what is true at the population level, for instance — as opposed to what works for the individual. It is pragmatic and totally correct to just accept these studies as true of societal level realities, but toss them out during day-to-day personal development. My way of remembering this is to say “scientists are interested in truth, but practitioners are interested in what is useful.”

A more concrete interpretation is that you should expect to find all sorts of incredibly high performing individuals with lower than average IQs and low conscientiousness scores; the statistics tell us these people are rarer, but then so what? A correlation of 0.58 only explains 34% of the variance. If your hiring program is set up for IQ tests and Big Five personality traits, how many potentially good performers are you leaving on the table? And while we’re on this topic, is your hiring program supposed to look for high scorers in IQ and conscientiousness tests, or is it supposed to be looking for, I don’t know, actual high performers?
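A quick check on the arithmetic in that last paragraph: the share of variance a correlation ‘explains’ is just the square of the correlation coefficient, so even the top of the quoted IQ band leaves about two-thirds of the variance in job performance unaccounted for. A two-line sketch in Python:

    # Variance 'explained' by a correlation is r squared.
    for r in (0.45, 0.58):  # the band quoted above for IQ and job performance
        print(f"r = {r}: explains {r ** 2:.0%} of the variance")
    # r = 0.45: explains 20% of the variance
    # r = 0.58: explains 34% of the variance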

This leads us to our second objection. Because scientific research is interested in what is generally true, most studies don't have the sorts of prescriptions that are specifically useful to you in your field. Or, to apply this to our hiring problem, it’s very likely that thoughtful experimentation would lead you to far better tests than anything you can possibly find in the academic literature.

Here’s an example. One of the things I did when designing my company’s hiring program was to use debugging ability as a first-pass filter for candidates in our hiring pipeline. This arose out of the observation that a programmer with good debugging skills might not be a great programmer, but a programmer with bad debugging skills could never be a good one. This seems obvious when stated retrospectively, but it took us a good amount of time to figure this out, and longer still to realise that a debugging skill assessment would be incredibly useful as a filtering test for candidates at the top of our funnel.

Is our debugging test more indicative of job performance in our specific company than an IQ test or a conscientiousness test? Yes. Is it a proxy for IQ? Maybe! Would we have found it in the academic literature? No.

Why is this the case? You would think that science — of all the forms of knowledge we have available to us — would have the best answers. But it frequently doesn’t, for two reasons. First, as previously mentioned, scientific research often focuses on what is generally true, not on what is prescriptively useful. I say ‘often’ here because the incentives have to align for there to be significant scientific attention on things that you can use — drug research, for instance, or sports science, for another. In such cases, science gives us a wonderful window into usable truths about our world.

But if the stars don’t align — if there is a dearth of attention, or a lack of financial incentives for certain types of research questions — you should expect to see an equivalent void of usable scientific research on the issue. My field of software engineering is one such field: the majority of our software engineering ‘best practices’ are constructed from anecdotes and shared experiences from expert programmers, who collectively contribute their opinions on ‘the right way to do things’ through books, blog posts, and conference talks. (Exercise for the alert reader: what level of the hierarchy of evidence do we software engineers live in?)

The hiring example also illustrates this problem perfectly. When it comes to hiring, all we have are statistical predictors: that is, correlations between a measurement of some trait on the one hand (IQ, conscientiousness, grit) and some proxy for job performance (salary, levelling, peer assessments) on the other. As we’ve seen previously, statistical predictors are good for research purposes but are not very useful at the individual level; what we really want is some kind of intervention study, where a process is developed and then implemented in both an intervention group and a control.

This isn’t the least of it, though. The second reason science is often not useful to the practitioner is that even when there are financial incentives to perform ‘instrumentally useful’ research, there may still be a void of usable recommendations, for the simple reason that science moves relatively slowly.

Here’s an example: I’ve long been interested in burnout prevention, due to a personal tendency to work long hours. Late last year I decided to dive into the academic literature on burnout. Imagine my surprise when I discovered that the literature was only two decades old — and that things were looking up for the field; Buchanan & Considine observed in 2002 that half of Australian nurses leave the profession prematurely, the majority of them due to burnout. In other words, the urgency of finding a solution to burnout is now really, really high, and medical institutions with deep pockets are beginning to push researchers forward. This is the sort of attention and financial incentives that lead to instrumentally useful scientific research — the sorts that you and I can apply directly to our lives.

So, what have they found? After two decades of study, they have developed a test for detecting burnout — the Maslach Burnout Inventory — and they have two developmental models for how burnout emerges and progresses in individuals. (You can read all about it in this ‘state of the field’ summary paper that Maslach published in 2016). What they haven’t discovered is a rigorous system to prevent burnout.

(Some people believe that inverting the developmental models gives us generally useful prescriptions for our workplaces. This is because the developmental models tell us how burnout progresses, which should then give us clues as to arresting that progression. But! Remember the caution of ‘generally useful’ interventions from our earlier discussion, and keep in mind where this lies in the hierarchy of evidence).

What interests me the most, however, is the small branch of the research that focuses on burnout resistance training — that is, the idea that individuals who experience burnout and recover from it develop better resilience to burnout later. I have high hopes for this branch to develop into something useful, but the only way to know is to give it another decade or so. Such is the price for truth.

The point here is that going for thoughtful trial and error in fields where scientific knowledge exists isn't a ridiculous position to take. It's trite to say “oh, I don't see why you can't take what psychologists have figured out and apply it to your practice” — but the answer is now easier to understand: science is interested in what is generally true, and it often doesn't give good prescriptions compared to what you (or others) can derive from observation and experimentation; worse still, if you start out with a model of reality drawn from some science and are convinced that the model is more valid for your unique situation than it actually is, you're likely to hold on to that model for longer than it is useful.

Of course, this is just a really complicated way of saying that some forms of knowledge are best learnt from practitioners — if you want to learn a martial art, you go to a sensei; when you want to learn cooking, you learn from cooks, not food scientists. Don’t start out with epistêmê when it’s technê that you’re interested in learning.

Of course, I don’t mean to say that scientific knowledge isn’t useful for personal practice; I think it’s pretty clear that instrumentally useful research exists, and wherever available we should defer to those results. But I am saying that understanding the nature of truth in scientific knowledge matters, and it doesn’t absolve us of the requirement to test things in our realities. If a doctor prescribes Adderall to you and you find that it puts you to sleep — this doesn’t mean that Adderall is useless or that amphetamines should be reclassified as sleep aids. In fact, you shouldn’t even be that surprised; statistical truth tells us that most people would feel stimulated on Adderall; lived experience reminds us that individual variation is a thing. Even scientifically proven interventions fail to work for some people some of the time.

Evaluating Anecdata

I’d say that the major takeaway at this point is that you never know if something might work until you let reality be the teacher and try it out for yourself. But it’s worth asking, though: if individual variation is this much of an issue when it comes to applying scientific research, how much worse can it get when we’re dealing with expert opinion and anecdotal evidence?

The question is worth asking because most spheres in life require us to work from evidence that’s nowhere near the rigour of science. Consider my example of hiring, above: in the absence of solid science, one obvious move that I can take is to talk to those with experience in the tech industry, in order to pick their brains for possible techniques to use for myself. As it is for hiring, so it is for learning to manage difficult subordinates, for learning to start and run a company, and for learning to use martial arts in real world situations. In situations where you are interested in the how to of things, you often have only two options: first, you can go off and experiment by yourself, or second, you can ask a practitioner for their advice.

So how do you evaluate the advice you’re given? How do you know who to take seriously, and who to discount?

I’d like to propose an alternative hierarchy of evidence for practitioners. When asking for advice, judge the advice according to the following pyramid:

[Image: a proposed hierarchy of evidence for practitioners]

At the very top of the pyramid is advice that you’ve tested in your life. As I’ve repeatedly argued during this series: “let reality be the teacher!” You only truly know that a piece of advice works if you have tested it, in the same way that a doctor only truly knows if a drug works after a patient has started her course of treatment. Before you act, all you can have is a confidence judgment about how likely the intervention is to work.

The second level of the pyramid is advice from practitioners who are believable and who are exposed to negative outcomes. Believability is a technique that I’ve already covered in a previous part in this series: originally proposed by hedge fund manager Ray Dalio in his book Principles, believability is a method for evaluating expertise.

The idea goes as follows — when asking people for advice, apply a suitable weight to their recommendations:

  1. The person must have had at least three successes. This reduces the probability that they are a fluke.
  2. They must have a credible explanation for their approach when probed. This increases the probability that you'll get useful information out of them.

If an expert passes these two requirements, you may consider them ‘believable’. Dalio then suggests a communication protocol built on top of believability: if you’re talking to someone with higher believability, shut up and ask questions; if you’re talking to someone with equal believability, you are allowed to debate; if you’re talking to someone with lower believability, spend the least amount of time hearing them out, on the off chance they have an objection you haven’t considered before; otherwise, just discount their opinions.
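If it helps to see the rule written out, here is a rough sketch of the screen and the communication protocol in Python. The data structure and field names are my own paraphrase of Dalio’s rule, not anything he prescribes in this form:

    # A rough sketch of the believability screen and communication protocol
    # described above. The Advisor fields are a paraphrase, not Dalio's own.

    from dataclasses import dataclass

    @dataclass
    class Advisor:
        name: str
        successes: int              # relevant successes in the domain
        credible_explanation: bool  # can explain their approach when probed

        @property
        def believable(self) -> bool:
            return self.successes >= 3 and self.credible_explanation

    def protocol(me: Advisor, them: Advisor) -> str:
        """How to weight the conversation, per the rule above."""
        if them.believable and not me.believable:
            return "shut up and ask questions"
        if them.believable and me.believable:
            return "debate as equals"
        return "hear them out briefly, then discount"

    me = Advisor("me", successes=1, credible_explanation=True)
    coach = Advisor("competitive swimmer", successes=5, credible_explanation=True)
    print(protocol(me, coach))  # shut up and ask questions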

The other half of this level of the pyramid is exposure to negative outcomes: an expert is more believable if they bear the consequences of being wrong in their domain. This idea is one that I’ve stolen from Nassim Nicholas Taleb’s Skin in the Game — which essentially argues that experts who are exposed to downside risk are more prudent in their judgments and decisions than experts who aren’t.

So, for instance, Singaporean diplomats are nearly all ‘realists’ because they can’t afford to get things wrong in their assessments of the world (the city-state is pretty much screwed if they ever piss off a more powerful neighbour … and all their neighbours are more powerful than they are); in America, officials in the state department can afford to hold more ideologically-motivated ideas around foreign policy. This isn’t a personal observation, mind — I have to admit that I am quite influenced by my friends in Singapore’s Ministry of Foreign Affairs; nevertheless, I have always found the Singaporean perspective of world affairs to be more incisive than that of many other countries.

I’ll leave the full argument for this idea to Taleb, but I will say that the rule seems to hold up in my experience. By total coincidence, in the opening of Principles, Dalio tells the story of being shopped around to see the various central banks in the aftermath of the 2008 financial crisis. Dalio’s fund had developed models that predicted the financial crisis; the economists who ran the central banks did not. An armchair critic projecting Taleb’s pugnacious personality might argue that Dalio was exposed to downside risk, whereas the average central banker wasn’t. Whether this principle holds true at a universal level is an exercise for the alert reader.

The third level, below advice from people who are both believable and exposed to downside risk, are people who are ‘merely’ believable. Expert opinion is still better than non-expert opinion, and we should seek it out whenever possible. I think this is as good a time as any to discuss one of the biggest objections people have with Dalio’s believability.

When I tell people about Dalio’s believability metric, they often bristle at the idea that one should discount the opinions of lower-believability people. “Isn’t that just ad-hominem?” they say. “An opinion or argument should be evaluated by its own merits, not on how credible the person making it is.”

This is, I think, the most counter-intuitive implication of Dalio’s believability rule. Traditionally, we are taught that good debate should not take into account the person making an argument; any argument that is of the form “Person is X, therefore his argument is wrong” is bad because it commits the ad-hominem fallacy. Instead, a ‘good’ counter-argument should attack either the logical structure of the argument or its premises.

But then consider the common-sense scenario where you’re asking friends for advice. Let’s say that you want some pointers on swimming, and you go to three friends. Which of the following friends would you pay more attention to: Tom, who is a competitive swimmer, Jasmine, who is a casual swimmer, or Ben, who does not know how to swim? It’s likely that you’ll pay special attention to Tom and Jasmine, but ignore (or heavily discount) whatever Ben says.

Credibility counts when it comes to practical matters. Just because Ben makes a convincing and rhetorically-compelling argument doesn’t change the fact that he hasn’t tested it against reality. Don’t get me wrong: I’m not saying that Ben is certainly mistaken — he could be right, for all we know. But it’s just as likely that he’s wrong, and if you’re like most practitioners, you don’t have a lot of time to test the assertions that he makes. The common-sense approach is to go with whoever seems more credible, along with the assumption that it still might not work for you; we could say here that you’re applying a probability rating to each piece of advice, where the rating is tied to the believability of the person giving said advice.

When I began writing this essay I didn’t expect to defend ad-hominem as a second-order implication of Dalio’s believability. But then I realised: this isn’t about argumentation — this is about figuring out what works. You don’t necessarily have to debate anyone; you can simply apply this rule in your head in place of whatever gut-level intuition you currently use. And it’s probably worth a reminder that you don’t have to make a black-or-white assessment when it comes to low-believability advice. The protocol that Dalio prescribes asks that you ‘do the bare minimum to evaluate what they have to say … on the off-chance that they have an objection you’ve not considered before’. Or, to phrase that in Bayesian terms: keep the objection in mind, but apply a low confidence rating to it.

Why does believability work? I think it works because argumentation alone cannot determine truth. This is a corollary to the observation at the beginning of this essay that ‘rhetoric is powerful’ — and I think some form of this is clear to those of us who have had to live with decisions at an organisational level. Consider: have you ever been in a situation where there were multiple, equally valid, compellingly plausible options — and it wasn’t clear which option was best? In many organisations, the most effective answer at that point isn’t to debate endlessly; instead, it’s far better to agree on a test that can falsify one argument or the other, and then run it to see which survives reality.

The logical conclusion from “you can’t evaluate advice by argument alone” is that you’ll have to use a different metric to weight the validity of the argument. The best test is a check against reality. The next best test is to look for proxies of checking against reality — such as a track record of acting in the given domain. This is, fundamentally, why believability works — it acts as a proxy for reality, in the form of “has this person actually checked?” And if not, it’s probably okay to discount their opinion.

Lower Levels of the Pyramid

Below believability we get to sketchier territory. The fourth level of my proposed hierarchy of evidence is advice from people who have actually tried the advice in question. This isn't as good as “believable expert who has succeeded in domain”, but it's still better than “random shmuck who writes about self-help that he hasn't tried”.

Advice from this level of practitioner is still useful because you may now compare implementation notes with each other. A person who has attempted to put some knowledge into practice is likely to also have some insight into the challenges of implementing aforementioned knowledge. These notes are useful, in the same way that case studies are useful — they provide a record of what has worked, and in what context.

There’s also one added benefit to studying advice from this level of practitioner: you may check against the person's results if you have no time to implement such an intervention yourself. It doesn’t cost you much to circle back to a self-help blogger or person and ping them: “hey, your experiment with deliberate practice — how did it go?” I’ve occasionally found it worthwhile to schedule 15-minute Skype calls with willing practitioners to probe them for the results of their experience.

The last and final rung on my proposed hierarchy is ‘plausible argument’. This is the lowest-level form of evidence, because — as I’ve argued before — the persuasiveness of an argument should not affect your judgment of the argument’s actual truth.

Some friends have pointed out to me that the structure of an argument should at least be logically valid — that is, that the argument should be free of argumentative fallacies, and have a propositionally valid argumentation form. If an argument fails even this basic test, surely it cannot be correct?

I think there’s some merit to this view, but I also think that there’s relatively little one can gain from studying the internal consistency of an argument. To state this differently, I think that it is almost always better to run a test for some given advice — if such a cheap test exists! — compared to endlessly ruminating about its potential usefulness.

Luck and Other Confounding Variables

At this point you're probably ready to leap out of your chair to point out: “What’s the use of relying on this hierarchy of evidence? The expert that you are seeking advice from might just have been lucky!”

Yes, and luck is a valid objection! Confounding variables like luck are one of the biggest problems we face when operating at the level of anecdotal evidence. We don’t have the rigour that comes with the scientific method, where we can isolate variables from each other.

(Dealing with luck is also one reason Dalio’s believability standard calls for three successes, to reduce the probability that the practitioner is a fluke.)

But why stop at luck? Luck isn't the only confounding variable when it comes to anecdata, after all. There's also:

  • Genetics. The expert could have certain genetic differences that make it easier for them to do what they do.
  • Cultural environment. The expert could be giving you advice that only works in their culture — organisational or otherwise.
  • Prerequisite sub-skills. ‘The curse of expertise’ is what happens when an expert practitioner forgets what it's like to be a novice, and gives advice that can't work without a necessary set of sub-skills — skills that the expert mastered too long ago to remember.
  • Context-specific differences. The expert could be operating in a completely different context — for instance, advice from someone operating in stock picking might not apply cleanly to those running a business.
  • External advantages. Reputation, network, and so on.

Let’s say that you are given some advice by a believable person with exposure to downside effects — which is the second highest level of credibility in my proposed hierarchy of practical evidence. You attempt to put the advice to practice, but you find that it doesn’t work. What do you conclude?

The naive view is to conclude that the advice is flawed, the expert is not believable, or that some confounding variable might have gotten in the way. For instance, you might say “oh, that worked for Bill Gates, but it’s never going to work for me — Gates got lucky.”

What is a practitioner to do in the face of so many confounding variables? Should you just throw your hands in the air and say that there’s absolutely no way to know if a given piece of advice is useful? Should you just give up on advice in general?

Well, no, of course not! There are far better moves available to you, and I want to spend the remainder of this essay arguing that this is the case.

One of the ideas that I’ve sort of snuck into this essay is the notion of applying a probability rating to some statement of belief. For instance, in the segment about falsifiability, earlier in the essay, I mentioned that we could take our belief in a hypothesis (with swan colours, I said that perhaps we start with 0.6) and then increase or decrease that belief as we gather more evidence. Some people call this activity ‘Bayesian updating’, and I’d like to suggest that we can adapt this to our practical experimentation.

Here’s a recent example by way of demonstration: a few months back I summarised Cal Newport’s Deep Work, and started systematically applying the ideas from his book to my life. I’ve found Newport’s method of ‘taking breaks from focus’ to be particularly difficult to implement — I would try it for one or two days, and then regress to where I was before.

I started the experiment with the notion that Newport was believable — after all, he mentioned that he used the techniques in his book to achieve tenure at a relatively young age. My estimation of the technique working for me began at around 0.8.

After putting his technique to the test and failing at getting it to work, I sat back to consider the confounding variables:

  • Luck: was Newport lucky? I didn’t think so. Luck has little to do with the applicability of this technique.
  • Genetics: could Newport have genetic advantages that allowed him to focus for longer? This is plausible. Thanks to the power of twin studies, we know that there is a genetic basis for self-control — around 60% of the variance if we take this meta-analysis as evidence.
  • Prerequisite sub-skills: could Newport have built pre-requisite sub-skills? This is also plausible. Newport has had a long history of developing his ability to focus in his previous life as a postdoc researcher at MIT. There may be certain intermediate practices or habits that I would have to cultivate in order to attempt his technique successfully.
  • Context-specific differences: Newport could also benefit from his work environment. He has said that an ability to perform Deep Work is what sets good academics apart from their peers. This might provide him with a motivational tailwind that others might not possess.
  • External advantages: I can’t think of any external advantages Newport might have deployed in service of this technique.

It’s important to note here that I am not committing to any one reason. There are too many confounding variables to consider, so I’m not attempting to do more than generate different plausible explanations. These plausible explanations will each have a confidence rating attached to them; as I continue to adapt the technique to my unique circumstances, my intention is to update my probability estimate for each explanation accordingly. These explanations exist as potential hypotheses — I am in essence asking the question: “why isn't this working for me, and what must I change before it does?”

Regardless of my success, I will never know for sure why Newport’s technique works for him and not for me. I will only ever have suspicions … measured by those probability judgments. This is a fairly important point to consider: as a practitioner, I am often only interested in what works for me. Rarely will I be interested in some larger truth like why some technique works for one person but not for another. This is, I think, a point in favour of a personal epistemology: the standards for truth are lower when you’re dealing with effects on a sample size of one.

If this is a form of Bayesian updating, when does the updating occur? The answer is that it occurs during the application. In my attempt to apply Newport’s technique, I’ve gained an important piece of information: I now know that without modification, Newport’s ‘breaks from focus’ advice is unlikely to work for me. I would have to modify his advice pretty substantially. The update is negative — my overall confidence in this particular technique is now down to 0.7.

The path forward is clear: I may attempt to continue experimenting with Newport’s technique — or I may shelve this piece of advice when my confidence dips below … 0.5, say. There are many variations to consider before I hit that level, though. I could attempt to build some easier self-control sub-skills first. Or I could attempt to meditate to grow my ability to focus. I could clear my workspace of distractions, or attempt to pair Newport’s technique with a pomodoro timer. And even if I fail and shelve Newport’s technique, I could still stumble upon a successful variation by some other practitioner years from now, and decide to take the technique down from my mental shelf to have another go at it.
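For what it’s worth, the book-keeping I’m describing is simple enough to write down. The sketch below is just my own way of recording the loop, using the numbers from this example (start at 0.8, shelve below 0.5) and a crude fixed-size nudge in place of a proper Bayesian update:

    # A sketch of the 'mental shelf': each technique carries a confidence
    # rating that gets nudged after every honest trial, and drops off the
    # active list once it falls below a threshold. The step size and the
    # threshold are illustrative choices, not a real Bayesian update.

    SHELVE_BELOW = 0.5

    techniques = {"Newport: breaks from focus": 0.8}

    def record_trial(name: str, worked: bool, step: float = 0.1) -> None:
        delta = step if worked else -step
        techniques[name] = round(min(0.99, max(0.01, techniques[name] + delta)), 2)

    record_trial("Newport: breaks from focus", worked=False)  # 0.8 -> 0.7

    active = {t: c for t, c in techniques.items() if c >= SHELVE_BELOW}
    shelved = {t: c for t, c in techniques.items() if c < SHELVE_BELOW}
    print(active)   # {'Newport: breaks from focus': 0.7} -- keep iterating
    print(shelved)  # {} -- nothing shelved yet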

The point of this example is to demonstrate that confounding variables are a normal thing we have to grapple with as practitioners. We don’t have the luxury of the scientific method, or the clarity of more rigorous forms of knowledge. We have only our hunches, which we update through trial and error. But even hunches, properly calibrated, can be useful.

Fin

I have presented an epistemology that has guided my practice for the past seven years or so. I’ve found it personally useful, and I believe much of it is common sense made explicit. But I also know that this epistemology isn’t at all finished; it is merely the first time that I’ve attempted to articulate it in a single essay.

To summarise:

  • Let reality be the teacher. This applies for both scientific knowledge and anecdotal evidence.
  • When spelunking in the research literature, keep in mind that science is interested in what is true, not necessarily what is useful to you.
  • When evaluating anecdata, weight advice according to a hierarchy of practical evidence.
  • When testing advice against reality, use some form of Bayesian updating while iterating, in order to filter out the confounding variables that are inherent in any case study.

If I were to compress this framework for putting mental models to practice into a single motivating statement, I would say that the entire framework can be reconstructed through a personal pursuit of the truth — and that in the context of practice, this truth takes the form of the question: “What can I do to make me better? What is it that works for me?”

Now perhaps you’ve noticed the meta aspect of this epistemology.

What happens if we apply the standard of truth that I’ve developed in this essay to the very series in which I’ve chosen to publish it?

The answer is this: my framework should not be very convincing to you. I am not believable in any of the domains that I currently practice in: I have built two successful organisations of under 50 people each; I’ve had only one business success. I am at least a decade away from becoming believable at the level of success that Dalio demands.

What I can promise you, however, is that everything in this series — with one exception* — has been tested in personal practice. I currently apply this epistemology of practice to my own life. I spent a few ‘misguided’ years on epistemic rationality training, of the sorts recommended by LessWrong. I still sometimes feel the itch to do rational choice analysis — even though I know that such analysis works best in irregular domains. And I have spent the last three years in the pursuit of tacit mental models of expertise.

(*The only exception is the critical decision method, covered in Part 5. As of writing, I've only had two months of experience with the method.)

I’ve spent a great deal of time on the rhetoric of this series. I’ve used narrative to propel the reader through certain segments where I’ve had to tackle abstract ideas, and I have attempted to summarise the least controversial, most established findings of the judgment and decision making literature upon which rationality research sits. But you should not believe a single word that I have written. In fact, I would go further and suggest that your degree of belief should be informed mostly by that which you have tested against reality. In Bruce Lee’s words: absorb what is useful, discard what is useless and add what is specifically your own. In Bayesian terms: everything I say should be regarded as an assertion with a confidence rating of far less than 1.

Perhaps this is taking it too far. But then again, perhaps Hume had a point. There is ultimately no truth except that which you uncover for yourself.

I hope you’ve found this series useful.


Book Review: Zero To One


I.

Zero To One might be the first best-selling business book based on a Tumblr. Stanford student Blake Masters took Peter Thiel’s class on startups. He posted his notes on Tumblr after each lecture. They became a minor sensation. Thiel asked if he wanted to make them into a book together. He did.

The title comes from Thiel’s metaphor that ordinary businessmen like restaurant owners take a product “from 1 to n” (shouldn’t this be from n to n+1?) – they build more of something that already exists. But the greatest entrepreneurs bring something “from 0 to 1” – they invent something that has never been seen before.

The book has various pieces of advice for such entrepreneurs. Three sections especially struck me: on monopolies, on secrets, and on indefinite optimism.

II.

A short review can’t fully do justice to the book’s treatment of monopolies. Gwern’s look at commoditizing your complement almost does (as do some tweets). But the basic economic argument goes like this: In a normal industry (eg restaurant ownership) competition should drive profit margins close to zero. Want to open an Indian restaurant in Mountain View? There will be another on the same street, and two more just down the way. If you automate every process that can be automated, mercilessly pursue efficiency, and work yourself and your employees to the bone – then you can just barely compete on price. You can earn enough money to live, and to not immediately give up in disgust and go into another line of business (after all, if you didn’t earn that much, your competitors would already have given up in disgust and gone into another line of business, and your task would be easier). But the average Indian restaurant is in an economic state of nature, and its life will be nasty, brutish, and short.

This was the promise of the classical economists: capitalism will optimize for consumer convenience, while keeping businesses themselves lean and hungry. And it was Marx’s warning: businesses will compete so viciously that nobody will get any money, and eventually even the capitalists themselves will long for something better. Neither the promise nor the warning has been borne out: business owners are often comfortable and sometimes rich. Why? Because they’ve escaped competition and become at least a little monopoly-like. Thiel says this is what entrepreneurs should be aiming for.

He hates having to describe how businesses succeed, because he thinks it’s too anti-inductive to reduce to a formula:

Tolstoy opens Anna Karenina by observing “All happy families are alike; each unhappy family is unhappy in its own way.” Business is the opposite. All happy companies are different: each one earns a monopoly by solving a unique problem. All failed companies are the same: they failed to escape competition.

But he grudgingly describes four ways that a company can successfully reach monopolyhood:

1. Proprietary technology. This one is straightforward. If you invent the best technology, and then you patent it, nobody else can compete with you. Thiel provocatively says that your technology must be 10x better than anyone else’s to have a chance of working. If you’re only twice as good, you’re still competing. You may have a slight competitive advantage, but you’re still competing and your life will be nasty and brutish and so on just like every other company’s. Nobody has any memory of whether Lycos’ search engine was a little better than AltaVista’s or vice versa; everybody remembers that Google’s search engine was orders of magnitude above either. Lycos and AltaVista competed; Google took over the space and became a monopoly.

2. Network effects. Immortalized by Facebook. It doesn’t matter if someone invents a social network with more features than Facebook. Facebook will be better than theirs just by having all your friends on it. Network effects are hard because no business will have them when it first starts. Thiel answers that businesses should aim to be monopolies from the very beginning – they should start by monopolizing a tiny market, then move up. Facebook started by monopolizing the pool of Harvard students. Then it scaled up to the pool of all college students. Now it’s scaled up to the whole world, and everyone suspects Zuckerberg has somebody working on ansible technology so he can monopolize the Virgo Supercluster. Similarly, Amazon started out as a bookstore, gained a near-monopoly on books, and used all of the money and infrastructure and distribution it won from that effort to feed its effort to monopolize everything else. Thiel describes how his own company PayPal identified eBay power sellers as its first market, became indispensable in that tiny pool, and spread from there.

3. Economies of scale. Also pretty straightforward, and especially obvious for software companies. Since the marginal cost of a unit of software is near-zero, your cost per unit is the cost of building the software divided by the number of customers. If you have twice as many customers as your nearest competitor, you can charge half as much money (or make twice as much profit), and so keep gathering more customers in a virtuous cycle. (A back-of-the-envelope version of this arithmetic follows item 4 below.)

4. Branding. Apple is famous enough that it can charge more for its phones than Amalgamated Cell Phones Inc, even for comparable products. Partly this is because non-experts don’t know how to compare cell phones, and might not trust Consumer Reports style evaluations; Apple’s reputation is an unfakeable sign that their products are pretty good. And partly it’s just people paying extra for the right to say “I have an iPhone, so I’m cooler than you”. Another company that wants Apple’s reputation would need years of successful advertising and immense good luck, so Apple’s brand separates it from the competition and from the economic state of nature.
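Going back to the economies-of-scale arithmetic in (3), the back-of-the-envelope version, with invented numbers, looks like this:

    # Back-of-the-envelope economies of scale for software: near-zero marginal
    # cost means per-unit cost is just the fixed build cost spread over the
    # customer base. All numbers are invented for illustration.

    fixed_cost = 10_000_000  # cost of building the software
    for customers in (100_000, 200_000, 400_000):
        print(f"{customers:>9,} customers -> ${fixed_cost / customers:,.0f} per customer")
    # 100,000 customers -> $100 per customer
    # 200,000 customers -> $50 per customer (twice the customers, half the cost)
    # 400,000 customers -> $25 per customer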

Thiel continues with various counterintuitive pieces of wisdom. Don’t try to “disrupt” your field – if you’re “disrupting” someone, it means you’re competing with them, and making enemies who will try to hold you back. Don’t try to be the “first mover” (Yahoo was the first mover in the search engine space); instead, try to be the “last mover” whom nobody is able to supplant. Etc., etc. Just try to get a monopoly or something like it.

Is all of this a plot against the public? Monopolies are usually viewed as cheating the system and preventing progress; is Thiel promoting that behavior to the detriment of society? Well, obviously he says he isn’t:

The problem with a competitive business goes beyond lack of profits. Imagine you’re running one of those restaurants in Mountain View. You’re not that different from dozens of your competitors, so you’ve got to fight hard to survive. If you offer affordable food with low margins, you can probably pay employees only minimum wage. And you’ll need to squeeze out every efficiency: that’s why small restaurants put Grandma to work at the register and make the kids wash dishes in the back. Restaurants aren’t much better even at the very highest rungs, where reviews and ratings like Michelin’s star system enforce a culture of intense competition that can drive chefs crazy. (French chef and winner of three Michelin stars Bernard Loiseau was quoted as saying, “If I lose a star, I will commit suicide.” Michelin maintained his rating, but Loiseau killed himself anyway in 2003 when a competing French dining guide downgraded his restaurant.) The competitive ecosystem pushes people toward ruthlessness or death.

A monopoly like Google is different. Since it doesn’t have to worry about competing with anyone, it has wider latitude to care about its workers, its products, and its impact on the wider world. Google’s motto — “Don’t be evil” — is in part a branding ploy, but it’s also characteristic of a kind of business that’s successful enough to take ethics seriously without jeopardizing its own existence. In business, money is either an important thing or it is everything. Monopolists can afford to think about things other than making money; non-monopolists can’t. In perfect competition, a business is so focused on today’s margins that it can’t possibly plan for a long-term future. Only one thing can allow a business to transcend the daily brute struggle for survival: monopoly profits.

So monopolies’ advantages include being better for employees, more socially responsible, and able to engage in long-term thinking. The classic examples of this (which I don’t think Thiel brought up) are Bell Labs and Xerox PARC. Two monopolistic companies with more money than they knew what to do with started super-basic-blue-sky research centers that ended up creating many of the technologies that shaped the modern world (Bell Labs, started by AT&T, helped invent the transistor, the laser, information theory, UNIX, C, C++, radio astronomy, etc; PARC, started by Xerox, helped invent Ethernet, laser printing, the personal computer, graphical user interfaces, object-oriented programming, bitmaps, and the LCD.) Google X wants to be the modern version of this kind of thing, though I don’t know how much success they’ve had so far.

On the other hand, all of the classical disadvantages of monopolies are still there. Monopolies remove the pressure to do a good job – whether that’s in keeping prices low, keeping working conditions tolerable, or in keeping products and services high-quality. They lower the diversity of an industry, making it more likely to get stuck in an evolutionary blind alley it can’t get out of; they increase the risk of merging with government into crony capitalism. A wolf sheltered from survival-of-the-fittest for too long becomes a Chihuahua; an Amazon sheltered from survival-of-the-fittest for too long becomes the DMV.

(also, isn’t Thiel the guy who wanted floating independent seasteads because competitive governance would break the monopoly of existing nation-states and lead to a revolutionary improvement in institutional capacity? Doesn’t that suggest even he acknowledges monopolies are often bad?)

I don’t think this is one of those issues that’s going to get decisively solved in a few paragraphs. Moloch and Slack are the new yin and yang, the new chaos and order; their interplay creates the Ten Thousand Things. Err too far towards competition and everyone works themselves to death in garment sweatshops; err too far towards monopoly and everyone sits at a desk filling out forms and backstabbing each other until the lights slowly go out. It’s only in the collision zone between the two that anything interesting ever happens.

One could rescue Thiel’s position by assuming that competition will always be with us. Google’s pretty monopoly-like, but even they can’t rest on their laurels too long. It’s not just that Bing might take over, but that advertisers might get better non-search-engine ways to place ads, Facebook might come up with better ways to target ads, some alternate platform like cell phones or VR might take over from classic Internet searches, or something else. Whatever their concern, real-life Google sure does seem to put a lot of effort into being competitive. So sure, maybe one has to find the sweet spot between perfect competition and perfect monopoly, but one could argue that right now only the most monopolistic companies are near that sweet spot.

III.

The rest of Zero To One becomes less directly about the startup world, and more about deep social trends that good startup founders will have to buck. One such trend – which Thiel approaches in a lot of different equivalent ways – is the loss of belief in secrets. People no longer believe that there are important things that they don’t know, but which they could discover if they tried a little harder.

Past scientific discoveries came from a belief in secrets. Isaac Newton wondered why apples fell, thought “Maybe if I work really hard on this problem, I can discover something nobody has ever learned before”, and then set out to do it. Modern people aren’t just less likely to think this way. They’re actively discouraged from it by a culture which mocks stories like Newton’s as “the myth of the lone genius”. Nowadays people get told that if they think they’ve figured out something about gravity, they’re probably a crackpot. Instead, they should wait for very large government-funded programs full of well-credentialled people to make incremental advances.

Good startups require a belief in secrets, where “secret” is equivalent to “violation of the efficient market hypothesis”. You believe you’ve discovered something that nobody else has: for example, that if you set up an online bookstore in such-and-such a way today, in thirty years you’ll be richer than God. This is an outrageously arrogant claim: that you have spotted a hundred-billion-dollar bill lying on the sidewalk that everyone else has missed. But only people who believe something like it can noncoincidentally found great companies. You must believe there are lucrative secrets hidden in plain sight.

Thiel relates this to the decline of cults (see these two essays fleshing out the phenomenon). Although cults may not be desirable, they are the failure mode of individuals and small groups trying to throw off conventional wisdom and discover profound new ways of looking at the world. Have we lost our cults because we no longer fail at this task, or because we no longer attempt it at all?

Belief in secrets is connected to belief in one’s own reasoning abilities. Modern conventional wisdom says armchair reasoning never works; any idea you prove true in your head is useless until it’s been exhaustively tested in real life, and you’re more likely to get some other (true) idea out of the exhaustive testing than to validate your armchair speculation. As a corollary, the more steps in your argument, the less likely its conclusion is to be right, since the small chance of error at each step compounds across the whole chain. Since your armchair reasoning is useless, you are unlikely to ever discover a secret (except perhaps by chance, if you randomly do experiments no one else has ever done). The only thing that might not be useless is large institutions working together to gradually advance knowledge with lots of testing, effectively buying many lottery tickets in the hope that one will pay off.
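
To put a number on that corollary (a sketch with assumed figures, not anything from the book): if each step of an argument is independently right with probability p, the whole chain holds with probability p^n, which decays fast as the steps pile up.

    # Chance an n-step chain of reasoning is entirely correct, if each step
    # is independently right with probability p: p ** n.
    def chain_reliability(p, n):
        return p ** n

    for p in (0.95, 0.9, 0.8):
        print(p, [round(chain_reliability(p, n), 3) for n in (1, 3, 5, 10, 20)])
    # p=0.95 gives roughly [0.95, 0.857, 0.774, 0.599, 0.358];
    # p=0.8 is already down to about 0.107 by ten steps and 0.012 by twenty.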

The modern skepticism about secrets and reasoning implies a similar skepticism about planning. If the argument against multi-step reasoning is right, then a mildly Internet-famous scene from Harry Potter And The Methods Of Rationality is right too:

Father had told Draco about the Rule of Three, which was that any plot which required more than three different things to happen would never work in real life. Father had further explained that since only a fool would attempt a plot that was as complicated as possible, the real limit was two.

This disbelief in planning suggests, not a strategy, but a sort of meta-strategy. Do something vaguely in the space of what you want to do, don’t commit yourself to a specific plan, watch what happens, iterate, keep your options open at all times, and be prepared to pivot quickly once you know more.

But Thiel says the most successful visionaries of the past did the opposite of this. They knew what they wanted, planned a strategy, and achieved it. The Apollo Program wasn’t run by vague optimism and “keeping your options open”. It was run by some people who wanted to land on the moon, planned out how to make that happen, and followed the plan. Not slavishly, and certainly they were responsive to evidence that they should change tactics on specific points. But they had a firm vision of the goal in their minds, an approximate vision of what steps they would take to achieve it, and a belief that achieving an ambitious long-term plan was the sort of thing that people could be expected to do. And great startups like SpaceX are much the same. Elon Musk started with an n-step plan to get to Mars, and he’s currently about halfway through.

He gives one particularly striking example of the past’s attitude to secrets and planning:

Bold plans were not reserved just for political leaders or government scientists. In the late 1940s, a Californian named John Reber set out to reinvent the physical geography of the whole San Francisco Bay Area. Reber was a schoolteacher, an amateur theater producer, and a self-taught engineer. Undaunted by his lack of credentials, he publicly proposed to build two huge dams in the Bay, construct massive freshwater lakes for drinking water and irrigation, and reclaim 20,000 acres of land for development. Even though he had no personal authority, people took the Reber Plan seriously. It was endorsed by newspaper editorial boards across California. The U.S. Congress held hearings on its feasibility. The Army Corps of Engineers even constructed a 1.5-acre scale model of the Bay in a cavernous Sausalito warehouse to simulate it. These tests revealed technical shortcomings, so the plan wasn’t executed.

But would anybody today take such a vision seriously in the first place? In the 1950s, people welcomed big plans and asked whether they would work. Today a grand plan coming from a schoolteacher would be dismissed as crankery, and a long-range vision coming from anyone more powerful would be derided as hubris. You can still visit the Bay Model in that Sausalito warehouse, but today it’s just a tourist attraction: big plans for the future have become archaic curiosities.

This is a fascinating story (and remember that early San Francisco was settled by New England Puritans; there’s something super-Puritan about all this, right down to it being a schoolteacher) and does a great job of highlighting the contrast between past and present attitudes.

But how much of a flaw is it that the Reber Plan would in fact not have worked? Suppose a thousand entrepreneurs try to create exciting long-term plans for their businesses, each of which requires guessing ten binary variables in advance. And suppose the vague-ists are right, nobody can do armchair reasoning or long-term planning, and all of their guesses are random. Ten binary variables means 2^10 = 1,024 possible combinations, so by chance about one of the thousand entrepreneurs will get all ten variables right, his plan will go perfectly, and he’ll become a multi-billionaire and land a rocket on Mars. He will be the only person we ever hear about and the only person who ever becomes a stock example, and it will look like “Wow, multi-step reasoning and long-range planning can work well after all!”
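
A quick sanity check on that survivorship story, assuming each of the ten guesses is a fair coin flip (the numbers are mine, not the book’s):

    # Each random planner nails all ten binary guesses with probability 2^-10.
    p_perfect = 0.5 ** 10
    expected_winners = 1000 * p_perfect              # expected "visionaries" per 1,000 founders
    p_at_least_one = 1 - (1 - p_perfect) ** 1000     # chance the cohort produces at least one
    print(p_perfect, expected_winners, p_at_least_one)
    # ~0.00098, ~0.98, ~0.62 -- about one lucky perfect plan per thousand, purely by chance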

This is the proper canned response that the conformist parts of my mind generated after three seconds. But is it true? Elon Musk has founded at least three super-successful companies that have executed decade-long plans; lightning shouldn’t strike the same place twice. Newton didn’t just discover gravity, he discovered optics, calculus, the laws of motion, and [insert ten page list of other things Newton discovered here].

And, uh, Thiel compares these sorts of long-term plans to “conspiracies”. And he himself is implicated in a conspiracy – his successful destruction of Gawker is now the subject of a book titled Conspiracy: Peter Thiel, Hulk Hogan, Gawker, And The Anatomy Of Intrigue. His Gawker accomplishment was exactly the kind of ambitious long-range multi-step plan he describes as possible in the book – and the book was written long before it bore fruit.

Some of the smartest people I know say that Thiel’s endorsement of Donald Trump was the same sort of complicated plot. Thiel endorsed Trump at a time when no other famous intellectual would touch him. Trump won and followed a spoils strategy of rewarding his early supporters (like how he made Jeff Sessions the Attorney General), and Thiel got 100x the influence he would have if he’d had to fight against every other important person for prestige in the Clinton administration. I am still trying to figure out what happened with this – my impression is that for about a month after Trump won, he was doing a lot of things that bore Thiel’s fingerprints, and after that he didn’t. Either there was some kind of early break between the two of them, or Thiel decided to operate very quietly – a few hints of leaked information suggest the latter. If he’s still involved, this is an even stronger example than Gawker.

I’m bringing these things up because once you write a book saying “Hey guys, conspiracies are totally doable and often successful”, and then a few years later you succeed at multiple ambitious conspiracies that destroy your enemies and give you vast national influence, I think you are allowed to say that this is possibly something other than coincidence and survivorship bias.

But then it equally becomes fair to say that Peter Thiel is a billionaire CEO Stanford professor chess master, and Elon Musk is, well, Elon Musk. Both may be better at planning than the average person. Suppose you have a ten-step plan. And suppose you’re good enough at planning that you have a 90% chance to carry out each step. That means a 35% chance of all ten steps going without a hitch; start three companies or Gawker-destruction plans, and one will succeed. Now suppose someone only a little worse at planning – 70% success rate per step – tries the same thing. Now their per-plan chance of destroying Gawker is less than 3%. In the real world, where there’s more variance between plan steps, I think this becomes even more pronounced.
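
Those figures are just 0.9^10 and 0.7^10; a small sketch (my numbers, same assumptions as the paragraph above) shows how modest differences in per-step reliability compound into very different numbers of attempts needed per success:

    # Chance a ten-step plan goes off without a hitch, and the expected number
    # of attempts before one succeeds, for different per-step success rates.
    for per_step in (0.9, 0.8, 0.7):
        p_plan = per_step ** 10
        print(f"per-step {per_step:.0%}: whole plan {p_plan:.1%}, ~{1 / p_plan:.0f} attempts per success")
    # per-step 90%: whole plan 34.9%, ~3 attempts per success
    # per-step 80%: whole plan 10.7%, ~9 attempts per success
    # per-step 70%: whole plan 2.8%, ~35 attempts per success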

On the other hand, each successful SpaceX or Gawker-elimination-plan brings huge benefits to the world. We are stuck in the awkward position where a heuristic of “Go ahead, think big” will be inappropriate for and predictably bankrupt the vast majority of people, but a heuristic of “Think small and don’t trust yourself” will create a world of sub-par rockets and tragically un-destroyed gossip rags. Which do we choose? This is probably the wrong question; nobody controls the heuristic supply, and the one that works for most people will catch on. Under this model, Thiel is doing the public service of saying “Hey, if you’re a smart person, then despite what everybody says this whole ‘make big plans’ thing might actually work for you.”

(good thing everybody has an accurate, undistorted estimate of whether they are a smart person or not!)

I really liked this part of the book. When every intelligent person you trust is pushing one heuristic, it can be really refreshing to hear someone else intelligent and successful say exactly the opposite. Not even prove the opposite – I don’t think Thiel makes all that strong a case in this book – just say “Hey, think about the fact that this piece of conventional wisdom might be totally wrong”. This is almost the first time I’ve heard this said about the “don’t make complex multi-step plans” piece of conventional wisdom, and it was fun to hear this new perspective that I’m going to have to wrestle with from now on.

IV.

Zero To One has one more section on secrets and planning, where it expands them society-wide into the ideas of definite optimism vs. indefinite optimism.

Definite optimism is Thiel’s take on the can-do philosophy of the 1950s. We can-do the Apollo Program, so let’s get to work. We can-do John Reber’s plan to dam the San Francisco Bay, so let’s start debating it. People believed anything was possible, so they made grand plans and carried them out. Progress would happen because people would have great ideas and force them into being.

The 2010s aren’t less optimistic, they’re just less definite. We still believe in the impersonal force of Progress, we just doubt any existing plan’s ability to serve as its avatar. Nobody will say “Let’s dam San Francisco Bay”; they’ll say “let’s promote innovation” or “let’s grow the economy”. It is assumed there are no secrets to discover or grand plans to implement, but that everything will get better anyway based on – I don’t know – some sort of principle that it should, plus millions of small actors doing little things below the threshold of notability in the right direction.

Again the weird modern belief in “the myth of the lone genius” (not belief in the lone genius, belief in the mythicism of it) comes into play. We are perhaps glad that there is convenient online retail, but this does not translate into appreciation of Jeff Bezos. Online retail came into being because it’s a part of Progress; Jeff Bezos is just some annoying guy who claimed credit and captured the profits.

(the obvious counterargument here seems to be that if Jeff Bezos didn’t do the admittedly hard work of creating an online retail giant, somebody else would have, perhaps a little later and a little worse; hundred billion dollar bills don’t lie on the sidewalk literally forever. I’m not sure what Thiel thinks of this; at the very least he might say our society fails to appreciate that some specific person does have to do the work for the work to happen.)

The flagship industry of the definite optimism of the 1950s was engineering. The flagship industry of the indefinite optimism of the 2010s is finance. Finance is about “making money when you have no idea how to create wealth”. While the engineers plan out specific dams and rockets and so on, the more abstract levels of finance invest in “the market”, a vague aggregate of all economic activity which is expected to go up because Progress. And so:

Think about what happens when successful entrepreneurs sell their company. What do they do with the money? In a financialized world, it unfolds like this:

– The founders don’t know what to do with it, so they give it to a large bank.
– The bankers don’t know what to do with it, so they diversify by spreading it across a portfolio of institutional investors.
– Institutional investors don’t know what to do with their managed capital, so they diversify by amassing a portfolio of stocks.
– Companies try to increase their share price by generating free cash flows. If they do, they issue dividends or buy back shares and the cycle repeats.

At no point does anyone in the chain know what to do with money in the real economy. But in an indefinite world, people actually prefer indefinite optionality; money is more valuable than anything you could possibly do with it. Only in a definite future is money a means to an end, not the end itself.

The flagship government of indefinite optimism is liberalism, here including both the standard Clinton-issue variety and libertarianism. Liberalism doesn’t take any specific position about what the good life is, or how to promote it – it is a neutral arbiter that enforces content-independent laws. It can ban or promote the construction of monuments, but it cannot and will not say “Ten Commandments monument good, Satanist monument bad” – it either accepts or rejects both. The culmination of this style of indefinite liberalism is Rawls’ veil of ignorance, where government only works insofar as it approximates what people would create if they knew nothing about their own opinions.

The flagship level of indefinite optimism is the meta-level. Come up with some principles that should work, like “capitalism” or “evolution”, then let them figure everything out.

Like the section on secrets and planning, this succeeds in being an interesting critique of something I had previously thought so obviously good that I had never bothered thinking of criticisms of it before. But its specifics are a bit weird – the Burkean/Chestertonian argument for conservatism goes that our current traditions are the outcome of exactly the same sort of incremental experimentation that indefinite optimists love, and that our own multi-step reasoning and planning telling us that X new law will improve things is too fallible to trust. So if our philosophy of government isn’t liberal, libertarian, or conservative, what is it? Thiel mentions two “definite optimistic” philosophers – Marx and Hegel – and neither is the sort to inspire too much confidence. Maybe we should be imagining Eisenhower-era America – liberal-ish, but still with grand visions? I don’t know enough about that era to know whether it really had a unified version of the good life, or what shifting more in that direction would entail.

Also, different philosophies work for different situations. The virtues of feudalism are more relevant to a sprawling medieval empire than to modern Denmark. Indefinite liberalism seems suited to a country where in fact nobody agrees on anything; one with deep religious and racial divisions, caught in the grip of a smoldering culture war. If nobody can agree on what the good is, then refereeing everybody as they pursue their own private versions of the good might be the best you can manage.

V.

There’s a lot more to this book, but it all seems to be pointing at the same central, hard-to-describe idea. Something like “All progress comes from violations of the efficient market hypothesis, so you had better believe these are possible, and you had better get good at finding them.”

The book begins and ends with a celebration of contrarianism. Contrarians are the only people who will ever be able to violate the EMH. Not every weird thing nobody else is doing will earn you a billion dollars, but every billion-dollar plan has to involve a weird thing nobody else is doing.

Unfortunately, “attempt to find violations of the EMH” is not a weird thing nobody else is doing. Half of Silicon Valley has read Zero To One by now. Weirdness is anti-inductive. If everyone else knows weirdness wins, good luck being weirder than everyone else.

Thiel describes how his venture capital firm would auto-reject anyone who came in wearing a suit. He explains this was a cultural indicator: MBAs wear suits, techies dress casually, and the best tech companies are built by techies coming out of tech culture. This all seems reasonable enough.

But I have heard other people take this strategy too far. They say suit-wearers are boring conformist people who think they have to look good; T-shirt-wearers are bold contrarians who expect to be judged by their ideas alone. Obviously this doesn’t work. Obviously as soon as this gets out – and it must have gotten out, I’ve never been within a mile of the tech industry and even I know it – every conformist putting image over substance starts wearing a t-shirt and jeans.

When everybody is already trying to be weird, who wins?

Part of the answer must be that being weird is a skill like any other skill. Or rather, it’s very easy to go to an interview with Peter Thiel wearing a clown suit, and it will certainly make you stand out. But will it be “contrarian”? Or will it just be random? Anyone can conceive of the idea of wearing a clown suit; it doesn’t demonstrate anything out of the ordinary except perhaps unusual courage. The real difficulty is to be interestingly contrarian and, if possible, correct.

(I wrote that paragraph, and then I remembered that I know one person high up in Peter Thiel’s organization, and he dresses like a pirate during random non-pirate-related social situations. I always assumed he didn’t do this in front of Peter Thiel, but I just realized I have no evidence for that. If this advice lands you a job at Thiel Capital, please remember me after you’ve made your first million.)

Another part of the answer must be that when everyone is competing on weirdness, the winners will be the people who are actually weird. The people who unavoidably do weird things because they are constitutionally weird people. There is a certain degree to which an ordinary person can relax constraints on their behavior and act and think in a weirder way than they ordinarily would. After that, you actually have to just be a strange kind of guy.

Of the six people who started PayPal, four had built bombs in high school. Five were just 23 years old—or younger. Four of us had been born outside the United States. Three had escaped here from communist countries: Yu Pan from China, Luke Nosek from Poland, and Max Levchin from Soviet Ukraine. Building bombs was not what kids normally did in those countries at that time.

The six of us could have been seen as eccentric. My first-ever conversation with Luke was about how he’d just signed up for cryonics, to be frozen upon death in hope of medical resurrection. Max claimed to be without a country and proud of it: his family was put into diplomatic limbo when the USSR collapsed while they were escaping to the U.S. Russ Simmons had escaped from a trailer park to the top math and science magnet school in Illinois. Only Ken Howery fit the stereotype of a privileged American childhood: he was PayPal’s sole Eagle Scout. But Kenny’s peers thought he was crazy to join the rest of us and make just one-third of the salary he had been offered by a big bank. So even he wasn’t entirely normal…

The lesson for business is that we need founders. If anything, we should be more tolerant of founders who seem strange or extreme; we need unusual individuals to lead companies beyond mere incrementalism.

Signing up for cryonics doesn’t give you a business advantage. But it indicates that you are probably good at thinking outside the box. People who learn that thinking outside the box is a useful skill and decide to try it with zero experience are always going to lose to people who have been doing it since they could speak at all.

Or as a wise man once said, “when the going gets weird, the weird turn pro”.

Saturday Morning Breakfast Cereal - Ducklings

1 Comment and 8 Shares



Hovertext: Mallards are a natural servant race.


1 public comment
jlvanderzwan
281 days ago
reply
No relation