The 3-ladder system of social class in the U.S.

Typical depictions of social class in the United States posit a linear, ordered hierarchy. I’ve actually come to the conclusion that there are three distinct ladders, with approximately four social classes on each. Additionally, there is an underclass of people not connected to any of the ladders, creating an unlucky 13th social class. I’ll attempt to explain how this three-ladder system works, what it means, and why it is a source of conflict. I’ll call the three ladders Labor, Gentry, and Elite. My percentage estimates for each category are rough, based only on what I’ve seen and on my limited understanding of the macroeconomics of income in the United States, so don’t take them for more than an approximation. I’ll assess the social role of each of these classes in order, from bottom to top.

This is, one should note, an exposition of social class rather than income. Therefore, in many cases, precise income criteria cannot be defined, because there’s so much more involved. Class is more sociological in nature than wealth or income, and much harder to change. People can improve their incomes dramatically, but it’s rare for a person to move more than one or two rungs in a lifetime. Social class determines how a person is perceived, what information that person has access to, and what opportunities will be available.

Underclass (10%). The Underclass are not just poor; there are poor people on the Labor ladder, and a few (usually transiently or voluntarily) on the Gentry ladder. In fact, most poor Americans are not members of the Underclass. People in the Underclass are generationally poor. Some have never held jobs. Some are even third-generation jobless. Each of these ladders (Labor, Gentry, Elite) can be seen as an infrastructure based, in part, on social connections. There are some people who are not connected to any of these infrastructures, and they are the Underclass.

The Labor Ladder (65%). This represents “blue-collar” work and is often associated with the “working class”, but some people in this category earn solidly “middle-class” incomes over $100,000 per year. What defines the Labor ladder is that the work is seen as a commodity, and that there’s rarely a focus on long-term career management. People are assessed based on how hard they work because, in this world, the way to become richer is to work more (not necessarily more efficiently or “smarter”). The Labor ladder is organized almost entirely by income: the more you make (age-adjusted), the higher your position and the more likely it is that your work is respected.

Secondary Labor (L4, 30%) is what we call the “working poor”. These are people who earn 1 to 3 times the minimum wage and often have no health benefits. Many work two “part-time” jobs at 35 hours per week (so their firms don’t have to provide benefits) with irregular hours. They have few skills and no leverage, so they tend to end up in the worst jobs, and those jobs enervate them so much that it becomes impossible for them to get the skills that would help them advance. This class is called Secondary because its members are trapped in the “secondary” labor market: jobs originally intended for teenagers and well-off retirees that were never meant to pay a living wage. Wages for this category are usually quoted hourly, at $5 to $15 per hour.

Primary Labor (L3, 20%) is what we tend to associate with “blue-collar” America. If by “average” we mean median, this is the average social class of Americans, although most people would call it working class, not middle. It usually means having enough money to afford an occasional vacation and a couple restaurant meals per month. People in the L3 class aren’t worried about having food to eat, but they aren’t very comfortable either, and an ill-timed layoff can be catastrophic. If the market for their skills collapses, they can end up falling down a rung into L4. When you’re in the Labor category, market forces can propel you up or down, and the market value of “commodity” labor has been going down for a long time. Typical L3 compensation is $20,000 to $60,000 per year.

In the supposed “golden age” of the United States (the 1950s), a lot of people were earning L2 compensation for L3 work. In a time when well-paid but monotonous labor was not considered such a bad thing (to people coming off the Great Depression and World War II, stable but boring jobs were a godsend), this was seen as desirable, but we can’t go back to that, and most people wouldn’t want to. Most Millennials would be bored shitless by the jobs of that era that our society occasionally mourns losing.

High-skill Labor (L2, 14%) entails having enough income and job security to be legitimately “middle class”. People in this range can attend college courses, travel internationally (but not very often), and send their children to good schools. Plumbers, airline pilots, and electricians are in this category, and some of these people make over $100,000 per year. For them, there must be some barrier to entry into their line of work, or some force (such as unionization) keeping pay high. Within the culture of the Labor ladder, these people are regarded highly.

Labor Leadership (L1, 1%) is the top of the Labor ladder, and it’s what blue-collar America tends to associate with success. (The reason they fail to hate “the 1%” is that they think of L1 small business owners, rather than blue-blooded parasites, as “rich people”.) These are people who, often through years of very hard work and by displaying leadership capability, have ascended to an upper-middle-class income. They aren’t usually “managers” (store managers are L2) but small business owners and landlords, and they’re often seen doing the grunt work of their businesses (such as running the register when all the cashiers call in sick). They can generate passive income from endeavors like restaurant franchises and reach a solidly upper-middle income, but culturally they are still part of Labor. This suits them well, because where they excel is at leading people who are in the Labor category.

The Gentry Ladder (23.5%). England had a landed gentry for a while. We have an educated one. Labor defines status based on the market value of one’s commodity work. The Gentry rebels against commoditization with a focus on qualities that might be, from an extensional perspective, irrelevant. They dislike conflict diamonds, like fair-trade coffee, and drive cultural trends. In the 1950s, they were all about suburbia. In 2012, they had the same enthusiasm for returning to the cities. They value themselves not based on their incomes but, much more so, on access to respected institutions: elite universities, leading technology companies, museums and artistic endeavors. Labor aspires to occupational success and organizational leadership, while the Gentry aspires to education and cultural leadership.

Before going further, it’s worth noting that the typical socioeconomic ordering would have each Gentry level two levels above the corresponding Labor level in social standing. Thus, G1 > G2 > (G3 ~= L1) > (G4 ~= L2) > L3 > L4.

Transitional Gentry (G4, 5%) is the lowest rung of the Gentry ladder. Typically, I think of community colleges when trying to explain G4. It’s the class of people who are coming into the Gentry, usually from L2, and most people in it are looking to attain G3 (and many do). Since the Gentry is defined by education, culture, and cultural influence, earning a four-year degree (which roughly 30 percent of American adults have) will usually put a person solidly into G3.

Mobility between G4 and L2 is common, and G4 is a “young people” class, because people who don’t keep abreast of politics, current events, and at least the “upper-middle-brow” culture of shows like Mad Men [0] tend to return to L2 (which is not an inferior class, but an approximately-equal one with different values). Those who keep up tend to progress to G3.

[0] A couple of people have emailed me to ask why I “knocked” Mad Men. That wasn’t my intention. It’s an excellent show. “Upper-middle-brow” is not a pan. I’m lower-middle-brow on a good day.

Primary Gentry (G3, 16%) is what Americans think of as the cultural “upper-middle class”. They have four-year college degrees and typically have professional jobs of middling autonomy and above-average income, but usually not leadership positions. Incomes in this class vary widely (in part because the Gentry is not defined by income) but generally fall between $30,000 and $200,000 per year. People in this class tend to be treated as taste-setters by Labor but viewed as gauche by the higher-ranking G1 and G2 classes.

High Gentry (G2, 2.45%) tend to come from elite colleges and traditionally gravitated toward “junior executive” roles in medium-sized companies, innovative startups, management consultancies, and possibly investment banking (which facilitates the G2-E4 transition). But G2’s wouldn’t be caught dead in jobs that seem perfectly fine to G3’s, which they view (often rightly) as dead ends. Having interesting, respected work is important to G2’s. To a G2, college professor, scientist, entrepreneur, and writer are desirable jobs. Creative control of work is important to G2’s, although not all are able to get it (because creative jobs are so rare). David Brooks’s Bobos in Paradise captured the G2 culture of its time well. Members of this social class aggressively manage their careers to get the most out of them (in terms of intellectual and financial reward), but what they really want is enough success and money to do what they really value, which is to influence culture.

G2 is my native social class, and probably that of most of my readers.

Cultural Influencers (G1, 0.05%) are the pinnacle of the Gentry. Jon Stewart is a classic example. He probably makes a “merely” upper-middle-class income working for the notoriously cheap Comedy Central, but he has the most well-regarded members of the intelligentsia on his show every night. For G1, I’m not talking about “celebrities”. Celebrities are a bizarre and tiny category that mixes all three ladders (I’d argue that they’re the upper tier of L1; most lack the power of Elites and the refinement of the Gentry). Rather, I’m talking about people who are widely recognized as smart, knowledgeable, creative, and above all, interesting. They also tend to have access to other interesting people. G1’s are not “famous” in the celebrity sense, and most of them aren’t that rich. I’d guess that their incomes vary mostly from $100,000 to $1 million per year, which is low for a social class that is so difficult to enter (much harder than E4, and possibly E3, to get into).

It’s quite likely that G1 is expanding, and it was probably much smaller in the past. The internet is allowing more people to become well-known and to have some degree of cultural influence. Many bloggers have entered G1 without relying on established institutions such as publishers or universities (which used to be the only way in). That said, G1 requires sustained attention; people having their 15 minutes of fame don’t count.

The Elite Ladder (1.5%). This is an infrastructure “at the top of society”, but many of the people it includes are, in many ways, nowhere near the top. People complain about “the 1 percent”, but the reality is that most of that top 1.0% hold nothing close to a controlling position within society.

Not all of the Elite are in the top 1% for income, but most will have the opportunity to be. The Elite includes everyone from billionaires to out-of-college investment banking analysts (who earn a middle-class income in one of the most expensive cities on the planet). What they have in common is that they are exerting themselves toward ownership. Labor provides the work and values effort and loyalty. The Gentry provides culture and values education and creativity. The Elite owns things and values control and establishment.

As with the Gentry and Labor, when comparing these ladders, one should consider an Elite rung to be two levels above the corresponding Gentry rung, so in terms of social standing, E1 > E2 > (E3 ~= G1) > (E4 ~= G2) > G3 > G4.

The Strivers (E4, 0.5%) are another transitional class that is generally for young people only. They aren’t actually Elite, but they might, if lucky, move into E3. Junior investment bankers, law firm associates, and young startup entrepreneurs are in this category. They’re trying to “break in” to something rich and successful. If they get in, they’ll become solid E3. If they fail to do so, they usually return to G2: upper-middle-class professionals not strongly bound to the Elite infrastructure. G2 is usually a happier place than E4, but E3’s and E4’s tend to deride this transition. In startups, a business move favoring this step (toward G1/G2 upper-middle-class stability) is derided as a “lifestyle business”.

Elite Servants (E3, 0.8%) are the law-firm partners, senior investment bankers, and corporate executives who might be called the “working rich”, and they comprise what was once called the “white-shoe” culture. They’re quite well-off, as far as servants go, often earning incomes from $200,000 to $5 million per year, but their social standing is conditional. They serve the rich, and the rich have to keep finding them useful if they are to maintain their place. It’s not an enviable place to be, because the social expectations associated with maintaining E3 status require high spending, and even extremely well-compensated ($1 million per year and up) E3’s rarely have the savings to survive more than a year or two without a job, because of the need to maintain connections. E3’s tend to have as many money problems as people in the lower social classes. E3’s also suffer because they live in a “small world” society driven by reputation, long-standing grudges, and often petty contempt. E3’s still get fired, and often, because the pretense that justifies E3-level status (that of a large-company “executive”) requires leadership ability that many don’t have; when it happens, they can face years during which they can’t find appropriate employment.

People tend to think of face leaders (politicians and CEOs) as belonging to a higher social class, but most are E3. If they were higher, they wouldn’t have to work so hard to be rich. To examine our most recent presidents: Barack Obama is G1, the George Bushes were E2, Bill Clinton was E3, and Reagan was in the celebrity category that is a hybrid of E3 and L1. John Kennedy was E2, while Lyndon Johnson was L1. Most CEOs, however, are strictly E3, because CEOs are “rubber gloves” that are used for dirty work and thrown aside if they get too filthy. There’s too much reputation risk involved in being a corporate CEO for an E2 to want the job under most circumstances.

National Elite (E2, 0.19%) are what most Americans think of as “upper class” or “old money”. They have Roman numerals in their names, live in the Hamptons (although they’ve probably stopped using “summer” as a verb now that “the poors” know about it), and their families have attended Ivy League colleges for generations. They’re socially very well connected and have the choice not to work, or to work in a wide array of well-compensated and highly regarded jobs. Rarely do they work full time under traditional employment terms: never as subordinates, sometimes as executives in an apprentice role, often in board positions or “advisory” roles. It’s uncommon for an E2 to put a full-time effort into anything, because their objective with work is to associate their names with successful institutions, not to get too involved.

Maintaining E2 status requires wealth: at minimum, about $500,000 per year in after-tax income. However, it’s not hard for a person with E2 status and connections to acquire this, even if the family money is lost. The jobs that E3’s regard as the pinnacle of professional achievement (to an E2, the very notion of “professional achievement” is laughable; paid full-time work is dishonorable) are their safety careers.

Global Elite (E1, ~60,000 people worldwide, about 30% of those in the U.S.) are a global social class, and extremely powerful in a trans-national way. These are the very rich, powerful, and deeply uncultured barbarians from all over the world who start wars in the Middle East for sport, make asses of themselves in American casinos, rape ski bunnies at Davos, and run the world. Like the Persian army in 300, they come from all over the place; they’re the ugliest and most broken of each nation. They’re the corporate billionaires and drug kingpins and third-world despots and real estate magnates. They’re not into the genteel, reserved “WASP culture” of E2’s, the corporate earnestness and “white shoe” professionalism of E3’s, or the hypertrophic intellectualism and creativity of G1’s and G2’s. They are all about control, and on a global scale. To channel Heisenberg, they’re in the empire business. They aren’t mere management or even “executives”. They’re owners. They don’t care what they own, or what direction the world takes, as long as they’re on top. They almost never take official executive positions within large companies, but they make a lot of the decisions behind the scenes.

Unlike the National Elite, who tend toward a cultural conservatism and a desire to preserve certain traits that they consider necessary to national integrity, the Global Elite doesn’t give a shit about any particular country. They’re fully multinational and view all the world’s political nations as entities to be exploited (like everything else). They foster corruption and crime if it serves their interests, and those interests are often ugly. Like Kefka from Final Fantasy VI, their reason for living is to create monuments to nonexistence.

For the other social classes, there’s no uniform moral assumption that can apply. G1’s are likable and often deserving cultural leaders, but sometimes foolish, overrated, incompetent, infuriatingly petty, and too prone to groupthink to deserve their disproportionate clout. G2’s tend to have the best (or at least most robust) taste, because they don’t fall into G1 self-referentiality, but they can be just as snooty and cliquish. As “pro-Gentry” as I may seem, it’s a massive simplification to treat that set as entirely virtuous. Likewise, the lower Elite ranks (E2, E3, E4) have their mix of good and bad people. There are E2’s who want to live well and decently, E3’s trying to provide for their families, and E4’s trying to get in because they were brought up to climb the ladder. On the other hand, E1 is pretty much objectively evil, without exceptions. There are decent people who are billionaires, so there’s no income or wealth level at which 100% objective evil becomes the norm. But if you climb the social ladder, you get to a level at which it’s all cancer, all the way up. That’s E1. Why is it this way? Because the top end of the world’s elite is a social elite, not an economic one, and you don’t get deep into an elevated social elite unless you are very similar to the center of that cluster, and for the past 10,000 years the center of humanity’s top-of-the-top cluster has always been deep, featureless evil: people who burn peasants’ faces off because it amuses them. Whether you’re talking about a real person (Hitler, Stalin, Erik Prince, Osama bin Laden, Kissinger) or a fictional example (The Joker, Kefka, Walter White, Randall Flagg), when you get to the top of society, it’s always the same guy. Call it The Devil, but what’s scary is that it needs (and has) no supernatural powers; it’s human, and while one of its representatives might get knocked off, another will step up.

Ladder conflict. What does all this mean? How do these ladders interrelate? Do these three separate social class structures often find themselves at odds and fight? Can people be part of more than one?

What I’ve called the Labor, Gentry, and Elite “ladders” can more easily be described as “infrastructures”. For Labor, this infrastructure is largely physical: the relevant connections are knowing how to use particular machines and spaces, and getting people to trust you to use those resources competently (without owning them, because ownership is out of the question for most). For the Gentry, it’s an “invisible graph” of knowledge, education, and “interestingness”, composed largely of ideas. For the Elite, it’s a tight, exclusive network centered on social connections, power, and dominance. People can be connected to more than one of these infrastructures, but they usually bind most tightly to the one of higher status, except at the transitional ranks (G4 and E4), which tend to punt people who don’t ascend after some time. The overwhelming likelihood is that a person is aligned most strongly to one and only one of these structures. The values are too conflicting for a person not to pick one horse or the other.

I’ve argued that the ladders connect at a two-rung difference, with L2 ~ G4, L1 ~ G3, G2 ~ E4, and G1 ~ E3. These are “social equivalencies” that don’t involve a change in social status, so they’re the easiest transitions to make (in both directions). They represent a transfer from one form of capital to another. A skilled laborer (L2) who begins taking night courses (G4) is using time to get an education rather than more money. Likewise, one who moves from the high Gentry (G2) to a 90-hour-per-week job in private wealth management (E4) is applying her refined intellectual skills and knowledge to serving the rich, in the hope of making the connections to become one of them.

That said, these ladders often come into conflict. The most relevant conflict for most of my readers will be the one between the Gentry and the Elite. The Gentry tends to be left-libertarian and values creativity, individual autonomy, and free expression. The Elite tends toward center-right authoritarianism and corporate conformity, and it views creativity as dangerous (except when applied to hiding financial risks or justifying illegal wars). The Gentry believes that it is the deserving elite and the face of the future, and that it can use culture to engineer a future in which its values are dominant, while the upper tier of the Elite finds the Gentry pretentious, repugnant, self-indulgent, and subversive. The relationship between the Gentry and the Elite is incredibly contentious. It’s a cosmic, ubiquitous war between the past and the future.

Between the Gentry and Labor, there is an attitude of distrust. The Elite has been running a divide-and-conquer strategy between these two categories for decades. This works because the Elite understands (and can ape) the culture of the Gentry, but has something in common with Labor that sets both categories apart from the Gentry: a conception of work as a theater for masculine dominance. This is something that the Elite and Labor both believe in (the visceral strength and importance of the alpha male in high-stakes gambling settings such as most modern work) but that the Gentry would rather deny. Gender is a major part of the Elite’s strategy in turning Labor against the Gentry: make the Gentry look effeminate. That’s why “feminist” is practically a slur, despite the world desperately needing attention to women’s political equality, health, and well-being (that is, feminism).

The Elite also uses the Underclass, in a different process: the Elite wants Labor to think that the Gentry intends to conspire with the Underclass to dismantle Labor values and elevate these “obviously undeserving” people to at least the status of Labor, if not above it. They exploit Labor’s fear. One might invoke racism and the “Southern strategy” in politics as an example of this, but the racial part is incidental. The Elite don’t care whether it’s blacks or Latinos or “illegals” or red-haired people or homosexuals (most of whom are not part of the Underclass) being used to frighten Labor into opposing and disliking the Gentry; they just know that the device works and that it has pretty much always worked.

The relationship between the Gentry and the Elite is one of open rivalry, and that between the Gentry and Labor is one of distrust. What about Labor and the Elite? That one is not symmetric. The Elite exploit and despise Labor as a class composed mostly of “useful idiots”. How does Labor see the Elite? They don’t. The Elite has managed to convince Labor that the Gentry (who are open about their cultural elitism, while the Elite hides its social and economic elitism) is the actual “liberal elite” responsible for Labor’s misery over the past 30 years. In effect, the Elite has constructed an “infinity pool” in which it appears to be a hyper-successful extension of Labor, lumping these two disparate ladders into an “us” and placing the Gentry and Underclass into “them”.

Analysis of current conflict.

Despite its upper ranks being filled by people who are effectively thugs, the Elite isn’t entirely evil. By population, most of them are merely E3 and E4 stewards with minimal decision-making power, and a lot of those come from (and return to) the Gentry and maintain those values. On the other hand, Elite values tend to be undesirable, because at that structure’s pinnacle are the E1 crime bosses. There are good people within the Elite, even though the Elite itself is not good.

For virtue, the Gentry does better. I don’t want to fall into the American fallacy of conflating “middle class” with virtue, and there are awful people and good people in all social classes, but I think that the Gentry is a more inclusive and reflective elite: one of ideas and values, not based on exclusivity.

One Gentry stronghold for a long time has been high technology, a meritocracy where skill, know-how, and drive enabled a person to rise to technical leadership of increasing scope, and eventually to business leadership and entrepreneurship. This created the engineering culture of Hewlett-Packard (before Fiorina) and the “Don’t Be Evil” mantra of Google. This is Gentry culture asserting itself. Be ethical, seek positive-sum outcomes, and win by being great rather than by harming, robbing, or intimidating others. It’s not how business is practiced in most of the world (zero-sum thuggery is a lot more common) but it’s how great businesses are made. This weird world in which self-made success was regarded more highly than entrenchment, symbolized by Silicon Valley, enabled people from the Gentry to become very rich and Gentry ideas to establish lasting success in business.

What has made America great, especially from 1933 until now, has been the self-assertion of the Gentry following the defeat of the Elite. The first half of the American Era (1933 to 1973) utterly emasculated the Elite. Their rapacious greed and world-fucking parasitism was repaid with 90-percent tax rates, and they were told to consider themselves lucky that it wasn’t full-on socialism (or a violent revolution in which they all died, Paris-1793-style). The so-called “WASP culture” of the E2 class derives many of its norms from the paranoia of that period (when the global elite was very small, and they were the “robber baron” elite). For example, the demand that a house not be visible from the road comes from a time in which such visibility was physically dangerous. This four-decade curtailment of the American Elite, and the more resounding destruction of the European ones, was one of the best things that ever happened to the world. It made the golden age of Silicon Valley possible.

There are a lot of reasons why this “golden age” of a disempowered Elite was able to occur, but World War II was the biggest of them all. Future historians will probably regard the two World Wars as one monstrous conflict, with a period of crippling, worldwide economic depression between them. Few disagree with the claim, for example, that the resolution of the First World War led inexorably to the evils of totalitarianism and to the Second of these wars. This giant and largely senseless conflict’s causes seem complex (historians are still debating World War I’s inception) but the short version is that the world’s Elites did that. There was a 30-year period of war, famine, poverty, racial pogroms, and misery that existed largely because a network of high-level obligations and horrendous ideas (especially the racism used to justify colonialism, which benefitted the rich of these societies enormously, but sent the poor to die in unjust wars, contract awful diseases for which they had no immunity, and commit atrocities) set up the conditions. After about a hundred million deaths and thirty years of war, societies finally decided, “No More”. They dismantled their Elites vigorously, North American and European nations included. This became the “golden age” of the educated Gentry. In the U.S. (for which the 1950s were a decade of prosperity; in Europe, it was a period of rebuilding and not very prosperous), it was also the “golden age of the middle class”.

However, the Elite has brought itself back to life. This Gilded Age isn’t as bad as the last one, but it’s heading that way. It started in the late 1970s, when the U.S. fell in love again with elitism: Studio 54, cocaine (a drug that captures the personality of that cultural change well, because its effect is to flood the brain with dopamine, causing extreme arrogance), and “trickle-down economics”.

Assessing the present state of conflict requires attention to what each party wants. What does the Gentry want? The Gentry has a strange, love-hate relationship with capitalism. Corporations are detested (even more than they deserve) by this class and most people in the Gentry want the U.S. to look more like Europe: universal healthcare, decent vacation allotments, and cheap, ecologically sound high-speed trains. This might give the impression of a socialist bent, and that impression’s not wrong. Yet their favorite places are New York (the center of capitalism) and Silicon Valley (also fiercely capitalistic). Although left-leaning, the Gentry are strong champions for non-corporate capitalism. There is no contradiction here. European social democracies have also managed to create hybrid systems that combine the safety and infrastructure of socialism with the innovation and individual liberty of capitalism: the best of both worlds.

By contrast, what the Elite has been pushing for is the worst of both worlds, at least for average people. The truth of corporate “capitalism” is that it provides the best of both systems (socialism and capitalism) for the Elite and the worst of both for everyone else. It’s a welfare state in which only very well-connected people are citizens, it favors command economies (which are what most corporations are, internally), and it stifles the positive-sum innovation that is capitalism’s saving grace. The upper tier of society wants social stability for themselves (to stay in and keep others out) but favors extreme economic variability (also known as “inequality”) because it gives them more opportunities to exploit their social status for economic gain (read: private-sector corruption).

Air travel in the contemporary U.S. is an illustrative example of this “worst of both worlds” scenario: the pricing is erratic, unreasonable, and even a bit mean-spirited, which shows the volatility of capitalism, while the low quality of service and the abysmal morale of the industry feel like direct transplants from the Soviet Union.

The future.

A major battle is coming, with all three of these categories (Labor, Gentry, and Elite) involved. The Gentry and the Elite are fundamentally opposed on the type of society they want to see and, for decades, the Elite has been winning, but its victories are becoming harder to win as technology opens up the world. Labor might seem like a nonparticipant in the ideological battles, but they comprise most of the casualties, and they’ve seen shells land in their backyard (especially if they live in Detroit). Not only are they losing their jobs and social status, but their communities have been demolished.

Something else is happening, which is relevant both in a macrohistorical sense and to the U.S. in 2012. One way to divide human history is into three eras: pre-Malthusian, trans-Malthusian, and post-Malthusian. I refer, of course, to the prediction of Thomas Malthus, early in the Industrial Revolution, that population growth in contemporary societies would lead to a catastrophe because population grew exponentially, while economic growth was linear. He was wrong. Economic growth has always been exponential, but for most of human history it has had a very slow (under 1% per year) exponential curve: slower than population growth, and slow enough to look linear. His mathematical model was wrong, but his conclusion (that population grows until it is checked, i.e. people die, by disease, famine, and war) was true in nature and of almost every human society from the dawn of time to about 1800. He was wrong that it would afflict England and the richer European countries in the mid-19th century, because the Industrial Revolution accelerated economic growth enough to prevent a global Malthusian crunch. On the other hand, there were local Malthusian catastrophes: Ireland endured severe poverty and oppression, and colonialism was deeply horrible and did, in fact, exhibit many of the evils Malthus warned about.
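To see why a sub-1% exponential loses to faster population growth (and, over a human lifetime, is hard to tell apart from a straight line), here is a minimal back-of-the-envelope sketch in Python. The growth rates are invented purely for illustration, not historical estimates.

```python
# Illustrative only: the rates below are assumptions chosen to show the shape
# of the argument, not historical data.

def compound(initial: float, annual_rate: float, years: int) -> float:
    """Total after compounding annual_rate growth for the given number of years."""
    return initial * (1 + annual_rate) ** years

YEARS = 200
ECONOMY_RATE = 0.005      # 0.5% per year: a "slow" exponential that looks almost linear
POPULATION_RATE = 0.015   # 1.5% per year: a faster exponential

economy = compound(1.0, ECONOMY_RATE, YEARS)        # roughly 2.7x after 200 years
population = compound(1.0, POPULATION_RATE, YEARS)  # roughly 19.6x after 200 years

print(f"Economy:           {economy:.1f}x")
print(f"Population:        {population:.1f}x")
print(f"Output per person: {economy / population:.2f}x")  # well below 1.0: the Malthusian squeeze
```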

The world was pre-Malthusian when societies were doomed to grow faster in population than in their ability to support that population. This led, over the millennia, to certain assumptions about society that can be categorized as “zero-sum”. For one tribe to take care of its young, another tribe must lose wealth or be destroyed. For the English to be rich, the Irish must starve. For Southern whites to live well, blacks must be slaves. For capital to be profitable, labor must be exploited. If Catholic Spain has one colony, Protestant England must have more. For the German people to have “Lebensraum”, Central European countries must be invaded and their inhabitants killed. “Medieval” horrors were an artifact of the Malthusian reality of that time, but such atrocities continued even as the long-standing Malthusian inequality (population growth being greater than economic growth) reversed itself.

We are now in a trans-Malthusian state, and have been for about two hundred years. Global economic growth is now over 4% per year, which is the fastest it has ever been, and there’s no sign of it slowing down. The world has a lot of problems, and there are pockets of severe decay, corruption, and poverty; but on the whole, it’s becoming a better place, and at an accelerating (hyper-exponential) rate. The world is no longer intrinsically Malthusian, but pre-Malthusian attitudes still dominate, especially at the pinnacles of our most successful societies. This shouldn’t be shocking, because the very traits (especially low empathy and greed) that were required to succeed in a zero-sum world are still strong in our upper classes. This legacy won’t go away overnight. The people haven’t changed very much. Pre-Malthusian fearmongering is also very effective on less intelligent people, who haven’t figured out that the world has changed in the past two hundred years. They still believe in the zero-sum world wherein, if “illegal” immigrants “take all the jobs”, middle-class white people will starve.

The trans-Malthusian state is, I believe, intrinsically more volatile than a pre-Malthusian one. Technology is causing the job market to change faster, but this paradoxically makes individual spells of unemployment longer. We’re also seeing something that pre-Malthusian economies didn’t have to worry about: economic depressions. This is not to romanticize pre-Malthusian life or societies. They experienced famines, wars, and disease epidemics that killed far more people than any economic depression, but those had natural or historical causes, not causes intrinsic to economic progress. We’ve been able to eliminate most of those evils from life without losing anything in the process. Depressions, in my view, come from economic progress itself, and more precisely from our inability to manage growth in a way that distributes prosperity rather than displacing people. The first quarter of the 20th century saw unprecedented advancement in food production (a good thing, undeniably) which caused agricultural commodities to drop in price. This caused small farmers (who could not partake in these advances to the same extent) to fall into poverty. Without the small farmers, the towns they supported weren’t doing well either. Poverty isn’t a “moral medicine” that clears out the bad in society. It doesn’t make people better or harder working. It ruins people. It’s a cancer. It spreads. And it did. Rural poverty was severe in the United States by 1925, before the Depression officially began. Urban sophisticates and elites were OK in 1925, hence that era is remembered as prosperous. In 1933? Not so much. The cancer had grown. Throughout the 1930s, the rich were terrified of an American communist revolution.

We don’t want another Great Depression, and what’s scary in 2012 is that it seems like what happened to agricultural products in the 1920s is now happening to almost all human labor. We’re outsourcing, automating, and “streamlining”, and all of these changes are fundamentally good, but if we don’t take steps to prevent the collapse of the middle class, we could lose our country. This will almost certainly require innovations that the right wing will decry as “socialism”, but it will also involve techniques (such as crowd-funding and microloans for small businesses) that are far more capitalistic than anything the corporates have come up with.

We are trans- (not post-) Malthusian because we live in a world where scarcity is still in force (although often artificial) and zero-sum mentalities dominate (even though they’re inappropriate to a technological world). If Mexican immigrants “take the jobs”, formerly middle-class white people will be without healthcare. What’s required is to step away from the zero-sum attitude (expressed often in racism) and recognize that no one of any ethnicity, jobless or employed, should be without healthcare. Ever. Technology is great at helping us generate more resources and make more with what we have, and we have to accept that it will “unemploy” people on a regular basis, but the bounty should be distributed fairly, and not hogged by the fortunate while those it renders transiently jobless are allowed to fall into poverty. “Collateral damage” is not acceptable and, if the 1920s and ’30s are illustrative, it can’t be contained. The damage will spread.

What does this have to do with the ladders and their conflict? Labor is a trans-Malthusian social category because it lives in a world that values fair play (a positive-sum, post-Malthusian value) but that is constrained by artificial scarcity. The Elite is pre-Malthusian; they are obsessed with the zero-sum game of social status and the need to keep themselves elevated and others out. The Gentry, although not without its faults, is properly post-Malthusian. Its values (political liberalism, individual freedom, enough socialism to ensure a just society, a positive-sum outlook, and a positive view of technology) represent what it will take to evolve toward a post-Malthusian state.

Tech companies: open allocation is your only real option.

I wrote, about a month ago, about Valve’s policy of allowing employees to transfer freely within the company, symbolized by placing wheels under the desks (thereby creating a physical marker of their superior corporate culture that makes traditional tech perks look like toys) and expecting employees to self-organize. I’ve taken to calling this seemingly radical notion open allocation: employees have free rein to work on projects as they choose, without asking for permission or formal allocation. I’m convinced that, despite seeming radical, open allocation is the only thing that actually works in software. There’s one exception. Some degree of closed allocation is probably necessary in the financial industry because of information barriers (mandated by regulators), and this might be why getting the best people to stay in finance is so expensive. It costs that much to keep good people in a company where open allocation isn’t the norm, and where the workflow is so explicitly directed and constrained by the “P&L” and by justifiable risk aversion. If you can afford to give engineers 20 to 40 percent raises every year and thereby compete with high-frequency-trading (HFT) hedge funds, you might be able to retain talent under closed allocation. If not, read on.

Closed allocation doesn’t work. What do I mean by “doesn’t work”? I mean that, as things currently go in the software industry, most projects fail. Either they don’t deliver any business value, or they deliver too little, or they deliver some value but exert long-term costs as legacy vampires. Most people also dislike their assigned projects and put minimal or even negative productivity into them. Good software is exceedingly rare, and not because software engineers are incompetent, but because when they’re micromanaged, they stop caring. Closed allocation and micromanagement provide an excuse for failure: I was on a shitty project with no upside. I was set up to fail. Open allocation blows that away: a person who has a low impact because he works on bad projects is making bad choices and has only himself to blame.

Closed allocation is the norm in software. It doesn’t necessarily entail micromanagement, but it creates the possibility for it, because of the extreme advantage it gives managers over engineers. An engineer’s power under closed allocation is minimal: his one bit of leverage is to change jobs, and that almost always entails changing companies. In a closed-allocation shop, project importance is determined prima facie by executives long before the first line of code is written, and formalized in magic numbers called “headcount” (even the word is medieval, so I wonder if people piss at the table, at these meetings, in order to show rank) that represent the hiring authority (read: political strength) of various internal factions. Headcount numbers are supposed to prevent reckless hiring by the company as a whole, and that’s an important purpose, but their actual effect is to make internal mobility difficult, because most teams would rather save their headcount for possible “dream hires” who might apply from outside in the future than risk a spot on an engineer with an average performance-review history (which is what most engineers will have). Headcount bullshit makes it nearly impossible to transfer unless (a) someone likes you on a personal basis, or (b) you have a 90th-percentile performance-review history (in which case you don’t need a transfer). Macroscopic hiring policies (limits, and sometimes freezes) are necessary to prevent the company from over-hiring, but internal headcount limits are one of the worst ideas ever. If people want to move, and the leads of those projects deem them qualified, there’s no reason not to allow it. It’s good for the engineers and for the projects, which get more motivated people working on them.

When open allocation is in play, projects compete for engineers, and the result is better projects. When closed allocation is in force, engineers compete for projects, and the result is worse engineers. 

When you manage people like children, that’s what they become. Traditional, 20th-century management (so-called “Theory X”) is based on the principle that people are lazy and need to be intimidated into working hard, and that they’re unethical and need to be terrified of the consequences of stealing from the company, with a definition of “stealing” that includes “poaching” clients and talent, education on company time, and putting their career goals over the company’s objectives. In this mentality, the only way to get something decent out of a worker is to scare him by threatening to turn off his income– suddenly and without appeal. Micromanagement and Theory X are what I call the Aztec Syndrome: the belief in many companies that if there isn’t a continual indulgence in sacrifice and suffering, the sun will stop rising.

Psychologists have spent decades trying to answer the question, “Why does work suck?” The answer might be surprising. People aren’t lazy, and they like to work. Most people do not dislike the activity of working, but dislike the subordinate context (and closed allocation is all about subordination). For example, people’s minute-by-minute self-reported happiness tends to drop precipitously when they arrive at the office and rise when they leave it, but it improves once they start actually working. They’re happier not to be at an office, but if they’re in an office, they’re much happier when working than when idle. (That’s why workplace “goofing off” is such a terrible idea; it does nothing for office stress and it lengthens the day.) People like work. It’s part of who we are. What they don’t like, and what enervates them, is the subordinate context and the culturally ingrained intimidation. This suggests the so-called “Theory Y” school of management, which holds that people are intrinsically motivated to work hard and do good things, and that management’s role is to remove obstacles.

Closed allocation is all about intimidation: if you don’t have this project, you don’t have a job. Tight headcount policies and lockout periods make internal mobility extraordinarily difficult: much harder than getting hired at another company. The problem is that intimidation doesn’t produce creativity, and it erodes people’s sense of ethics (when people are under duress, they feel less responsible for what they are doing). It also provides the wrong motivation: the goal becomes to avoid getting fired, rather than to produce excellent work.

Also, if the only way a company can motivate people to do a project is to threaten to turn off a person’s income, that company should really question whether that project’s worth doing at all.

Open allocation is not the same thing as “20% time”, and it isn’t a free-for-all. Open allocation does not mean “everyone gets to do what they want”. A better way to represent it is: “Lead, follow, or get out of the way” (and “get out of the way” means “leave the company”). To lead, you have to demonstrate that your product is of value to the business, and convince enough of your colleagues to join your project that it has enough effort behind it to succeed. If your project isn’t interesting and doesn’t have business value, you won’t be able to convince colleagues to bet their careers on it, and the project won’t happen. This requires strong interpersonal skills and creativity. Your colleagues, voting with their feet, decide whether you’re a leader; “management” doesn’t. If you aren’t able to lead, then you follow, until you have the skill and credibility to lead your own project. There should be no shame in following; that’s what most people will have to do, especially when starting out.

“20% time” (or hack days) should exist as well, but that’s not what I’m talking about. Under open allocation, people are still expected to show that they’ve served the needs of the business during their “80% time”. Productivity standards are still set by the projects, but employees choose which projects (and sets of standards) they want to pursue. Employees unable to meet the standards of one project must find another one. 20% time is more open, because it entails permission to fail. If you want to do a small project with potentially high impact, or to prove that you have the ability to lead by starting a skunk-works project, or to volunteer, take courses, or attend conferences on company time, that’s what it’s for. During their “80% time”, people are still expected to lead or follow on a project with some degree of sanction. They can’t just “do whatever they want”.

Four types of projects. The obvious question that open allocation raises is, “Who does the scut work?” The answer is simple: people do it if they will get promoted, formally or informally, for doing it, or if their project directly relies on it. In other words, the important but unpleasant work gets done, by people who volunteer to do it. I want to emphasize “gets done”. Under closed allocation, a lot of the unpleasant stuff never really gets done well, especially if unsexy projects don’t lead to promotions, because people are investing most of their energy into figuring out how to get to better projects. The roaches are swept under the carpet, and people plan their blame strategies months in advance.

If we classify projects into four categories, by important vs. unimportant and interesting vs. unpleasant, we can assess what happens under open allocation. Important and interesting projects are never hard to staff. Unimportant but interesting projects are for 20% time; they might succeed and become important later, but they aren’t seen as critical until they’re proven to have real business value, so people are allowed to work on them but are strongly encouraged to also find and concentrate on work that’s important to the business. Important but unpleasant projects are rewarded with bonuses, promotions, and the increased credibility accorded to those who do undesirable but critical work. These bonuses should be substantial (six and occasionally even seven figures for critical legacy rescues); if the project is actually important, it’s worth it to actually pay. If it’s not, then don’t spend the money. Unimportant and unpleasant projects, under open allocation, don’t get done. That’s how it should be. This is the class of undesirable, “death march” projects that closed allocation nurtures (they never go away, because to suggest they aren’t worth doing is an affront to the manager who sponsors them, and therefore a career-ending move) but that open allocation eliminates. Under open allocation, people who transfer away from these death marches aren’t “deserters”. It’s management’s fault if, out of a whole company, no one wants to work on the project. Either the project’s not important, or they didn’t provide enough enticement.
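To make the four quadrants concrete, here’s a minimal sketch in Python of the classification above. The names and outcomes are just my paraphrase of the preceding paragraph, not any real system or API.

```python
# A toy encoding of the 2x2 project classification above, as it plays out under
# open allocation. The category names and outcomes paraphrase the essay.

from enum import Enum

class Outcome(Enum):
    STAFFS_ITSELF = "never hard to staff"
    TWENTY_PERCENT_TIME = "done in 20% time until proven important"
    NEEDS_A_BOUNTY = "rewarded with bonuses, promotions, and credibility"
    NOT_DONE = "doesn't get done, and that's how it should be"

def open_allocation_outcome(important: bool, interesting: bool) -> Outcome:
    """What happens to a project under open allocation, per the four categories."""
    if important and interesting:
        return Outcome.STAFFS_ITSELF
    if interesting:                  # interesting but unimportant
        return Outcome.TWENTY_PERCENT_TIME
    if important:                    # important but unpleasant
        return Outcome.NEEDS_A_BOUNTY
    return Outcome.NOT_DONE          # unimportant and unpleasant

if __name__ == "__main__":
    for important in (True, False):
        for interesting in (True, False):
            label = ("important" if important else "unimportant") + "/" + \
                    ("interesting" if interesting else "unpleasant")
            print(f"{label:26s} -> {open_allocation_outcome(important, interesting).value}")
```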

Closed allocation is irreducibly political. Compare two meanings of the three-word phrase, “I’m on it”. In an open-allocation shop, “I’m on it” is a promise to complete a task, or at least to try to do it. It means, “I’ve got this.” In a closed-allocation shop, “I’m on it” means “political forces outside of my control require me to work only on this project.”

People complain about the politics at their closed-allocation jobs, but they shouldn’t be surprised by it, because it’s inevitable that politics will eclipse the matter of actually getting work done. It happens every time, like clockwork. The metagame becomes a million times more important than actually sharpening pencils or writing code. If you have closed allocation, you’ll have a political rat’s nest. There’s no way to avoid it. In closed allocation, the stakes of project allocation are so high that people are going to calculate every move based on future mobility. Hence, politics. What tends to happen is that a four-class system emerges, mirroring the four categories of work described above. The most established engineers, who have the autonomy and leverage to demand the best projects, end up in the “interesting and important” category. They get good projects the old-fashioned way: proving that they’re valuable to the company, then threatening to leave if they aren’t reassigned. Engineers who are looking for promotions into managerial roles tend to take on the unpleasant but important work, and attempt to coerce new and captive employees into doing the legwork. The upper-middle class of engineers can take the interesting but unimportant work, but it tends to slow their careers if they intend to stay at the same company (they learn a lot, but they don’t build internal credibility). The rest, the majority, who have no significant authority over what they work on, get a mix, but a lot of them get stuck with the uninteresting, unimportant work (and closed-allocation shops generate tons of that stuff) that exists for reasons rooted in managerial politics.

What are the problems with open allocation? The main issue is that it seems harder to manage, because it requires managers to actively motivate people to do the important but unpleasant work. In closed allocation, people are told to do work “because I said so”. Either they do it, or they quit, or they get fired. It’s binary, which seems simple. There’s no appeal process when people fail projects or projects fail people (and no one ever knows which happened), extra-hierarchical collaboration is “trimmed”, and efforts can be tracked by people who think a single spreadsheet can capture everything important about what is happening in the company. Closed-allocation shops have hierarchy, clear chains of command, and single points of failure (because a person can be fired from a whole company for disagreeing with one manager) out the proverbial wazoo. They’re Soviet-style command economies that somehow ended up being implemented within supposedly “capitalist” companies, but they “appear” simple to manage, and that’s why they’re popular. The problem with closed-allocation policies is that they lead to enormous project failure rates, inefficient allocation of time, talent bleeds, and unnecessary terminations. In the long term, all of this unplanned and surprising garbage makes the manager’s job harder, more complex, and worse. When assessing the problems associated with open allocation (such as increased managerial complexity), it’s important to consider that the alternative is much worse.

How do you do it? The challenging part of open allocation is enticing people to do unpleasant projects. There needs to be a reward. Make the bounty too high, and people come in with the wrong motivations (capturing the outsized reward, rather than getting a fair reward while helping the company), and the perverse incentives can even lead to “rat farming” (creating messes in the hope of being asked to repair them at a premium). Make it too low, and no one will do it, because no wise person likes a company well enough to risk her own career on a loser project (and part of what makes a bad project bad is that, absent recognition, it’s career-negative to do undesirable work). Make the reward too monetary and it looks bad on the balance sheet, and gossip is a risk: people will talk if they find out a 27-year-old was paid $800,000 in stock options (note: there had better be vesting applied), even if it’s justified in light of the legacy dragon being slain. Make it too career-focused and you have people getting promotions they might not deserve, because doing unpleasant work doesn’t necessarily give a person technical authority in all areas. It’s hard to get the carrot right. The appeal of closed allocation is that the stick is a much simpler tool: do this shit or I’ll fire you.

The project has to be “packaged”. It can’t be all unpleasant and menial work, and it needs to be structured to involve some of the leadership and architectural tasks necessary for the person completing it to actually deserve the promised promotion. It’s not “we’ll promote you because you did something grungy” but “if you can get together a team to do this, you’ll all get big bonuses, and you’ll get a promotion for leading it.” Management also needs to have technical insight on hand in order to do this: rather than doing grunt work as a recurring cost, kill it forever with automation.

An important notion in all this is that of a committed project. Effectively, this is what the executives should create if they spot a quantum of work that the business needs but that is difficult and, in the engineers’ estimation, not enjoyable. These shouldn’t be created lightly. Substantial cash and stock bonuses (vested over the expected duration of the project) and promotions are associated with completing these projects, and if more than 25% of the workload is committed projects, something’s being done wrong. A committed project offers high visibility (it’s damn important; we need this thing) and graduation into a leadership role. No one is “assigned” to a committed project. People “step up” and work on them because of the rewards. If you agree to work on a committed project, you’re expected to make a good-faith effort to see it through for an agreed-upon period of time (typically, a year). You do it no matter how bad it gets (unless you’re incapable) because that’s what leadership is. You should not “flake out” because you get bored. Your reputation is on the line.
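As a minimal sketch of the “vested over the expected duration of the project” idea, here is what a simple linear schedule might look like in Python. The bonus amount, duration, and monthly granularity are invented for illustration; nothing here is prescribed above.

```python
# Hypothetical example: a committed-project bonus that vests linearly, month by
# month, over the project's expected duration. All numbers are made up.

def vesting_schedule(total_bonus: float, duration_months: int) -> list[float]:
    """Cumulative amount vested at the end of each month (linear vesting)."""
    per_month = total_bonus / duration_months
    return [round(per_month * (month + 1), 2) for month in range(duration_months)]

if __name__ == "__main__":
    schedule = vesting_schedule(total_bonus=120_000.0, duration_months=12)
    print(schedule[0], schedule[5], schedule[-1])  # 10000.0 60000.0 120000.0
```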

Companies often delegate the important but undesirable work in an awkward way. The manager gets a certain credibility for taking on a grungy project, because he’s usually at a level where he has basic autonomy over his work and what kinds of projects he manages. If he can motivate a team to accomplish it, he gets a lot of credit for taking on the gnarly task. The workers, under closed allocation, get zilch. They were just doing their jobs. The consequence of this is that a lot of bodies end up buried by people who are showing just enough presence to remain in good standing, but putting the bulk of their effort into moving to something better. Usually, it’s new hires without leverage who get staffed on these bad projects.

I’d take a different approach to committed projects. Working on one requires (as the name implies) commitment. You shouldn’t flake out because something more attractive comes along. So only people who’ve proven themselves solid and reliable should be working on (much less leading) them. To work on one (beyond a 20%-time basis) you have to have been at the company for at least a year, senior enough for the leadership to believe that you have the ability to deliver, and in strong standing at the company. Unless they were hired into senior roles, I’d never let junior hires take on a committed project except when absolutely required: too much risk.

How do you fire people? When I was in school, I enjoyed designing and playing with role-playing systems. Modeling a fantasy world is a lot of fun. Once I developed an elaborate health mechanic that differentiated fatigue, injury, pain, blood loss, and “magic fatigue” (which affected magic users) and aggregated them (determining attribute reductions and eventual incapacitation) in what I considered to be a novel way. One small detail I didn’t include was death, so the first question I got was, “How do you die?” Of course, blood loss and injuries could do it. In a no-magic, medieval world, loss of the head is an incapacitating and irreversible injury, and so is exsanguination. However, in a high-magic world, “death” is reversible. Getting roasted, eaten and digested by a dragon might be reversible. But there has to be a possibility (though it doesn’t require a dedicated game mechanic) for a character to actually die in the permanent, create-a-new-character sense of the word. Otherwise there’s no sense of risk in the game: it’s just rolling dice to see how fast you level up. My answer was to leave that decision to the GM. In horror campaigns, senseless death (and better yet, senseless insanity) is part of the environment. It’s a world in which everything is trying to kill you and random shit can end your quest. But in high-fantasy campaigns with magic and cinematic storylines, I’m averse to characters being “killed by the dice”. If the character is at the end of his story arc, or does something inane like putting his head in a dragon’s mouth because he’s level 27 and “can’t be killed”, then he dies for real. Not “0 hit points”, but the end of his earthly existence. Shit happens, but he shouldn’t die just because the player is hapless enough to roll four 1s in a row on a d10.

The major problem with “rank and yank” (stack-ranking with enforced culling rates) and especially closed allocation is that a lot of potentially great employees are killed by the dice. It becomes part of the rhythm of the company for good people to get inappropriate projects or unfair reviews, blow up mailing lists or otherwise damage morale when it pisses them off, then get fired or quit in a huff. Yawn… another one did that this week. As I alluded to in my Valve essay, this is the Welch Effect: the ones who get fired under rank-and-yank policies are rarely low performers, but junior members of macroscopically underperforming teams (who rarely have anything to do with this underperformance). The only way to enforce closed allocation is to fire people who fail to conform to it, but this also means culling the unlucky whose low impact (for which they may not be at fault) looks like malicious noncompliance.

Make no mistake: closed allocation is as much about firing people as guns are about killing people. If people aren’t getting fired, many will work on what they want to anyway (ignoring their main projects) and closed allocation has no teeth. In closed allocation shops, firings become a way for the company to clean up its messes. “We screwed this guy over by putting him on the wrong project; let’s get rid of him before he pisses all over morale.” Firings and pseudo-firings (“performance improvement plans” and transfer blocks and intentional dead-end allocations) become common enough that they’re hard to ignore. People see them, and that they sometimes happen to good people. And they scare people, especially because the default in non-financial tech companies is to fire quickly (“fail fast”) and without severance. It’s a really bad arrangement.

Do open-allocation shops have to fire people? The answer is an obvious “yes”, but it should be damn rare. The general rule of good firing is: mentor subtracters, fire dividers. Subtracters are good-faith employees who aren’t pulling their weight. They try, but they’re not focused or skilled enough to produce work that would justify keeping them on the payroll. Yet. Most employees start as subtracters, and the amount of time it takes to become an adder varies. Most companies try to set guidelines for how long an employee is allowed to take to become an adder (usually about 6 months). I’d advise against setting a firm timeframe, because what’s important is not how fast a person has learned (she might have had a rocky start) but how fast, and more importantly how well, she can learn.

Subtracters are, except in an acute cash crisis when they must be laid off for business reasons, harmless. They contribute microscopically to the burn rate, but they’re usually producing some useful work, and getting better. They’ll be adders and multipliers soon. Dividers are the people who make whole teams (or possibly the whole company) less productive. Unethical people are dividers, but so are people whose work is of such low quality that it creates messes for others, and people whose outsized egos produce conflicts. Long-term (18+ months) subtracters become “passive” dividers because of their effect on morale, and have to be fired for the same reason. Dividers smash morale, and they’re severe culture threats. No matter how rich your company is and how badly you may want not to fire people, you have to get rid of dividers if they don’t reform immediately. Dividers ratchet up their toxicity until they are capable of taking down an entire company. Firing them can be difficult, because many dividers shine as individual contributors (“rock stars”) while taking away far more through their effect on morale, but there’s no other option.

My philosophy of firing is that the decision should be made rarely, swiftly, for objective reasons, and with a severance package sufficient to cover the job search (unless the person did something illegal or formally unethical) that includes non-disclosure, non-litigation and non-disparagement. This isn’t about “rewarding failure”. It’s about limiting risk. When you draft “performance improvement plans” to justify termination without severance, you’re externalizing the cost to people who have to work with a divider who’s only going to get worse post-PIP. Companies escort fired employees out of the building, which is a harsh but necessary risk-limiting measure; but it’s insane to leave a PIP’d employee with access for two months. Moreover, when you cold-fire someone, you’re inviting disparagement, gossip, and lawsuits. Just pay the guy to go away. It’s the cheapest and lowest-variance option. Three months of severance and you never see the guy again. Good. Six months and he speaks highly of you and your company: he had a rocky time, you took care of him, and he’s (probably) better off now. (If you’re tight on money, which most startups are, stay closer to the 3-month mark. You need to keep expenses low more than you need fired employees to be your evangelists. If you’re really tight, replace the severance with a “gardening leave” package that continues his pay only until he starts his next job.)

If you don’t fire dividers, you end up with something that looks a lot like closed allocation. Dividers can be managers (a manager can only be a multiplier or divider, and in my experience, at least half are dividers) or subordinates, but dividers tend to intimidate. Subordinate passive dividers intimidate through non-compliance (they won’t get anything done) while active dividers either use interpersonal aggression or sabotage to threaten or upset people (often for no personal gain). Managerial (or proto-managerial) dividers tend to threaten career adversity (including bad reviews, negative gossip, and termination) in order to force people to put the manager’s career goals above their own. They can’t motivate through leadership, so they do it using intimidation and (if available) authority, and they draw people into captivity to get the work they want done, without paying for it on a fair market (i.e. providing an incentive to do the otherwise undesirable work). At this point, what you have is a closed-allocation company. What this means is that open allocation has to be protected: you do it by firing the threats.

If I were running a company, I think I’d have a 70% first-year “firing” (by which I mean removal from management; I’d allow lateral moves into IC roles for those who desired to do so) rate for titled managers. By “titled manager”, I mean someone with the authority and obligation to participate in dispute resolution, terminations and promotions, and packaging committed projects. Technical leadership opportunities would be available to anyone who could convince people to follow them, but to be a titled people manager you’d have to pass a high bar. (You’d have to be as good at it as I would be, and for 70 to 80 percent of the managers I’ve observed, I’d do a better job.) This high attrition rate would be offset by a few cultural factors and benefits. First, “failing” in the management course wouldn’t be stigmatized because it would be well-understood that most people either end it voluntarily, or aren’t asked to continue. People would be congratulated for trying out, and they’d still be just as eligible to lead projects– if they could convince others to follow. Second, those who aspired specifically to people-management and weren’t selected would be entitled (unless fully terminated for doing something unethical or damaging) to a six-month leave period in which they’d be permitted to represent themselves as employed. That’s what B+ and A- managers would get– the right to remain as individual contributors (at the same rank and pay) and, if they didn’t want that, a severance offer along with a strong reference if they wished to pursue people management in other companies– but not at this one.

Are there benefits to closed allocation? I can answer this with strong confidence. No, not in typical technology companies. None exist. The work that people are “forced” to do is of such low quality that, on balance, I’d say it provides zero expectancy. In commodity labor, poorly motivated employees are about half as productive as average ones, and the best are about twice as productive. Intimidating the degenerate slackers into bringing themselves up to 0.5x from zero makes sense. In white-collar work and especially in technology, those numbers seem to be closer to -5 and +20, not 0.5 and 2.

You need closed (or at least controlled) allocation over engineers if there is material proprietary information where even superficial details would represent, if divulged, an unacceptable breach: millions of dollars lost, company under existential threat, classified information leaked. You impose a “need-to-know” system over everything sensitive. However, this most often requires keeping untrusted people, or simply too many people, out of certain projects (which would be designated as committed projects under open allocation). It doesn’t require keeping people stuck on specific work. Full-on closed allocation is only necessary when there are regulatory requirements that demand it (in some financial cases) or extremely sensitive proprietary secrets involved in most of the work– and comments in public-domain algorithms don’t count (statistical arbitrage strategies do).

What does this mean? Fundamentally, this issue comes down to a simple rule: treat employees like adults, and that’s what they’ll be. Investment banks and hedge funds can’t implement total open allocation, so they make up the difference through high compensation (often at unambiguously adult levels) and prestige (which enables lateral promotions for those who don’t move up quickly). On the other hand, if you’re a tiny startup with 30-year-old executives, you can’t afford banking bonuses, and you don’t have the revolving door into $400k private equity and hedge fund positions that the top banks do, so employee autonomy (open allocation) is the only way for you to do it. If you want adults to work for you, you have to offer autonomy at a level currently considered (even in startups) to be extreme.

If you’re an engineer, you should keep an eye out for open-allocation companies, which will become more numerous as the Valve model proves itself repeatedly and all over the place (it will, because the alternative is a ridiculous and proven failure). Getting good work will improve your skills and, in the long run, your career. So work for open-allocation shops if you can. Or, you can work in a traditional closed-allocation company and hope you get (and continue to get) handed good projects. That means you work for (effectively, if not actually) a bank or a hedge fund, and that’s fine, but you should expect to be compensated accordingly for the reduction in autonomy. If you work for a closed-allocation ad exchange, you’re a hedge-fund trader and you deserve to be paid like one.

If you’re a technology executive, you need to seriously consider open allocation. You owe it to your employees to treat them like adults, and you’ll be pleasantly surprised to find that that’s what they become. You also owe it to your managers to free them from the administrative shit-work (headcount fights, PIPs and terminations) that closed allocation generates. Finally, you owe it to yourself; treat yourself to a company whose culture is actually worth caring about.

XWP vs. JAP

The software industry is a fascinating place. As programmers, we have the best and worst job in the world. What we do is so rewarding and challenging that many of us have been doing it for free since we were eight. We’re paid to think, and to put pure logic into systems that effectively do our bidding. And yet, we have (as a group) extremely low job satisfaction. We don’t last long: our half-life is about six years. By 30, most of us have decided that we want to do something else: management, quantitative finance, or startup entrepreneurship. It’s not programming itself that drives us out, but the ways in which the “programmer” job has been restructured out of our favor. It’s been shaved down, mashed, melted and molded into commodity grunt work, except for the top 2 percent or so of our field (for whom as much time is spent establishing that one is, in fact, in the top 2 percent, as is spent working). Most of us have to sit in endless meetings, follow orders that make no sense, and maintain legacy code with profanity such as “VisitorFactoryFactory” littered about, until we move “into management”, often landing in a role that is just as tedious but carries (slightly) more respect.

I’m reaching a conclusion, and it’s not a pleasant one, about our industry and what one has to do to survive it. My definition of “survive” entails progress, because while it’s relatively easy to coast, engineers who plateau are just waiting to get laid off, and will usually find that demand for their (increasingly out of date) skills has declined. Plainly put, there’s a decision that programmers have to make if they want to get better. Why? Because you only get better if you get good projects, and you only get good projects if you know how to play the game. Last winter, I examined the trajectory of software engineers, and why it seems to flat-line so early. The conclusion I’ve come to is that there are several ceilings, three of which seem visible and obvious, and each requires a certain knack to get past it. Around 0.7 to 0.8 there’s the “weed out” effect that’s rooted in intellectual limitations: inability to grasp pointers, recursion, data in sets, or other intellectual concepts people need to understand if they’re going to be adequate programmers. Most people who hit this ceiling do so in school, and one hopes they don’t become programmers. The next ceiling, which is where the archetypical “5:01” mediocrities live, is around 1.2. This is where you finish up if you just follow orders, don’t care about “functional programming” because you can’t see how it could possibly apply to your job, and generally avoid programming outside of an office context.

The next ceiling is around 1.8, and it’s what I intend to discuss. The 0.7 ceiling is a problem of inability, and at 1.2 it’s an issue of desire and willingness. There are a lot of programmers who don’t have a strong desire to get any better than average. Average is employable, middle-class, comfortable. That keeps a lot of people complacent around 1.2. The ceiling at 1.8, on the other hand, comes from the fact that it’s genuinely hard to get allocated 1.8+ level work, which usually involves technical and architectural leadership. In most companies, there are political battles that the projects’ originators must fight to get them on the map, and others that engineers must involve themselves in if they want to get on to the best projects. It’s messy and hard and ugly and it’s the kind of process that most engineers hate.

Many engineers at the 1.7 to 1.8 level give up on engineering progress and take this ceiling as a call to move into management. It’s a lot harder to ensure a stream of genuinely interesting work than it is to take a middle management position. The dream is that the managerial position will allow the engineer to allocate the best technical work to himself and delegate the crap. The reality is that he’s lucky if he gets 10 hours per week of coding time in, and that managers who cherry-pick the good work and leave the rest to their subordinates are often despised and therefore ineffective.

This said, there’s an idea here, and it deserves attention. The sudden desire to move into management occurs when engineers realize that they won’t progress by just doing their assigned work, and that they need to hack the project allocation process if they want to keep getting better. Managerial authority seems like the most direct route to this because, after all, it’s managers who assign the projects. The problem with that approach is that managerial work requires an entirely different skill set, and that while this is a valid career, it’s probably not what one should pursue if one wants to get better as a software engineer.

How does one hack project allocation? I’m going to introduce a couple terms. The first is J.A.P.: “Just A Programmer”. There are a lot of people in business who see programming as commodity work: that’s why most of our jobs suck. This is a self-perpetuating cycle: because of such people’s attitudes toward programmers, good engineers leave them, leaving them with the bad, and reinforcing their perception that programming is order-following grunt work that needs to be micromanaged or it won’t be done right at all. Their attitude toward the software engineer is that she’s “just a programmer”. Hence the term. There’s a related cliche in the startup world involving MBA-toting “big-picture guys” who “just need a programmer” to do all the technical work in exchange for a tiny sliver of the equity. What they get, in return, is rarely quality.

Worse yet for the business side, commodity programmers aren’t 90 or 70 or 50 percent as valuable as good engineers, but 5 to 10 percent as useful, if that. The major reason for this is that software projects scale horribly in terms of the number of people involved with them. A mediocre engineer might be 20 percent as effective, measured individually, as a good one, but four mediocre engineers will only be about 35 percent as effective as a single good engineer, not the 80 percent that linear scaling would suggest.

Good programmers dread the “Just A Programmer” role, in which they’re assessed on the quantity of code they crank out rather than the problems they solve and the quality of their solutions. They avoid such positions especially because commodity-programmer roles tend to attract ineffective programmers, and effective people who have to work with ineffective programmers become, themselves, ineffective.

This said, a 1.8 engineer is not a “commodity programmer”. At this level, we’re talking about people who are probably in the top 2 or 3 percent of the software industry. We’re talking about people who, in a functioning environment, will deliver high-quality and far-reaching software solutions reliably. They can start from scratch and deliver an excellent “full-stack” solution. (In a dysfunctional environment, they’ll probably fail if they don’t quit first.)  The political difficulty, and it can be extreme, lies with the fact that it’s very difficult for a good engineer to reliably establish (especially to non-technical managers, and to those managers’ favorites who may not be technically strong) that she is good. It turns out that, even if it’s true, you can’t say to your boss, “I’m a top-2% engineer and deserve more autonomy and the best projects” and expect good results. You have to show it, but you can’t show it unless you get good projects.

What this means, in fewer words, is that it’s very difficult for a software engineer to prove he’s not a commodity programmer without hacking the politics. Perversely, many software environments can get into a state where engineering skill becomes negatively correlated with political success. For example, if the coding practices are “best practices”, “design pattern”-ridden Java culture, with FactoryVisitorSingletonSelection patterns all over the place, bad engineers have an advantage on account of being more familiar with damaged software environments, and because singleton directories called “com” don’t piss them off as much (since they never venture outside of an IDE anyway).

Software wants to be a meritocracy, but the sad reality is that the effectiveness of an individual programmer depends on the environment. Drop a 1.8+ engineer into a Visitor-infested Java codebase and he turns into a bumbling idiot, in the same way that an incompetent player at a poker table can fluster experts (who may not be familiar with that particular flavor of incompetence). The result of this is that detecting who the good programmers are, especially for a non-programmer or an inept one, is extremely difficult, if not impossible. The 1.7 to 1.8 level is where software engineers realize that, in spite of their skill, they won’t be recognized as having it unless they can ensure a favorable environment and project allocation, and that it’s next to impossible to guarantee these benefits in the very long run without some kind of political advantage. Credibility as a software engineer alone won’t cut it, because you can’t establish that credibility unless you get good projects.

Enter the “X.W.P.” distinction, which is the alternative to being a “J.A.P.” It means an “X Who Programs”, where X might be an entrepreneur, a researcher, a data scientist, a security specialist, a quant, or possibly even a manager. If you’re an XWP, you can program, and quite possibly full-time, but you have an additional credibility that is rooted in something other than software engineering. Your work clearly isn’t commodity work; you might have a boss, but he doesn’t believe he could do your job better than you can. XWP is the way out. But you also get to code, so it’s the best of both worlds.

This might seem perverse and unusual. At 1.8, the best way to continue improving as a software engineer is not to study software engineering. You might feel like there’s still a lot to learn in that department, and you’re right, but loading up on unrecognized skill is not going to get you anywhere. It leads to bitterness and slow decline. You need something else.

One might think that an XWP is likely to grow as an X but not as a software engineer, but I don’t think that’s necessarily true. There certainly are quants and data scientists and entrepreneurs and game designers who remain mediocre programmers, but they don’t have to. If they want to become good engineers, they have an advantage over vanilla software engineers on account of the enhanced respect accorded their role. If a Chief Data Scientist decides that building a distributed system is the best way to solve a machine learning problem, and he’s willing to roll his sleeves up and write the code, the respect that this gives him will allow him to take the most interesting engineering work. This is how you get 1.8 and 2.1 and 2.4-level engineering work. You start to bill yourself as something other than a software engineer and get the respect that entitles you to projects that will make you better. You find an X and become an X, but you also know your way around a computer. You’re an X, and you know how to code, and your “secret weapon” (secret because management in most companies won’t recognize it) is that you’re really good at it, too.

This, perhaps, is the biggest surprise I’ve encountered in the bizarre world that is the software engineering career. I’m so much at a loss for words that I’ll use someone else’s, from A Song of Ice and Fire: To go west, you must first go east.

What is spaghetti code?

One of the easiest ways for an epithet to lose its value is for it to become over-broad, which causes it to mean little more than “I don’t like this”. Case in point is the term, “spaghetti code”, which people often use interchangeably with “bad code”. The problem is that not all bad code is spaghetti code. Spaghetti code is an especially virulent but specific kind of bad code, and its particular badness is instructive in how we develop software. Why? Because individual people rarely write spaghetti code on their own. Rather, certain styles of development process make it increasingly common as time passes. In order to assess this, it’s important first to address the original context in which “spaghetti code” was defined: the dreaded (and mostly archaic) goto statement.

The goto statement is a simple and powerful control flow mechanism: jump to another point in the code. It’s what a compiled program actually does at the assembly level in order to transfer control, even if the source code is written using more modern structures like loops and functions. Using goto, one can implement whatever control flows one needs. We also generally agree, in 2012, that goto is flat-out inappropriate for source code in most modern programs. Exceptions to this policy exist, but they’re extremely rare. Many modern languages don’t even have it.

Goto statements can make it difficult to reason about code, because if control can bounce about a program, one cannot make guarantees about what state a program is in when it executes a specific piece of code. Goto-based programs can’t easily be broken down into component pieces, because any point in the code can be wormholed to any other. Instead, they devolve into an “everything is everywhere” mess where to understand a piece of the program requires understanding all of it, and the latter becomes flat-out impossible for large programs. Hence the comparison to spaghetti, where following one thread (or noodle) often involves navigating through a large tangle of pasta. You can’t look at a bowl of noodles and see which end connects to which. You’d have to laboriously untangle it.

Spaghetti code is code where “everything is everywhere”, and in which answering simple questions, such as (a) where a certain piece of functionality is implemented, (b) where an object is instantiated and how to create it, or (c) whether a critical section is correct, requires understanding the whole program, because answering even simple questions means pinging relentlessly about the source code. It’s code that is incomprehensible unless one has the discipline to follow each noodle through from one side to the other. That is spaghetti code.

What makes spaghetti code dangerous is that it, unlike other species of bad code, seems to be a common byproduct of software entropy. If code is properly modular but some modules are of low quality, people will fix the bad components if those are important to them. Bad or failed or buggy or slow implementations can be replaced with correct ones while using the same interface. It’s also, frankly, just much easier to define correctness (which one must do in order to have a firm sense of what “a bug” is) over small, independent functions than over a giant codeball designed to do too much stuff. Spaghetti code is evil because (a) it’s a very common subcase of bad code, (b) it’s almost impossible to fix without causing changes in functionality, which will be treated as breakage if people depend on the old behavior (potentially by abusing “sleep” methods, thus letting a performance improvement cause seemingly unrelated bugs!) and (c) it seems, for reasons I’ll get to later, not to be preventable through typical review processes.

The reason I consider it important to differentiate spaghetti code from the superset, “bad code”, is that I think a lot of what makes “bad code” is subjective. A lot of the conflict and flat-out incivility in software collaboration (or the lack thereof) seems to result from the predominantly male tendency to lash out in the face of unskilled creativity (or a perception of such, and in code this is often an extremely biased perception): to beat the pretender to alpha status so badly that he stops pestering us with his incompetent displays. The problem with this behavior pattern is that, well, it’s not useful and it rarely makes people better at what they’re trying to do. It’s just being a prick. There are also a lot of anal-retentive wankbaskets out there who define good and bad programmers based on cosmetic traits so that their definition of “good code” is “code that looks like I wrote it”. I feel like the spaghetti code problem is better-defined in scope than the larger but more subjective problem of “bad code”. We’ll never agree on tabs-versus-spaces, but we all know that spaghetti code is incomprehensible and useless. Moreover, as spaghetti code is an especially common and damaging case of bad code, assessing causes and preventions for this subtype may be generalizable to other categories of bad code.

People usually use “bad code” to mean “ugly code”, but if it’s possible to determine why a piece of code is bad and ugly, and to figure out a plausible fix, it’s already better than most spaghetti code. Spaghetti code is incomprehensible and often unfixable. If you know why you hate a piece of code, it’s already above spaghetti code in quality, since the latter is just featureless gibberish.

What causes spaghetti code? Goto statements were the leading cause of spaghetti code at one time, but goto has fallen so far out of favor that it’s a non-concern. Now the culprit is something else entirely: the modern bastardization of object-oriented programming. Inheritance is an especially bad culprit, and so is premature abstraction: using a parameterized generic with only one use case in mind, or adding unnecessary parameters. I recognize that this claim– that OOP as practiced is spaghetti code– is not a viewpoint without controversy. Nor was it without controversy, at one time, that goto was considered harmful.
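To make “premature abstraction” concrete, here is a small illustrative sketch in Java (my example, not one drawn from any codebase discussed here): the first class parameterizes over types and a context argument that exist only for imagined future use cases, while the second states exactly what the one real use case needs.

    // A hypothetical sketch of premature abstraction: type parameters and a
    // "context" argument added for imagined future use cases, not real ones.
    interface Validator<T, C, R> {
        R validate(T input, C context);
    }

    class EmailValidator implements Validator<String, Void, Boolean> {
        @Override
        public Boolean validate(String input, Void context) {
            return input != null && input.contains("@");
        }
    }

    // The concrete, less "powerful" version: a reader can see at a glance what
    // it does, and can change it without tracing any generic machinery.
    class EmailCheck {
        static boolean isValidEmail(String input) {
            return input != null && input.contains("@");
        }
    }

Neither version is spaghetti on its own; the trouble starts when later maintainers fill in the unused type parameters with assumptions the original author never intended.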

One of the biggest problems in comparative software (that is, the art of comparing approaches, techniques, languages, or platforms) is that most comparisons focus on simple examples. At 20 lines of code, almost nothing shows its evilness, unless it’s contrived to be dastardly. A 20-line program written with goto will usually be quite comprehensible, and might even be easier to reason about than the same program written without goto. At 20 lines, a step-by-step instruction list with some explicit control transfer is a very natural way to envision a program. For a static program (i.e. a platonic form that need never be changed and incurs no maintenance) that can be read in one sitting, that might be a fine way to structure it. At 20,000 lines, the goto-driven program becomes incomprehensible. At 20,000 lines, the goto-driven program has been hacked and expanded and tweaked so many times that the original vision holding the thing together has vanished, and the fact that control can arrive at a piece of code “from anywhere” means that modifying it safely requires confidence about “everywhere”. Everything is everywhere. Not only does this make the code difficult to comprehend, but it means that every modification to the code is likely to make it worse, due to unforeseeable chained consequences. Over time, the software becomes “biological”, by which I mean that it develops behaviors that no one intended but that other software components may depend on in hidden ways.

Goto failed, as a programming language construct, because of these problems imposed by the unrestricted pinging about a program that it created. Less powerful, but therefore more specifically targeted, structures such as procedures, functions, and well-defined data structures came into favor. For the one case where people needed global control flow transfer (error handling), exceptions were developed. This was progress: from the extreme universality and abstraction of a goto-driven program to the concretion and specificity of pieces (such as procedures) solving specific problems. In unstructured programming, you can write a Big Program that does all kinds of stuff, add features on a whim, and alter the flow of the thing as you wish. It doesn’t have to solve “a problem” (so pedestrian…) but it can be a meta-framework with an embedded interpreter! Structured programming encouraged people to factor their programs into specific pieces that solved single problems, and to make those solutions reusable when possible. It was a precursor of the Unix philosophy (do one thing and do it well) and functional programming (make it easy to define precise, mathematical semantics by eschewing global state).

Another thing I’ll say about goto is that it’s rarely needed as a language-level primitive. One could achieve the same effect using a while-loop, a “program counter” variable defined outside that loop which the loop either increments (step) or resets (goto), and a switch-case statement keyed on that counter (see the sketch below). This could, if one wished, be expanded into a giant program that runs as one such loop, but code like this is almost never written, and the fact that it is almost never written indicates that goto is rarely needed. Structured programming thereby points out the insanity of what one is doing when attempting severely non-local control flows.
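Here is a minimal sketch of that emulation (mine, purely for illustration): a while-loop, an explicit program counter, and a switch keyed on it. Each case plays the role of a label, and each assignment to the counter is a step or a goto.

    // Emulating goto with a loop, a "program counter", and a switch.
    // A sketch of the construction described above, not a recommendation.
    public class GotoEmulation {
        public static void main(String[] args) {
            int pc = 0;              // the "program counter"
            int total = 0;
            boolean running = true;
            while (running) {
                switch (pc) {
                    case 0:                           // "label" 0: initialize
                        total = 0;
                        pc = 1;                       // step to label 1
                        break;
                    case 1:                           // "label" 1: do some work
                        total += 10;
                        pc = (total < 50) ? 1 : 2;    // "goto" label 1 or label 2
                        break;
                    case 2:                           // "label" 2: finish
                        System.out.println("total = " + total);
                        running = false;
                        break;
                    default:
                        throw new IllegalStateException("bad pc: " + pc);
                }
            }
        }
    }

Spelling the control flow out this way makes its arbitrariness impossible to ignore, which is exactly why nobody writes programs in this style.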

Still, there was a time when abandoning goto was extremely controversial, and this structured programming idea seemed like faddish nonsense. The objection was: why use functions and procedures when goto is strictly more powerful?

Analogously, why use referentially transparent functions and immutable records when objects are strictly more powerful? An object, after all, can have a method called run or call or apply so it can be a function. It can also have static, constant fields only and be a record. But it can also do a lot more: it can have initializers and finalizers and open recursion and fifty methods if one so chooses. So what’s the fuss about this functional programming nonsense that expects people to build their programs out of things that are much less powerful, like records whose fields never change and whose classes contain no initialization magic?

The answer is that power is not always good. Power, in programming, often advantages the “writer” of code and not the reader, but maintenance (i.e. the need to read code) begins subjectively around 2000 lines or 6 weeks, and objectively once there is more than one developer on a project. On real systems, no one gets to be just a “writer” of code. We’re readers, of our own code and of that written by others. Unreadable code is just not acceptable, and only accepted because there is so much of it and because “best practices” object-oriented programming, as deployed at many software companies, seem to produce it. A more “powerful” abstraction is more general, and therefore less specific, and this means that it’s harder to determine exactly what it’s used for when one has to read the code using it. This is bad enough, but single-writer code usually remains fairly disciplined: the powerful abstraction might have 18 plausible uses, but only one of those is actually used. There’s a singular vision (although usually an undocumented one) that prevents the confusion. The danger sets in when others who are not aware of that vision have to modify the code. Often, their modifications are hacks that implicitly assume one of the other 17 use cases. This, naturally, leads to inconsistencies and those usually result in bugs. Unfortunately, people brought in to fix these bugs have even less clarity about the original vision behind the code, and their modifications are often equally hackish. Spot fixes may occur, but the overall quality of the code declines. This is the spaghettification process. No one ever sits down to write himself a bowl of spaghetti code. It happens through a gradual “stretching” process and there are almost always multiple developers responsible. In software, “slippery slopes” are real and the slippage can occur rapidly.
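As an illustrative sketch of the “strictly more powerful” claim (my code, assuming nothing beyond the standard java.util.function.Function interface): an object can impersonate an immutable record or a referentially transparent function, and the same machinery also lets it carry hidden state and side effects that the reader must now account for.

    import java.util.function.Function;

    // An object posing as an immutable record: constant fields, nothing else.
    final class Point {
        final double x;
        final double y;
        Point(double x, double y) { this.x = x; this.y = y; }
    }

    // An object posing as a referentially transparent function: one method, no state.
    final class Distance implements Function<Point, Double> {
        @Override
        public Double apply(Point p) {
            return Math.sqrt(p.x * p.x + p.y * p.y);
        }
    }

    // The same machinery also permits this: hidden mutable state and a side
    // effect on every call, the extra "power" a reader now has to reason about.
    class LoggingDistance implements Function<Point, Double> {
        private long calls = 0;
        @Override
        public Double apply(Point p) {
            calls++;                                  // mutation on every call
            System.out.println("call #" + calls);     // side effect on every call
            return Math.sqrt(p.x * p.x + p.y * p.y);
        }
    }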

Object-oriented programming, originally designed to prevent spaghetti code, has become (through a “design pattern”-ridden misunderstanding of it) one of the worst sources of it. An “object” can mix code and data freely and conform to any number of interfaces, while a class can be subclassed freely throughout the program. There’s a lot of power in object-oriented programming, and when used with discipline, it can be very effective. But most programmers don’t handle it well, and it seems to turn to spaghetti over time.

One of the problems with spaghetti code is that it forms incrementally, which makes it hard to catch in code review, because each change that leads to “spaghettification” seems, on balance, to be a net positive. The plus is that a change that a manager or customer “needs yesterday” gets in, and the drawback is what looks like a moderate amount of added complexity. Even in the Dark Ages of goto, no one ever sat down and said, “I’m going to write an incomprehensible program with 40 goto statements flowing into the same point.”  The clutter accumulated gradually, while the program’s ownership transferred from one person to another. The same is true of object-oriented spaghetti. There’s no specific point of transition from an original clean design to incomprehensible spaghetti. It happens over time as people abuse the power of object-oriented programming to push through hacks that would make no sense to them if they understood the program they were modifying and if more specific (again, less powerful) abstractions were used. Of course, this also means that fault for spaghettification is everywhere and nowhere at the same time: any individual developer can make a convincing case that his changes weren’t the ones that caused the source code to go to hell. This is part of why large-program software shops (as opposed to small-program Unix philosophy environments) tend to have such vicious politics: no one knows who’s actually at fault for anything.

Incremental code review is great at catching the obvious bad practices, like mixing tabs and spaces, bad variable naming practices, and lines that are too long. That’s why the more cosmetic aspects of “bad code” are less interesting (using a definition of “interesting” synonymous with “worrisome”) than spaghetti code. We already know how to solve them in incremental code review. We can even configure our continuous-integration servers to reject such code. Spaghetti code, for which there is no mechanical definition or automated check, is difficult if not impossible to catch this way. Whole-program review is necessary to catch it, but I’ve seen very few companies willing to invest the time and political will necessary to have actionable whole-program reviews. Over the long term (10+ years) I think it’s next to impossible, except among teams writing life- or mission-critical software, to ensure this high level of discipline in perpetuity.

The answer, I think, is that Big Code just doesn’t work. Dynamic typing falls down in large programs, but static typing fails in a different way. The same is true of object-oriented programming, imperative programming, and to a lesser but still noticeable degree (manifest in the increasing number of threaded state parameters) in functional programming. The problem with “goto” wasn’t that goto was inherently evil, so much as that it allowed code to become Big Code very quickly (i.e. the threshold of incomprehensible “bigness” grew smaller). On the other hand, the frigid-earth reality of Big Code is that there’s “no silver bullet”. Large programs just become incomprehensible. Complexity and bigness aren’t “sometimes undesirable”. They’re always dangerous. Steve Yegge got this one right.

This is why I believe the Unix philosophy is inherently right: programs shouldn’t be vague, squishy things that grow in scope over time and are never really finished. A program should do one thing and do it well. If it becomes large and unwieldy, it’s refactored into pieces: libraries and scripts and compiled executables and data. Ambitious software projects shouldn’t be structured as all-or-nothing single programs, because every programming paradigm and toolset breaks down horribly on those. Instead, such projects should be structured as systems and given the respect typically given to such. This means that attention is paid to fault-tolerance, interchangeability of parts, and communication protocols. It requires more discipline than the haphazard sprawl of big-program development, but it’s worth it. In addition to the obvious advantages inherent in cleaner, more usable code, another benefit is that people actually read code, rather than hacking it as-needed and without understanding what they’re doing. This means that they get better as developers over time, and code quality gets better in the long run.

Ironically, object-oriented programming was originally intended to encourage something looking like small-program development. The original vision behind object-oriented programming was not that people should go and write enormous, complex objects, but that they should use object-oriented discipline when complexity is inevitable. An example of success in this arena is in databases. People demand so much of relational databases in terms of transactional integrity, durability, availability, concurrency and performance that complexity is outright necessary. Databases are complex beasts, and I’ll comment that it has taken the computing world literally decades to get them decent, even with enormous financial incentives to do so. But while a database can be (by necessity) complex, the interface to one (SQL) is much simpler. You don’t usually tell a database what search strategy to use; you write a declarative SELECT statement (describing what the user wants, not how to get it) and let the query optimizer take care of it. 

Databases, I’ll note, are somewhat of an exception to my dislike of Big Code. Their complexity is well-understood as necessary, and there are people willing to devote their careers entirely to mastering it. But people should not have to devote their careers to understanding a typical business application. And they won’t. They’ll leave, accelerating the slide into spaghettification as the code changes hands.

Why Big Code? Why does it exist, in spite of its pitfalls? And why do programmers so quickly break out the object-oriented toolset without asking first if the power and complexity are needed? I think there are several reasons. One is laziness: people would rather learn one set of general-purpose abstractions than study the specific ones and when they are appropriate. Why should anyone learn about linked lists and arrays and all those weird tree structures when we already have ArrayList? Why learn how to program using referentially transparent functions when objects can do the trick (and so much more)? Why learn how to use the command line when modern IDEs can protect you from ever seeing the damn thing? Why learn more than one language when Java is already Turing-complete? Big Code comes from a similar attitude: why break a program down into small modules when modern compilers can easily handle hundreds of thousands of lines of code? Computers don’t care if they’re forced to contend with Big Code, so why should we?

However, more to the point of this, I think, is hubris with a smattering of greed. Big Code comes from a belief that a programming project will be so important and successful that people will just swallow the complexity– the idea that one’s own DSL is going to be as monumental as C or SQL. It also comes from a lack of willingness to declare a problem solved and a program finished even when the meaningful work is complete. It also comes from a misconception about what programming is. Rather than existing to solve well-defined problems and then get out of the way, as programs built under small-program methodology do, Big Code projects become more than that. They often have an overarching and usually impractical “vision” that involves generating software for software’s sake. This becomes a mess, because “vision” in a corporate environment is usually bike-shedding that quickly becomes political. Big Code programs always reflect the political environment that generated them (Conway’s Law) and this means that they invariably look more like collections of parochialisms and inside humor than the more universal languages of mathematics and computer science.

There is another problem in play. Managers love Big Code, because when the programmer-to-program relationship is many-to-one instead of one-to-many, efforts can be tracked and “headcount” can be allocated. Small-program methodology is superior, but it requires trusting the programmers to allocate their time appropriately to more than one problem, and most executive tyrannosaurs aren’t comfortable doing that. Big Code doesn’t actually work, but it gives managers a sense of control over the allocation of technical effort. It also plays into the conflation of bigness and success that managers often make (cf. the interview question for executives, “How many direct reports did you have?”) The long-term spaghettification that results from Big Code is rarely an issue for such managers. They can’t see it happen, and they’re typically promoted away from the project before this becomes an issue.

In sum, spaghetti code is bad code, but not all bad code is spaghetti. Spaghetti is a byproduct of industrial programming that is usually, but not always, an entropic result of too many hands passing over code, and an inevitable outcome of large-program methodologies and the bastardization of “object-oriented programming” that has emerged out of these defective, executive-friendly processes. The antidote to spaghetti is an aggressive and proactive refactoring effort focused on keeping programs small, effective, clean in source code, and most of all, coherent.

Don’t look now, but Valve just humiliated your “corporate culture”.

The game company Valve has gotten a lot of press recently for, among other things, its unusual corporate culture in which employees are free to move to whatever project they choose. There’s no “transfer process” to go through when an employee decides to move to another team. They just move. This is symbolized by placing wheels under each desk. People are free to move as they please. Employees are trusted with their time and energy. And it works.

Surely this can’t work for larger companies, can it? Actually, I’d argue that Valve has found the only solution that actually works. When companies trust their employees to self-organize and allocate their time as they will, the way to make sure that unpleasant but important work gets done is to provide an incentive: a leadership position or a promotion or a bonus to the person who rolls up her sleeves and solves this problem. That’s “expensive”, but it actually works. (Unpleasant and unimportant projects don’t get done, as they shouldn’t.) The more traditional, managerial alternative is to “assign” someone to that project and make it hard for her to transfer until some amount of time has been “served” on the shitty project. There are two problems with this. First, the quality of the work done when someone tells a newcomer, “Complete this or I’ll fire you” is just not nearly as good as the work you get when you tell someone competent, “This project will be unpleasant but it will lock in your next promotion.” Second, it tends to produce mediocrity. The best people have better options than to pay 18 months of dues before having the option (not a guarantee, but the right to apply) to transfer to something better, so the ones who remain and suffer tend to be less talented. Good people are always going to move around in search of the best learning opportunity (that’s how they got to be good) so it’s counterproductive to force them out of the company with a policy that makes it a lot harder to get onto a decent project than to get hired.

Valve actually Solved It with the elegance of a one-line mathematical proof. They’ve won the cultural battle. The wheels under the desk are a beautiful symbol, as well: an awesome fuck-you to every company that thinks providing a Foosball table constitutes a “corporate culture” worth giving a shit about. They’ve also, by demonstration of an alternative, shown a generation of technology workers how terrible their more typical, micromanaged jobs are. What good is an array of cheap perks (8:30 pm pizza!) if people aren’t trusted to choose what to work on and direct their own careers?

I think, however, that Valve’s under-desk wheels have solved an even deeper problem. What causes corporations to decay? A lot of things, but hiring and firing come to mind. Hiring the wrong people is toxic, but there are very few objectively bad employees. Mostly, it comes down to project/person fit. Why not maximize the chance of a successful fit by letting the employee drive? Next, firing. It always does damage. Even when there is no question that it’s the right decision to fire someone, it has undesirable cultural side effects. Taking a short-term, first-order perspective, most companies could, in theory, become more productive if they fired the bottom 30 percent of their workforce. In practice, virtually no company would do this. It would be cultural suicide, and the actual effect on productivity would be catastrophic. So very few companies undertake 30 percent layoffs lightly. Nonetheless, good people do get fired, and it’s not a rare occasion, and it’s incredibly damaging. What drives this? Enter the Welch Effect, named for Jack Welch, the executive who popularized stack-ranking, which answers the question of, “Who is most likely to get fired when a company has to cut people?” The answer: junior people on macroscopically underperforming teams. Why is this suboptimal? Because the junior members of this team are the ones least responsible for its underperformance.

Companies generally allocate bonuses and raises by having the CEO or board divide a pool of money among departments for the VPs to further subdivide, and so on, with leaf-level employees at the end of this propagation chain. The same process is usually used for layoffs, which means that an employee’s chance of getting nicked is a function of the team’s macroscopic performance rather than her individual capability. Junior employees, who rarely make the kinds of major decisions that would result in macroscopic team underperformance, still tend to be the first ones to go. They don’t have the connections within the company or open transfer opportunities that would protect them. It’s not firing “from the top” or the middle or the bottom. It’s firing mostly new people randomly, which destroys the culture rapidly. Once people see a colleague unfairly fired, they tend to distrust that there’s any fairness in the company at all.

Wheels under the desk, in addition to creating a corporate culture actually worth caring about, eliminate the Welch Effect. This inspires people to genuinely work hard and energetically, and makes bad hires and firings rare and transfer battles nonexistent.

Moreover, the Valve way of doing things is the only way, for white-collar work at least, that actually makes sense. Where did companies get the idea that, before anyone is allowed to spend time on it, a project needs to be “allocated” “head count” by a bunch of people who don’t even understand the work being done? I don’t know where that idea comes from, but its time has clearly passed.

4 things you should probably never do at work.

I don’t like lists, and I don’t really like career advice, because both tend to play on peoples’ need for simple answers and to have obvious advice thrown at them telling them what they already know. But here we go. I hope that in addition to these items, readers will be patient enough to find the connecting theme, which I’ll reveal at the end. Here are 4 things one should never do at work. I say this not from a position of moral superiority, having made most of these mistakes in the past, but with the intention of relaying some counterintuitive observations about work and what not to do there, and why not.

1. Seemingly harmless “goofing off”. I’m talking about Farmville and Facebook and CNN and Reddit Politics and possibly even Hacker News. You know, that 21st-century institution of at-work web-surfing. It’s the reason no decent website publishes timestamps on social interaction, instead preferring intervals such as “3 days ago”; being run by decent human beings, they don’t want to “play cop” against everyday bored workers.

I don’t think it’s wise to “goof off” at work. That’s not because I think people are at risk of getting caught (if goofing off were treated as a fireable offense, nearly the whole country would be unemployed). Direct professional consequences for harmless time-wasting are pretty rare, unless it reaches the point (3+ hours per day) where it’s visibly leading to unacceptable work performance. Moreover, I say this not because I’m some apologist trying to encourage people to be good little corporate citizens. That couldn’t be farther from the truth. There’s a counterintuitive and entirely selfish reason why wasting time on the clock is a bad idea: it makes the time-wasters unhappy.

Yes, unhappy. People with boring jobs think that their web-surfing makes their work lives more bearable, but it’s not true. The distractions are often attractive for the first few minutes, but end up being more boring than the work itself.

Here’s the secret of work for most people: it’s not that boring. Completely rote tasks have been automated away, or will be soon. Most people aren’t bored at work because the work is intrinsically boring, but because the low-level but chronic social anxiety inflicted by the office environment (and the subordinate context, which makes everything less enjoyable) impairs concentration and engagement just enough to make the mundane-but-not-really-boring tasks ennui-inducing. It’s not work that makes people unhappy, but the environment.

Working from home is a solution for some people, but if there isn’t pre-existing trust and a positive relationship with the manager, it can cause as many problems as it solves. “The environment” is not defined by the Euclidean metric: it’s the power relationships, more than the noise and crowding, that make most work environments so toxic, and those don’t go away just because of a physical separation– not in the age of telecommunications.

I once read about a study in which subjects were asked to read material amid low-level stressors and distractions; they attributed their poor performance not to the environment but to the material being “boring”, while control-group subjects (who comprehended the material well) found it interesting. In other words, the subjects who suffered a subliminally annoying, office-like environment attributed their lack of focus to “boring” material, when there was no basis for that judgment. They misattributed the problem because the environment wasn’t quite annoying enough to be noticeably distracting. The same thing happens at probably 90 percent of jobs out there. People think it’s the work that bores them, when it’s really the awful environment making it hard to concentrate. Unfortunately, ergonomic consultants and lighting specialists aren’t going to solve this problem. The real issue is the power relationships, and the only long-term solution is for the worker to become so good at what she does as to lose any fear of getting fired– but this takes time, and a hell of a lot of work. No one gets to that point from Farmville.

How does Internet goofing-off play into this? Well, it’s also boring, but in a different way, because there’s no desire to perform. No one actually cares about Reddit karma in the same way they care about getting promoted and not getting fired. This reprieve makes the alternative activity initially attractive, but the unpleasant and stressful environment remains exactly as it was, so boredom sets in again– only a few minutes into the new activity. So a person goes from being bored and minimally productive to being bored and unproductive, which leads to a stress spike come standup time (standup: the poker game where you cannot fold; if you don’t have cards you must bluff) which leads to further low productivity.

Also, actually working (when one is able to do so) speeds up the day. Typical work is interesting enough that a person who becomes engaged in it will notice time passing faster. The stubborn creep of the hours turns rapid. It’s Murphy’s Law: once there’s something in front of you that you actually care about getting done, time will fucking move.

People who fuck around on the Internet at work are lengthening (subjectively speaking) their workdays considerably. Which means they absorb an enhanced dose of office stress, and worse “spikes” of stress out of the fear of being discovered wasting so much time. Since it’s the social anxiety and not the actual work that is making them so fucking miserable, this is a fool’s bargain.

Don’t waste time at work. This isn’t a moral imperative, because I don’t give a shit whether people I’ve never met fuck off at their jobs. It’s practical advice. Doing what “The Man” wants may be selling your soul, but when you subject yourself to 8 hours of low-grade social anxiety whilst doing even more pointless non-work, you’re shoving your soul into a pencil sharpener.

2. Working on side projects. The first point pertains to something everyone has experienced: boredom at work. Even the best jobs have boring components and long days and annoyances, and pretty much every office environment (even at the best companies) sucks. This is fairly mundane. What I think is unique about my approach is the insight that work is always a better antidote for work malaise than non-work. Just work. Just get something done.

People who fuck around on Facebook during the work day don’t have an active dislike for their jobs. They don’t want to “stick it to the man”. They don’t see what they’re doing as subversive or even wrong, because so many people do it. They just think they’re making their boredom go away, while they’re actually making it worse.

Some people, on the other hand, hate their jobs and employers, or just want to “break out”, or feel they have something better to do. Some people have good reasons to feel this way. There’s a solution, which is to look for another job, or to do a side project, or both. But there are some who take a different route, which is to build their side projects while on the job. They write the code and contact clients (sometimes using their corporate email “to be taken more seriously”) while they’re supposed to be doing paid work. This doesn’t always involve “hatred” of the existing employer; sometimes it’s just short-sighted greed and stupidity.

Again, I’m not saying “don’t do this” because I represent corporate stoogery or want to take some moral position. This is a practical issue. Some people get fired for this, but that’s a good outcome compared to what can happen, which is for the company to assert ownership over the work. I’ve seen a couple of people get personally burned for this, having to turn in side projects over which their companies asserted rights for no reason other than spite (the projects weren’t competing projects). They lost their jobs and the project work.

If you have a good idea for a side project at work, write the key insights down on a piece of paper and forget about them until you get home. If you must, do some reading and learning on the clock, but do not use company resources to build and do not try to send code from your work computer to your home machine. Just don’t. If you care about this side project, it’s worth buying your own equipment and getting up at 5:00 am.

3. Voicing inconsequential opinions. The first two “should be” obvious, despite the number of people who fall into those traps. This third one took me a while to learn. It’s not that voicing an opinion at work is bad. It’s good. However, it’s only good if that opinion will have some demonstrable career-improving effect, preferably by influencing a decision. A good (but not always accurate) first-order approximation is to voice an opinion only if there’s a decent chance that the suggestion will be acted upon. This doesn’t mean it’s suicidal to voice an opinion when an alternate decision is made; it does mean you shouldn’t voice an opinion if you know it won’t have any effect on the decision.

No one ever became famous for preventing a plane crash, and no one ever got good marks for trying to prevent a plane crash and failing. There’s no “I told you so” in the corporate world. Those who crashed the plane may be fired or demoted, but they won’t be replaced by the Cassandras who predicted the disaster. (If anything, the Cassandras get blamed for “sabotage”, even when there’s no way they could have caused it.) Instead, they’ll be replaced by other cronies of the powerful, and no one gets to be a crony by complaining.

This rule could be stated as “Don’t gossip”, but I think my formulation goes beyond that. Most of the communication that I’m advising against is not really “office gossip” because it’s socially harmless. Going to lunch and bashing a bad decision by upper management, in a large company, isn’t very likely to have any professional consequences. Upper management doesn’t care what support workers say about them at lunch. But this style of “venting” doesn’t actually make the venters feel better in the long run. People vent because they want to “feel heard” by people who can help them, but most venting that occurs in the workplace is from one non-decision-maker to another.

The problem with venting is that, in 2012, long (8+ year) job tenures are rare, but having one is still an effective way to get a leadership position in many organizations. If nothing else, a long tenure on a resume suggests stability and success, and it can lead to a leadership position at the next company even if one never materialized where the tenure was earned. Now, it can sometimes be advantageous to “job hop”, but most people would be better off getting their promotions in the same company if they’re able to. Long job tenures look good. Short ones don’t. There are good reasons to change jobs, even after a short tenure, but people should always be playing to have the long tenure as an option (even if they don’t plan on taking it). Why speed up the resentment clock?

Also, social intercourse that seems “harmless” may not be. I worked at a company that claimed to be extremely open to criticism and anti-corporate. There was also a huge “misc” mailing list largely dedicated to rants and venting about the (slow but visible) decline of the corporate culture. This was a company with some serious flaws, but on the whole a good one even now; if you got the right project and boss, the big-company decline issues wouldn’t even concern you. In any case, this mailing list seemed inconsequential and harmless… until a higher-up informed me that showing up among the top 10 posters on that list pretty much guaranteed ending up on a PIP (the humiliation process that precedes firing). This was at a company with a 5% cutoff for PIPs, which is a pretty harsh bar at an elite firm, and a mailing-list presence all but guaranteed landing in that bucket.

Opinions and insights, even from non-decision-makers, are information. Information is power. Remember this.

4. Working long hours. This is the big one, and probably unexpected. The first 3? Most people figure them out after a few years; I doubt many readers were surprised by points 1, 2, and 3. So why do people keep making first-grade social mistakes at work? Because they sacrifice too fucking much. When you sacrifice too much, you care too much. When you care too much, you fail socially. When you fail socially, you fail at work (in most environments). And no one ever got out from under a bad performance review or (worse yet) a termination by saying, “But I worked 70 hours per week!”

The “analyst” programs in investment banking are notorious for long hours, and were probably at their worst in 2007, at the peak of the financial bubble. I asked a friend about his 110-hour weeks and how they affected him, and he gave me this explanation: “You don’t need to be brilliant or suave to do it, but you need to be intellectually and socially average– after a double all-nighter.” In other words, it was selection based on the decline curve rather than peak capability.

Some of the best and strongest people have the worst decline curves. Creativity, mental illness, and sensitivity to sleep deprivation are all interlinked. When people start to overwork, the world starts to go fucking nuts. Absurd conflicts that make no sense become commonplace and self-perpetuating.

Unless there’s a clear career benefit to doing so, no one should put more than 40 hours per week into metered work. By “metered” work, I mean work that’s expected specifically in the employment context, under direction from a manager, typically with only a token (or no) interest in the employee’s career growth. And even 40 is high: I use that number only because it’s the standard. Working longer total hours is fine, but only in the context of an obvious career goal: a side project, an above-normal mentorship arrangement, continued learning, or just plain “keeping up” with technology changes. Self-directed career investment should get the surplus hours if you have the energy to work more than 40.

In general, leading the pack in metered work output isn’t beneficial from a career perspective. People don’t get promoted for doing assigned work at 150% volume, but for showing initiative, creativity, and high levels of skill. That requires a different kind of hard work, one that is more self-directed and that takes a long time to pay off. I don’t expect to get immediately promoted for reading a book about programming language theory or machine learning, but I do know that it will make me more capable of hitting the high notes in the future.

Historically, metered work expectations of professionals were about 20 to 25 hours per week. The other 20-25 hours were to be invested in career advancement, networking, and continuing education that would improve the professional’s skill set over time. Unfortunately, the professional world now seems to expect 40 hours of metered work, pushing the “investment” work onto employees’ “own time”. This is suboptimal: it causes a lot of people to change jobs quickly. If you’re full to the brim on metered work, then you’re going to leave your job as soon as you stop learning new things from it (usually after 9 to 24 months). Google attempted to remedy this with “20% time”, but that has largely failed because of the complete authority managers have to destroy their subordinates in “Perf” (which also allows anonymous, unsolicited peer review, an experiment in forcible teledildonics) for using 20% time. (Some Google employees enjoy 20% time, but only with managerial blessing. Which means you have the perk if you have a nice manager… but if you have a good manager, you don’t need formal policies to protect you in the first place. So what good does the policy do?)

Worse yet, when people start working long hours because of social pressure, something perverse happens. People develop huge senses of entitlement and become monstrously unproductive. (After all, if you’re spending 12 hours in the office, what’s 15 minutes on Reddit? That 15 minutes becomes 30, then 120, then 300…) Thus, violations of items #1, #2, and #3 on this list become commonplace. People start spending 14 hours in the office and really working during 3 of them. That’s not good for anyone.

It would be easy to blame this degeneracy pattern on “bad managers”, like the archetypical boss who says, “If you don’t come in on Saturday, then don’t bother coming in on Sunday.” The reality, though, is that it doesn’t take a bad boss to get people working bad hours. Most managers actually know that working obscene hours is ineffective and unhealthy and don’t want to ask for that. Rather, people fall into that pattern whenever sacrifice replaces productivity as the performance measure, and I’ll note that peers are often as influential in assessment as managers. It’s often group pressure rather than managerial edict that leads to the ostracism of “pikers”. When people are in pain, all of this happens very quickly.

Then the “death march” mentality sets in. Fruitless gripes beget fruitless gripes (see #3) and morale plummets. Productivity declines, attracting managerial attention, which often compounds the problem. People seek avoidance patterns and behavioral islands (see #1) that provide short-term relief from the environment that’s falling to pieces, but they do little good in the long term. The smarter ones start looking to set up other opportunities (see #2), but if they get caught, they get the “not a team player” label (a way of saying “I don’t like the guy” that sounds objective), and that’s basically the end of them.

Unless there’s an immediate career benefit in doing so, you’re a chump if you load up on the metered work. You shouldn’t do “the minimum not to get fired” (that bar is low, but don’t flirt with it; stay well north of it). Do enough metered work to fit in: not less, not more. (Either direction of deviation will hurt you socially.) Even 40 hours of metered work is a very high commitment when you consider that the most interesting professions require 10-20 hours per week of unmetered work just to remain connected and current, but it’s the standard, so it’s probably what most people should do. I wouldn’t say it’s wise to go beyond it, and if you are going to make the above-normal sacrifice of a 45+ hour work week, do yourself a favor and sock some learning (and connections) away in unmetered work.

So yeah… don’t work long hours. And something about sunscreen.

Ambition: version 8♦ is now out.

The latest rules for Ambition are here. Mostly, the changes are simplifications to the scoring system in order to increase the game’s mass appeal and, hopefully, virality. I don’t believe that I’ve compromised on the game design in doing so; the intrinsic strategic complexity remains, but the cognitive overload associated with the scoring system has been trimmed back a bit.

I’m planning, after years of delinquency on this matter, to release an online version late this year, but I’ve wanted to get the rules to a steady state before doing so. This iteration, I’m pretty sure, is the final one or close to it, at least as far as the core rules go. There are a few unresolved questions about the scoring system, but I’m going to wait until I have real data (from players, not simulations) before making those calls.

The roadmap from here looks like this. Currently, I’m working on a command-line console version– an embarrassingly minimal “minimum viable product” (MVP)– that I plan on releasing this August. The source code will be open-source and on Github; card games are rather trivial to implement, so there’s no point in hiding that code. The first iteration will be a tutorial (with players making random legal moves) more than a game, designed to help people learn Ambition interactively rather than from a dry rules document.

After “launching” this MVP, the next project will be to create real players for a single-player version. I have a machine-learning approach in mind for the AI that I think will work. That might take a month or two to implement and run (this is purely a weekend side project), but I’d like to have it together by mid-autumn, which means there should be real AI players by then. I have no idea whether they’ll be any good at the game. I may crowd-fund this by creating a Kickstarter project for the AI problem and giving a percentage away to the person who writes the best player.

After that, I’ll start working on the front-end (like, a real app) of the game, noting that most people are not interested in downloading a command-line card game, and also that people prefer to play against real people rather than AI. I’ve been doing back-end software for my whole career so I have no idea what that will entail or how difficult it will be, but I look forward to the learning experience.

Six languages to master.

Eric Raymond, in “How to Become a Hacker”, recommended five languages: C, Java, Python, Perl, and Lisp. Each he recommended for different reasons: Python and Java as introductory languages, C to understand and hack Unix, Perl because of its use in scripting, and Lisp for, to use his words (which are so much better than anything I can come up with), “the profound enlightenment experience you will have when you finally get it. That experience will make you a better programmer for the rest of your days, even if you never actually use LISP itself a lot.”

It’s 2012. Many languages have come to the fore that didn’t exist when Raymond’s essay was written. Others, like Perl, have faded somewhat. What is today’s five-language list? I won’t pretend that my answer is necessarily the best one; it’s biased toward what I know. That said, I think the 5 highest-return languages for people who want to become good engineers are the following, in this order: Python, C, ML, Clojure, and Scala.

Why these 5? Python I include because it’s easy to learn and, in the small, extremely legible. I’d rather not use it for a large system, but people who are just now learning to program are not going to be writing huge systems. They’re not going to be pushing major programs into production. At least, they shouldn’t be. What they should be able to do is scrape a webpage or build a game or investigate an applied math problem and say, “Shit, that’s cool.” Python gets people there quickly, and that motivates them to get deeper into programming. Python is a language that isn’t great at any one thing, but it’s good at one hell of a lot of things. It’s quick to write, legible in the small, and expressive. It allows imperative and functional styles. It has great libraries, and it has strong C bindings for when performance is needed.

People who are getting started in programming want to do things that are macroscopically interesting from a beginner’s perspective. They don’t want to start with algorithms and how compilers work, because none of that is interesting until they’ve learned enough of the computer science that motivates it. Compilers aren’t interesting until you’ve written programs in compiled languages. At the start, people want to write games, scrape webpages, and handle the simple systems tasks that come up. Python is good because it’s relatively easy to do most programming tasks in it.

After Python, C is a good next choice, and not because of its performance. That’s largely irrelevant to whether it will make someone a better programmer (although the confidence about performance that comes with knowing C is quite valuable). C is crucial because a lot of computer science becomes inaccessible if one sticks to higher-level languages (and virtual machines) like Java, C#, and Python. Garbage collection is great, but what is the garbage collector written in? C. As is Unix, notably. I also think C is a better choice than C++. C++ is a mess, and it’s not clear that it’s a good language for more than 1% of the purposes to which it’s put; C, on the other hand, has utterly dominated the mid-level language category. For all its flaws, C is (like SQL among database query languages) a smashing, undisputed success, and for good reasons. The high-level language space is still unsettled, with no clear set of winners, but the mid-level language used to write the runtimes and garbage collectors of those high-level languages is usually C, and will be for some time.

Python and C give a person coverage of the mid- and very-high levels of language abstraction. I’m avoiding including low-level (i.e. assembly) languages because I don’t think any of them have the generalist’s interest that would justify top-5 placement. Familiarity with assembly language and how it basically works is a must, but I don’t think mastery of x86 intricacies is necessary for most programmers.

Once a programmer is fluent in Python and C, we’re talking about someone who can solve most coding problems, but improvement shouldn’t end there. Taste is extremely important, and it’s a lack of taste, rather than a lack of intellectual ability, that has created the abundance of terrible code in existence. Languages can’t force people to learn taste, but a good starting point in that direction is ML: SML, or OCaml with the “O” mostly unused.

ML has been described as a “functional C” for its elegance. It’s fast, and it’s a simple language, but its strong functional programming support makes it extremely powerful. It also forces people to program from the bottom up. Instead of creating vague “objects” that might be hacked into bloated nonsense over the lifespan of a codebase, ML programmers build datatypes (mostly records and discriminated unions, with parameterized types available for polymorphism) out of simpler ones, and use referentially transparent functions as the basic building blocks of most of their programs. This bottom-up structure forces people to build programs on sound principles (rather than the vague, squishy abstractions of badly written object-oriented code), while ML’s high-level capability shows that complex software can be written this way. Python and C teach computer science at the higher and lower levels; ML forces a programmer to learn how to write good code.
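
Here is a minimal sketch, in OCaml, of the bottom-up style I’m describing; the types and names are mine, invented purely for illustration:

    (* A record and a discriminated union, built up from simpler types. *)
    type point = { x : float; y : float }

    type shape =
      | Circle of point * float        (* center and radius *)
      | Rectangle of point * point     (* opposite corners *)

    (* A referentially transparent function over the datatype: same input,
       same output, no hidden state. *)
    let area = function
      | Circle (_, r) -> let pi = 4.0 *. atan 1.0 in pi *. r *. r
      | Rectangle (a, b) -> abs_float ((b.x -. a.x) *. (b.y -. a.y))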

There’s also something philosophical that Python, C, and OCaml tend to share that C++ and Java don’t: a small-program philosophy, which is generally superior. I’ve written at length about the perils of the alternative. In these languages, it’s much less common to drag in the rat’s nest of dependencies associated with large Java projects. As an added bonus, you never have to look at those fucking ugly singleton directories called “com”. Once a person has used these three languages to a significant extent, he or she gets a strong sense of how small-program development works and why an immodular, large-project orientation is generally a bad thing.

When you write C or OCaml or Python, you get used to writing whole programs that accomplish something. There’s a problem, you solve it, and you’re done. Now you have a script, or a library, or a long-running executable. You may come back to improve it, but in general you move on to something else, while the solution you’ve created adds to the total stored value of your code repository. That’s what’s great about small-program development: problems are actually solved and work is actually “done”, rather than recursively leading to more work without any introspection on whether the features being piled onto the work queue make sense. Developers who only experience large-program development– working on huge, immodular Java projects in IDEs a million metaphorical miles from where the code actually runs– never get this experience of actually finishing a whole program.

Once a person has grasped ML, we’re talking about a seriously capable programmer, even though ML isn’t a complex language. Somewhere in the middle of one’s ML career, one learns a point to which I’ll return soon, but for now leave hanging: types are interesting. One of the most important things to learn from ML is how to use the type system to enforce program correctness: it generates a massive suite of implicit unit tests that (a) never have to be written, and (b) don’t contribute to codebase size. (Any decent programmer knows that “lines of code” represent expenditure, not accomplishment.)
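
To make the “implicit unit tests” point concrete, here is a small hypothetical OCaml example of my own: by encoding the unit in the type, a whole class of mixing bugs becomes impossible to compile, rather than something a test suite has to catch.

    (* Tagging a value with its unit makes silent mixing a type error. *)
    type temperature =
      | Celsius of float
      | Fahrenheit of float

    let to_celsius = function
      | Celsius c -> c
      | Fahrenheit f -> (f -. 32.0) *. 5.0 /. 9.0

    (* Every comparison is forced through one conversion point; the
       "mixed units" bug can't be written, so no test for it is needed. *)
    let warmer a b = to_celsius a > to_celsius b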

The fourth language to learn is Clojure, a Lisp that happens to run on the JVM. The JVM has its warts, but it’s powerful, there are a lot of good reasons to learn that ecosystem, and Clojure is a great entry point into it. A lot of exciting work is being done on the JVM, and languages like Clojure and Scala keep some excellent programmers interested in it. Clojure is an excellent Lisp, and with its interactive REPL (read-eval-print loop) and extremely strong expressivity, it is (ironically) arguably the best way to learn Java. It has an outstanding community, a strong feature set, and some excellent code in the open-source world.

Lisp is also of strong fundamental importance, because its macro system is unlike anything else in any other language and will fundamentally alter how an engineer thinks about software, and because Lisp encourages people to use a very expressive style. It’s also an extremely productive language: large amounts of functionality can be delivered in a small amount of time. Lisp is a great language for learning the fundamentals of computing, and that’s one reason why Scheme has been traditionally used in education. (However, I’d probably advocate starting with Python because it’s easier to get to “real stuff” quickly in it. Structure and Interpretation of Computer Programs and Scheme should be presented when people know they’re actually interested in computing itself.)

When one’s writing large systems, Lisp isn’t the best choice, because interfaces matter at that point, and there’s a danger that people will play fast-and-loose with interfaces (passing nested maps and lists and expecting the other side to understand the encoding) in a way that can be toxic. Lisp is great if you trust the developers working on the project, but (sadly) I don’t think many companies remain in such a state as they grow to scale.

Also, static typing is a feature, not a drawback. Used correctly, static typing can make code more clear (by specifying interfaces) and more robust, in addition to the faster performance usually available in compiled, statically typed languages. ML and Haskell (which I didn’t list, but it’s a great language in its own right) can teach a person how to use static typing well.
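
As a hedged illustration of “types as interface” (an invented module, not anyone’s real API), an ML-family signature does the job a prose contract would otherwise have to do:

    (* The signature is the interface: it states exactly what callers may
       rely on, independent of the implementation underneath. *)
    module type COUNTER = sig
      type t
      val zero : t
      val incr : t -> t
      val value : t -> int
    end

    module Counter : COUNTER = struct
      type t = int
      let zero = 0
      let incr c = c + 1
      let value c = c
    end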

So, after Lisp, the 5th language to master is Scala. Why Scala, after learning all those others and having more than enough tools to program in interesting ways? First, it has an incredible amount of depth in its type system, which attempts to unify the philosophies of ML and Java and (in my opinion) does a damn impressive job. The first half of Types and Programming Languages is, roughly speaking, the theoretical substrate of ML, but ML doesn’t have a lot of the finer features. It doesn’t have subtyping, for example. Also, the uniqueness constraint on record and discriminated-union labels (necessary for full Hindley-Milner inference, but still painful) can have a negative effect on the way people write code. The second half of TAPL, which vanilla ML doesn’t really support, is realized in Scala. Second, I think Scala is the language that will salvage the 5 percent of object-oriented programming that is actually useful and interesting, while providing functional features powerful enough that the remaining 95% can be sloughed away. The salvage project in which a generation of elite programmers selects what works from a variety of programming styles– functional, object-oriented, actor-driven, imperative– and discards what doesn’t is going to happen in Scala. So this is a great opportunity to see first-hand what works in language design and what doesn’t.

Scala’s a great language that also requires taste and care, because it’s so powerful. I don’t agree with the detractors who claim it’s at risk of turning into C++, but it definitely provides enough rope for a person to hang himself by the monads.

What’s most impressive about Clojure and Scala is their communities. An enormous amount of innovation, not only in libraries but also in language design, is coming out of these two languages, which have now absorbed much of the top Java talent. There is a slight danger of Java-culture creep in them, and Scala best practices (expected by the leading build environments) do, to my chagrin, involve directories called “src” and “main” and even seem to encourage singleton directories called “com”, but I’m willing to call this a superficial loss; otherwise, the right side seems to be winning.

Now… I mentioned “six languages” in this post’s title but named five. The sixth is one that very few programmers are willing to use in source code: English. (Or, I should say, the natural language favored for technical work in one’s locale.) Specifically, technical English, which requires rigor as well as clarity and taste. Written communication. This is more important than all of the others, by far. To be clear, I’m not complaining that software engineers are bad at writing. Competence is not the problem. Anyone smart enough to learn C++ or the finer points of Lisp is more than intelligent enough to communicate in a reasonable way. I’m not asking people to write prose that would make Faulkner cry; I’m asking them to explain the technical assets they’ve created with, at the least, the depth and rigor expected of a B+ undergraduate paper. The lack of writing in software isn’t an issue of capability; it’s one of laziness.

Here’s one you hear sometimes: “The code is self-documenting.” Bullshit. It’s great when code can be self-documenting, making comments unnecessary, but it’s pretty damn rare to be solving a problem so simple that the code responsible for solving it is actually self-explanatory. Most problems are custom problems that require documentation of what is being solved, why, and how. People need to know, when they read code, what they’re looking at; otherwise, they’re going to waste a massive amount of time on details that aren’t relevant. Documentation should not become a crutch– you should still do the other important things, like avoiding long functions and huge classes– but it is essential to write about what you’re doing. People need to stop thinking of software as machinery that “explains itself” and start treating it more like a paper, with explanations for humans alongside the code that does the work.

One of the biggest errors I encounter with regard to commenting is the tendency to comment minutiae while ignoring the big picture. There might be a 900-line program with a couple of comments saying, “I’m doing it this way because it’s O(n) instead of O(n^2)” or “TODO: remove hard-coded filename”, but nothing that explains why those 900 lines of code exist. Who does that help? Such comments are useless to people who don’t understand what’s happening at all, which they generally won’t in the face of inadequate documentation. Code is much easier to read when one knows what one is looking at, and micro-comments on tiny details that seemed important when the code was written are not helpful.
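
By way of an invented example (the file and its backstory are hypothetical), here’s the kind of header comment I mean: it says what the code is for and why it exists, and leaves line-level comments for genuinely surprising details.

    (* dedupe.ml

       Reads log records (one per line) and writes them back out with exact
       duplicates removed, preserving first-seen order. This exists because
       the upstream collector occasionally re-sends whole batches after a
       timeout, and downstream billing must not count them twice. *)

    let dedupe lines =
      let seen = Hashtbl.create 1024 in
      List.filter
        (fun line ->
           if Hashtbl.mem seen line then false
           else (Hashtbl.add seen line (); true))
        lines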

Comments are like static typing: underrated, if not actively disliked, because so few people use them properly, but very powerful (when used with taste) in making code and systems actually legible and reusable. Most real-world code, unfortunately, isn’t this way. My experience is that about 5 to 10 percent of the code in a typical codebase is legible, and quite possibly only 1 percent is enjoyable to read (which good code truly is). The purpose of a comment is not only to explain minutiae or justify weird-looking code. Comments should also ensure that people always know what they’re actually looking at.

The fallacy that leads to a lack of respect for documentation is that writing code is like building a car or some other well-understood mechanical system. Cars don’t come with a bunch of labels on all the pieces, because cars are fairly similar under the hood and a decent mechanic can figure out what is what. With software, it’s different. Software exists to solve a new problem; if it were solving an old problem, old software could be used. Thus, no two software solutions are going to be the same. In fact, programs tend to be radically different from one another. Software needs to be documented because every software project is inherently different, at least in some respects, from all the others.

There’s another problem, and it’s deep. The 1990s saw an effort, starting with Microsoft’s Visual Studio, to commoditize programmers. The vision was that, instead of programming being the province of highly paid, elite specialists with a history of not working well with authority, software could be built by bolting together huge teams of mediocre, “commodity” developers and directing them with traditional (i.e. pre-Cambrian) management techniques. This has begun to fail, but not before hijacking object-oriented programming, turning Java’s culture poisonous, and creating some of the most horrendous spaghetti code (MudballVisitorFactoryFactory) the world has ever seen. Incidentally, Microsoft is now doing penance by having its elite research division investigate functional programming in a major way, the results being F# and a much-improved C#. Microsoft, on the whole, may be doomed to mediocrity, but it clearly has a research division that “gets it” in an impressive way. Still, that strikes me as too little, too late. The damage has been done, and the legacy of the commodity-developer apocalypse sticks around.

The result of the commodity-programmer world is the write-only code culture that is the major flaw of siloed, large-program development. That, I think, is the fundamental problem with Java-the-culture, IDE reliance, and the general lack of curiosity observed (and encouraged) among the bottom 80 percent of programmers. To improve as programmers, people need to read code and understand it, in order to get a sense of what good and bad code even are, but almost no one actually reads code anymore; IDEs take care of that. I’m not going to bash IDEs too hard, because they’re pretty much essential for reading a typical Java codebase, but IDE culture is, on the whole, a major failure that makes borderline-employable programmers out of people who never should have gotten in in the first place.

Another problem with IDE culture is that the environment becomes extremely high-maintenance, between plugins that often don’t work well together, build-system idiosyncrasies that accumulate over time, and the various menu-navigation chores necessary to keep the environment sane (as opposed to command-line chores, which are easily automated). Yes, IDEs do the job: bad code becomes navigable, and commodity developers (who are terrified of the command line and would prefer not to know what “build systems” or “version control” even are) can crank out a few thousand lines of code per year. However, the high-maintenance environment requires a lot of setup work, and I think this is culturally poisonous. Why?

For contrast, in the command-line world, you solve your own problems. You figure out how to download software (at the command line using wget, not by clicking a button) and install it. Maybe it takes a day to figure out how to set up your environment, but once you’ve suffered through this, you actually know a few things (and you usually learn cool things orthogonal to the problem you were originally trying to solve). When a task gets repetitive, you figure out how to automate it. You write a script. That’s great. People actually learn about the systems they’re using. In IDE culture, on the other hand, you don’t solve your own problems because you can’t; in that world, it would take too long. In the big-program world, software too complex for people to solve their own problems is allowed to exist. Instead of figuring it out on your own, you flag down someone who understands the damn thing, or you take a screenshot of the indecipherable error box that popped up and send it to your support team. This is probably efficient from a corporate perspective, but it doesn’t help people become better programmers over time.

IDE culture also creates a class of programmers who don’t work with technology outside of the office– the archetypal “5:01 developers”– because they get the idea that writing code requires an IDE (worse yet, an IDE tuned exactly in line with the customs of their work environment). If you’re IDE-dependent, you can’t write code outside of a corporate environment, because when you go home, you don’t have a huge support team to set the damn thing up in a way that you’re used to and fix things when the 22 plugins and dependencies that you’ve installed interact badly.

There are a lot of things wrong with IDE culture, and I’ve only scratched the surface, but the enabling of write-only code creation is a major sticking point. I won’t pretend that bad code began with IDEs because that’s almost certainly not true. I will say that the software industry is in a vicious cycle, which the commodity-developer initiative exacerbated. Because most codebases are terrible, people don’t read them. Because “no one reads code anymore”, the bulk of engineers never get better, and continue to write bad code.

Software has gone through a few phases of what it means for code to actually be “turned in” as acceptable work. Phase 1 is when a company decides that it’s no longer acceptable to hoard personal codebases (that might not even be backed up!) and mandates that people check their work into version control. Thankfully, almost all companies have reached that stage of development; version control is no longer seen as “subversive” by typical corporate upper management. It’s now typical. The second phase is when a company mandates that code have unit tests before it can be relied upon, and that a coding project isn’t done until it has tests. Companies are reaching this conclusion. The third milestone of code-civilizational development, which very few companies have reached, is that the code isn’t done until you’ve taught users how to use it (and how to interact with it, i.e. instantiate the program and run it or send messages to it, in a read-eval-print loop appropriate to the language). That teaching can be supplied at a higher level in wikis, codelabs, and courses… but it also needs to be included with the source code. Otherwise, it’s code out of context, which becomes illegible after a hundred thousand lines or so. Even if the code is otherwise good, out-of-context code without clear entry points and big-picture documentation becomes incomprehensible around that point.

What do I not recommend? There’s no language that I’d say is categorically not worth learning, but I do not recommend becoming immersed in Java (except well enough to understand the innards of Clojure and Scala). The language is inexpressive, but the language isn’t the real problem; in fact, I’d say it’s unambiguously a good thing for an engineer to learn how the JVM works. The problem is Java-the-culture (VisitorSelectionFactories, pointless premature abstraction, singleton directories called “com” that betray dripping contempt for the command line and the Unix philosophy, and build environments so borked that it’s impossible not to rely on an IDE), which is so toxic that it reduces an engineer’s IQ by 2 points per month.

For each of these five programming languages, I’d say that a year of exposure is ideal and probably enough for generalist knowledge, although it takes more than a year to actually master any of them. Use and improvement of written communication, on the other hand, deserves more. That’s a lifelong process, and far too important for a person not to start early. Learning new programming languages and, through them, new ways of solving problems is important; but the ability to communicate what problem one has solved is paramount.

Ambition and what it taught me: the 4-factor model.

Nine years ago, I came up with a card game called Ambition, in which I attempted to remove card-luck from trick-taking. This turned out to be a surprisingly difficult (and very fun) design problem. To give a 50,000-foot overview: in the original game, the goal was to get a middling score each round, making the objective more about manipulating the flow of the game (and the players) than about taking tricks (as in Bridge) or avoiding them (as in Hearts). The original game had only the middling objective, but as with Hearts and its “shooting the moon” reversal, I had to add high-risk, high-reward strategies for very strong (Slam) and very weak (Nil) hands. What I ended up building was a game in which card-luck has a very small influence, because every hand has a credible strategy.

I’ve estimated that card-luck produces about 5% of the variation in a typical 2-hour game. (This could be reduced to 3-4% by reducing the Slam bonus, but that would also make the game less fun, so what would be the point?) For a trick-taking game, this is rare. Now, Bridge is an immensely skillful game, but it’s got a lot of card luck in the short term. For this reason, Bridge players duplicate the game in serious settings, which means that they play the same hands as everyone else in the club and are scored on their relative performance. A typical Bridge tournament might have 20 teams– or 40 people. I don’t think there are 40 Ambition players in a given state at any time, so duplication’s not an option.

Why did I want to eliminate card luck from a trick-taking game? The short version of the story is that I had caught the German board game bug, but I was in Budapest for a semester (at this program) and had only a deck of cards. I’d fallen in love with the German design aesthetic. Also, experience had led me to conclude that the games regarded as the most interesting, and the ones that become culturally important, tend to be skillful games. Go, Chess, and Bridge are all very deep and skillful, which makes their outcomes meaningful and indicative of genuine skill (in a word, decisive). Poker becomes skillful with enough patience; viewed as one game played over a person’s life, it converges, as most games will. This led down the rabbit hole of “luck/skill balance”. What is it? Oddly enough, I concluded that it doesn’t exist, at least not when framed as a linear dichotomy.

The idea of “luck vs. skill” places Go (a very deep, skillful game) at one end of a continuum and Bingo (which is pure chance) at the other. As the ideology goes, luck games are cotton-candy junk food, while skill games are, if a bit dry, respectable and rewarding. Supporting this is the fact that the culturally successful, long-lived “mind sports” tend to be highly skillful, which seems to imply that if you want to design a “good” game, you should aim to get rid of luck.

The problem with the luck/skill dichotomy is that there are a number of game mechanics it fails to model. For a trivial example, Rock, Paper, Scissors contains no randomizer but, at one iteration, is effectively “random”, because it presents simultaneous decision-making with a perfectly symmetrical strategic shape (i.e. no strategy is functionally different from any other). So if a single iteration is effectively a luck game, what about the iterated version? Is the long-term game luck-driven or skillful? That’s a surprisingly hard question to answer, even theoretically. For a more practical example, consider multi-player German-style favorites like Puerto Rico, an excellent game sometimes criticized for the influence of table position (i.e. the difference between sitting to the left vs. the right of the best player can have a measurable effect on one’s outcome). There are almost no random elements in this game, but play order becomes an influence. Is that aspect– knowing where to sit– luck or skill? (Answer: it’s meta-game.)

But the biggest problem with the luck/skill dichotomy is that it breaks down completely when there are more than 2 players. In a 3-player game, an unpredictable, unconventional, or outright incompetent player can make strategic choices that favor one opponent over the other– an effect deriving neither from a truly random element of the game (such as dice or a deck of cards) nor from those players’ individual levels of skill. This “interaction term” is strategy: a mix of luck and skill inherent in simultaneous, multi-agent decision making.
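
A quick, toy sanity check of the Rock, Paper, Scissors point (a sketch of my own, nothing more): against a uniformly random opponent, every strategy in the one-shot game has the same expected payoff, which is exactly what makes a single iteration effectively a luck game.

    type move = Rock | Paper | Scissors

    (* Zero-sum payoff from the first player's point of view. *)
    let payoff a b = match a, b with
      | Rock, Scissors | Paper, Rock | Scissors, Paper -> 1.0
      | Scissors, Rock | Rock, Paper | Paper, Scissors -> -1.0
      | _ -> 0.0

    (* Expected payoff of a pure strategy against a uniformly random opponent. *)
    let expected a =
      List.fold_left (fun acc b -> acc +. payoff a b /. 3.0) 0.0
        [Rock; Paper; Scissors]

    let () =
      List.iter (fun m -> Printf.printf "%.2f " (expected m))
        [Rock; Paper; Scissors]   (* prints 0.00 0.00 0.00 *)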

The difference between a demonstration of skill and “strategic luck” is that the former will generally affect opponents’ outcomes in a non-biased way. If Alice does something that gives her an advantage over Bob and Eve both, she’s playing skillfully, not getting lucky. If she does something that unintentionally or chaotically gives Bob an advantage over Eve and Bob wins, that’s strategic luck favoring Bob.

In two-party games, there is no strategic luck. If the opponent’s strategy causes one to lose, that was (by definition) skillful play, not strategic interference. The same applies to two-team games like Bridge: a partner’s “friendly” strategic luck is really team skill.

However, in games of 3 or more players, it’s pretty much impossible to eliminate strategic luck (not that I’m convinced it would be desirable to do so). This is reminiscent of Arrow’s Impossibility Theorem, which states that it’s impossible to design a “perfectly fair” ranked voting system, where “fair” includes the condition that the presence or absence of a candidate C should not affect the relative performance of A and B (i.e. no “Nader effect”). Games with three or more players face an inherent trade-off between (a) restricting interactions between players, making the game less fun, and (b) encouraging them but introducing strategic luck. So with large groups, it’s often better for a game designer to own the strategic luck and make the alliance-forming (and alliance-breaking) aspects a core element, as with Diplomacy or Apples to Apples.

This may be why the games that develop a mind-sport culture always seem to be 2-party games. A game of 3 or more players without strategic luck would have to be structured too much like “multiplayer solitaire” to be fun, but one with strategic luck is unlikely to develop a tournament scene, because the cultural purpose of tournaments is to determine “the best” player. (When there’s strategic luck, the best player can be undefined. Alice may be superior to Bob when Carla sits at the table, while Bob is better than Alice when Dan is playing.)

As for Ambition, I removed the card luck but introduced some strategic luck: a “bad” hand can lead to a great outcome based on unrelated prior decisions by other players, and that strategic luck is noticeable. This makes Ambition not quite like Go or Chess, where a superior player can expect to win 95+ percent of the time, and more like a German-style game, where pure chance factors are uncommon (you rarely feel “screwed” by the cards) but strategic luck is tolerated. And that’s fine. It adds to the fun, and it’s a casual game, after all.

Luck, skill, and strategy are 3 factors that determine players’ outcomes in a game. Pure chance elements can be isolated and assessed mathematically. Skill can usually be quantified by observing players’ outcomes and adjusting ratings, as with the Elo system. As for strategy? It’s impossible to quantify this element in a general way, because the strategic variables within a game are, in some sense, the spinal shape of the game itself. Chance can be analyzed through statistical means, but there’s no general-purpose way to measure strategic luck. I’m not sure I can even precisely define it.
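
For the curious, here is roughly what an Elo-style skill update looks like; this is a generic sketch with a fixed K-factor, not anything specific to Ambition.

    (* Expected score of a player rated ra against one rated rb. *)
    let expected ra rb = 1.0 /. (1.0 +. 10.0 ** ((rb -. ra) /. 400.0))

    (* New rating after one game: score is 1.0 for a win, 0.5 for a draw,
       0.0 for a loss. The K-factor controls how fast ratings move. *)
    let update ?(k = 32.0) ra rb score = ra +. k *. (score -. expected ra rb)

    let () =
      (* e.g. a 1500-rated player beats a 1600-rated player *)
      Printf.printf "%.1f\n" (update 1500.0 1600.0 1.0)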

I said there would be 4 factors, so what’s the fourth? The most interesting one, which I call flux. To explain flux, consider one important observation pertaining to supposedly “purely skillful” games: they don’t have the same outcome every time they’re played. If they did, they’d actually be frustrating and boring, even for nearly exactly matched players. Thankfully, that’s not the case. Alice defeating Bob does not mean that Alice will always beat Bob. What this means is that there’s something subtle– an energy– that makes the game a real contest when it’s played between players who are close in skill level.

Flux is minute-to-minute variation in a player’s skill and strategic effectiveness. Positive flux is being “in flow”– the state of consciousness that makes games (and programming, and writing, and many other things) fun. It’s a state of mind in which a person has above-normal concentration, confidence, ability to assess risk, and effectiveness in execution. Negative flux is the opposite, and it’s described by poker players as being “on tilt”. It’s being out of flow. When players of equal or near-equal skill compete, it’s often flux that determines the winner. And that’s what makes such contests exciting– the fact that the game is skillful and decisive (so the outcome actually matters) but that, because the contestants are close in skill level, the end-of-game binary outcome (winning vs. losing) is going to be determined by minute-to-minute fluctuations in animal energies. Flow. Flux. “The zone.”

Luck, skill, and strategy are all important tools in a game designer’s arsenal as he pursues his design goal (which is not to land at a targeted point on some bullshit “luck/skill continuum”, but to design a game that’s fun to play). Luck gives more players a chance at winning. Skillful elements make the game decisive and its outcomes more meaningful. Strategy, on the other hand, is what makes multiplayer games interactive and social. All of these elements can be quite effective at making a game fun. But it’s the tense, real-time drama of flux, as players go into and drop out of flow, that really makes a game interesting.

Don’t waste your time in crappy startup jobs.

What I’m about to say is true now, as of July 2012. It wasn’t necessarily true 15 years ago, and it may not be true next year. Right now, for most people, it’s utterly correct– enough so that I feel compelled to say it. The current VC-funded startup scene, which I’ve affectionately started calling “VC-istan”, is, to put it bluntly, a total waste of time for most of the people involved.

Startups. For all the glamour and “sexiness” associated with the concept, the truth is that startups are no more and no less than what they sound like: new, growing businesses. There are a variety of good and bad reasons to join or start businesses, but for most of human history, it wasn’t viewed as a “sexy” process. Getting incorporated, setting up a payroll system, and hiring accountants are just not inspiring duties for most people. They’re mundane tasks that people are more than willing to do in pursuit of an important goal, but starting a business has not typically been considered inherently “sexy”. What changed, after about 1996, is that people started seeing “startups” as an end in themselves. Rather than an awkward growth phase for an emerging, risky business, “startup” became a lifestyle. This was all fine because, for decades, positions at established businesses were systematically overvalued by young talent, and those at growing small companies were undervalued. It made economic sense for ambitious young people to brave the risk of a startup, and so the savviest talent gravitated toward startups, where they had access to responsibilities and career options that they’d have had to wait years for in a more traditional setting.

Now, the reverse seems to be true. In 1995, a lot of talented young people went into large corporations because they saw no other option in the private sector– when, in fact, there were credible alternatives, startups being a great one. In 2012, a lot of young talent is going into startups for the same reason: a belief that it’s the only legitimate opportunity for top talent, and that their careers are likely to stagnate if they work in more established businesses. They’re wrong, I think, and this mistaken belief allows them to be taken advantage of. The typical equity offer for a software engineer falls dismally short of what he’s giving up in reduced salary, and the career path offered by startups is not always what it’s made out to be.

For all this, I don’t intend to argue that people shouldn’t join startups. If the offer’s good, and the job looks interesting, it’s worth trying out. I just don’t think that the current, unconditional “startups are awesome!” mentality serves us well. It’s not good for any of us, because there’s no tyrant worse than a peer selling himself short, and right now there are a lot of great people selling themselves very short for a shot at the “startup experience”– whatever that is.

Here are 7 misconceptions about startups that I’d like to dispel.

1. A startup will make you rich. True, for founders, whose equity shares are measured in points. Not true for most employees, who are offered dimes or pennies.

Most equity offerings for engineers are, quite frankly, tiny. A “nickel” (0.05 percent) of an 80-person business is nothing to write home about. It’s not partnership or ownership. Most engineers have the mistaken belief that the initial offering is only a teaser, and that it will be improved once they “prove themselves”, but it’s pretty rare that this actually happens.

Moreover, raises and bonuses are very uncommon in startups. It’s typical for high performers to be making the same salary after 3 years as they earned when they started. (What happens to low performers, and to high performers who fail politically? They get fired, often with no warning or severance.) Substantial equity improvements are even rarer. When things are going well in a startup, the valuation of the equity package is increasing and that is the raise. When things are going badly, that’s the wrong time to be asking for anything.

There are exceptions. One is that, if the company finds itself in very tough straits and can’t afford to pay salaries at all, it will usually grant more equity to employees to make up for the direct economic hardship it’s causing them. This isn’t a good situation, because the equity is usually offered at-valuation (more specifically, at the valuation of the last funding round, when the company was probably in better shape) and employees would typically be better off with the cash. Another is that it’s not unusual for a company to “refresh” or lengthen a vesting period with a proportionate increase. A 0.1% grant, vesting over four years, can be viewed as compensation at 0.025% per year, and a company may well continue that rate in the years after the initial grant, so a person staying six years might get up to 0.15%. What is atypical is for an employee brought in at 0.1% to be raised to 1% because of good performance. The only time that happens is when there’s a promotion involved, and internal promotions (more on this later) are surprisingly rare in startups.

2. The “actual” valuation is several times the official one. This is a common line, repeated both by companies in recruiting and by engineers justifying their decision to work for a startup. (“My total comp. is actually $250,000 because the startup really should be worth $5 billion.”) People love to think they’re smarter than markets. Usually, they aren’t. Moreover, the few who are capable of being smarter than markets are not taking (or trying to convince others to take) junior-level positions where the equity allotment is 0.05% of an unproven business. People who’ve legitimately developed that skill (of reliably outguessing markets) deal at a much higher level than that.

So, when someone says, “the actual valuation should be… “, it’s reasonable to conclude with high probability that this person doesn’t know what the fuck he or she is talking about.

In fact, an engineer’s individual valuation should, by rights, be substantially lower than the valuation at which the round of funding is made. When a VC offers $10 million for 20% of a business, the firm is stating that it believes the company (pre-money) is worth $40 million to them. Now, startup equity is always worth strictly more (and by a substantial amount) to a VC than it is worth to an engineer. So the fair economic value (for an engineer) of a 0.1% slice is probably not $40,000. It might be $10-20,000.

There are several reasons for this disparity of value. First, the VC’s stake gives them control. It gives them board seats, influence over senior management, and the opportunity to hand out a few executive positions to their children or to people whom they owe favors. An engineer’s 0.1% slice, vesting over four years, doesn’t give him any control, respect, or prestige. It’s a lottery ticket, not a vote. Second, startup equity is a high-risk asset, and VCs have a different risk profile from average people. An average person would rather have a guarantee of $2 million than a 50% chance of earning $5 million, even though the expected value of the latter offer is higher. VCs, in general, wouldn’t, because they’re diversified enough to take the higher-expectancy, riskier choices. Third, the engineer has no protection against dilution, and will be on the losing side of any preference structure that the investors have set up (and startups rarely volunteer information pertaining to what preferences exist against common stock, which is what the engineers will have). Fourth, venture capitalists who invest in highly successful businesses get prestige and huge returns on investment, whereas mere employees might get a moderate-sized windfall, but little prestige unless they achieved an executive position. Otherwise, they just worked there.
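To illustrate the second reason (risk profiles) with a toy example: an undiversified person behaves in a risk-averse way, which a concave utility function captures. The dollar amounts below are the hypothetical $2 million vs. 50%-of-$5-million choice from above, and the square-root utility is chosen purely for illustration:

```python
from math import sqrt

# Toy illustration of the risk-profile point (the second reason above).
# A square-root utility function stands in for an undiversified person's
# risk aversion; the dollar amounts are the hypothetical ones from the text.
sure_thing = 2_000_000
gamble_payoff = 5_000_000
p_win = 0.5

expected_value_of_gamble = p_win * gamble_payoff          # $2.5M > $2M
utility_of_sure_thing = sqrt(sure_thing)                  # ~1414
expected_utility_of_gamble = p_win * sqrt(gamble_payoff)  # ~1118

print(expected_value_of_gamble > sure_thing)              # True: the gamble has higher expected value
print(expected_utility_of_gamble > utility_of_sure_thing) # False: a risk-averse person still prefers the sure $2M
```

A diversified investor is closer to risk-neutral over many such bets, so the same risky asset is simply worth more in a VC’s hands than in an employee’s.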

In truth, startup employees should value equity and options at about one-fourth the valuation that VCs will give them. If they’re giving up $25,000 per year in salary, they should only do so in exchange for $100,000 per year (at current valuation) in equity. Out of a $40-million company with a four-year vesting cycle, that means they should ask for 1% ($400,000 at valuation over the four years).
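Here’s that rule of thumb as a small sketch. The discount factor of four, the four-year vesting period, and the dollar figures are the assumptions from the text above, not universal constants:

```python
def fair_equity_ask_pct(salary_sacrifice_per_year, company_valuation,
                        vesting_years=4, risk_discount=4.0):
    """Rule of thumb from the text: discount at-valuation equity by ~4x,
    then ask for enough that the discounted value covers the salary
    being given up over the vesting period."""
    at_valuation_needed_per_year = salary_sacrifice_per_year * risk_discount
    total_at_valuation_needed = at_valuation_needed_per_year * vesting_years
    return 100.0 * total_at_valuation_needed / company_valuation

# The example from the text: $25,000/year below market, $40M valuation.
print(fair_equity_ask_pct(25_000, 40_000_000))  # -> 1.0 (percent)
```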

3. If you join a startup early, you’re a shoo-in for executive positions. Nope.

Points #1-2 aren’t going to surprise many people. Most software engineers know enough math to know that they won’t get filthy rich on their equity grants, but join startups under the belief that coming into the company early will guarantee a VP-level position at the company (at which point compensation will improve) once it’s big. Not so. In fact, one of the best ways not to get a leadership position in a startup is to be there early.

Startups often involve, for engineers, very long hours, rapidly changing requirements, and tight deadlines, which means the quality of the code they write is generally very poor in comparison to what they’d be able to produce in saner conditions. It’s not that they’re bad at their jobs, but that it’s almost impossible to produce quality software under those kinds of deadlines. So code rots quickly in a typical startup environment, especially if requirements and deadlines are being set by a non-technical manager. Three years and 50 employees later, what they’ve built is now a horrific, ad-hoc, legacy system hacked by at least ten people and built under intense deadline pressure, and even the original architects don’t understand it. It may have been a heroic effort to build such a powerful system in so little time, but from an outside perspective, it becomes an embarrassment. It doesn’t make the case for a high-level position.

Those engineers should, by rights, get credit and respect for having built the system in the first place. For all its flaws, if the system works, then the company owes no small part of its success to them. Sadly, though, the “What have you done for me lately?” impulse is strong, and these engineers are typically associated with how their namesake projects end (as deadline-built legacy monstrosities) rather than what it took to produce them.

Moreover, the truth about most VC-funded startups is that they aren’t technically deep, so it seems to most people that it’s marketing rather than technical strength that determines which companies get off the ground and which don’t. The result of this is that the engineer’s job isn’t to build great infrastructure that will last 10 years… because if the company fails on the marketing front, there will be no “in 10 years”. The engineer’s job is to crank out features quickly, and keep the house of cards from falling down long enough to make the next milestone. If this means that he loads up on “technical debt”, that’s what he does.

If the company succeeds, it’s the marketers, executives, and biz-dev people who get most of the glory. The engineers? Well, they did their jobs, but they built that disliked legacy system that “just barely works” and “can’t scale”. Once the company is rich and the social-climbing mentality (of always wanting “better” people) sets in, the programmers will be replaced with more experienced engineers brought in to “scale our infrastructure”. Those new hires will do a better job, not because they’re superior, but because the requirements are better defined and they aren’t working under tight deadline pressure. When they take what the old-timers did and do it properly, with the benefit of learning from history, it looks like they’re simply superior, and managerial blessing shifts to “the new crowd”. The old engineers probably won’t be fired, but they’ll be sidelined, and more and more people will be hired above them.

Furthermore, startups are always short on cash and they rarely have the money to pay for the people they really want, so when they’re negotiating with these people in trying to hire them, they usually offer leadership roles instead. When they go into the scaling phase, they’re typically offering $100,000 to $150,000 per year for an engineer– but trying to hire people who would earn $150,000 to $200,000 at Google or on Wall Street. In order to make their deals palatable, they offer leadership roles, important titles and “freedom from legacy” (which means the political pull to scorched-earth existing infrastructure if they dislike it or it gets in their way) to make up for the difference. If new hires are being offered leadership positions, this leaves few for the old-timers. The end result of this is that the leadership positions that early engineers expect to receive are actually going to be offered away to future hires.

Frankly put, being a J.A.P. (“Just A Programmer”) in a startup is usually a shitty deal. Unless the company makes unusual cultural efforts to respect engineering talent (as Google and Facebook have) it will devolve into the sort of place where people doing hard things (i.e. software engineers) get the blame and the people who are good at marketing themselves advance.

4. In startups, there’s no boss. This one’s patently absurd, but often repeated. Those who champion startups often say that one who goes and “works for a company” ends up slaving away for “a boss” or “working for The Man”, whereas startups are a path to autonomy and financial freedom.

The truth is that almost everyone has a boss, even in startups. CEOs have the board, the VPs and C*Os have the CEO, and the rest have actual, you know, managers. That’s not always a bad thing. A competent manager can do a lot for a person’s career that he wouldn’t realistically be able to do on his own. Still, the idea that joining a startup means not having a boss is just nonsense.

Actually, I think founders often have the worst kind of “boss” in venture capitalists. To explain this, it’s important to note that the U.S. actually has a fairly low “power distance” in professional workplaces– this is not true in all cultures– by which I mean bosses aren’t typically treated as intrinsic social superiors to their direct reports. Yes, they have more power and higher salaries, but they’re also older and typically have been there for longer. A boss who openly treats his reports with contempt, as if he were innately superior, isn’t going to last for very long. Also, difficult bosses can be escaped: take another job. And the most adverse thing they can (legally) do is fire someone, which has the same effect. Beyond that, bosses can’t legally have a long-term negative effect on someone’s career.

With VCs, the power distance is much greater and the sense of social superiority is much stronger. For example, when a company receives funding it is expected to pay both parties’ legal fees. This is only a minor expenditure in most cases, but it exists to send a strong social message: you’re not our kind, dear, and this is what you’ll deal with in order to have the privilege of speaking with us at all. 

This is made worse by the incestuous nature of venture capital, which leads to the worst case of groupthink ever observed in a supposedly progressive, intelligent community. VCs like a startup if other VCs like it. The most well-regarded VCs all know each other, they all talk to each other, and rather than competing for the best deals, they collude. This leaves the venture capitalists holding all the cards. A person who turns down a term sheet with multiple liquidation preferences and participating preferred (disgusting terms that I won’t get into because they border on violence, and I’d prefer this post to be work-safe) is unlikely to get another one.

A manager who presents a prospective employee with a lowball offer and says, “If you don’t take this, I’ll make a phone call and no one in the industry will hire you” is breaking the law. That’s extortion. In venture capital? They don’t have to say this. It’s unspoken that if you turn down a terrible term sheet with a 5x liquidation preference, you’re taking a serious risk that a phone call will be made and that supposedly unrelated interest will dry up as well. That’s why VCs can get away with multiple liquidation preferences and participating preferred.

People who really don’t want to have “a boss” should not be looking into VC-funded startups. There are great, ethical venture capitalists who wouldn’t go within a million miles of the extortive shenanigans I’ve described above. It’s probably true that most are. Even still, the power relationship between a founder and investor is far more lopsided than that between a typical employee and manager. No manager can legally disrupt an employee’s career outside of one firm; but venture capitalists can (and sometimes do) block people from being fundable.

Instead, those who really want not to have a boss should be thinking about smaller “lifestyle” businesses in which they’ll maintain a controlling interest. VC has absolutely no interest in funding these sorts of companies, so this is going to require angel investment or personal savings, but for those who really want that autonomy, I think this is the best way to go.

For all this, what I’ve said here about the relationship between founders and VCs isn’t applicable to typical engineers. An engineer joining a startup of larger than about 20 people will have a manager, in practice if not in name. That’s not a bad thing. It’s no worse or better than it would be in any other company. It does make the “no boss” vs. “working for The Man” selling point of startups a bit absurd, though.

5. Engineers at startups will be “changing the world”. With some exceptions, startups are generally not vehicles for world-changing visions. Startups need to think about earning revenue within the existing world, not “changing humanity as we know it”.

“The vision thing” is an aspect of the pitch that is used to convince 22-year-old engineers to work for 65 percent of what they’d earn at a more established company, plus some laughable token equity offering. It’s not real.

The problem with changing the world is that the world doesn’t really want to change, and to the extent that it’s willing to do so, few people have the resources necessary to push improvements through. What fundamental change does occur is usually gradual– not revolutionary– and requires too much cooperation to be forced through by a single agent.

Scientific research changes the world. Large-scale infrastructure projects change the world. Most businesses, on the other hand, are incremental projects, and there’s nothing wrong with that. Startups are not a good vehicle for “changing the world”. What they are excellent at is finding ways to profit from inexorable, pre-existing trends by doing things that (a) have recently become possible, but that (b) no one had thought of doing (or been able to do) before. By doing so, they often improve the world incrementally: they wouldn’t survive if they didn’t provide value to someone. In other words, most of them are application-level concepts that fill out an existing world-changing trend (like the Internet) but not primary drivers. That’s fine, but people should understand that their chances of individually effecting global change, even at a startup, are very small.

6. If you work at a startup, you can be a founder next time around. What I’ve said so far is that it’s usually a shitty deal to be an employee at a startup: you’re taking high risk and low compensation for a job that (probably) won’t make you rich, lead to an executive position, bring great autonomy, or change the world. So what about being a founder? It’s a much better deal. Founders can get rich, and they will make important connections that will set up their careers. So why aren’t more people becoming founders of VC-funded startups? Well, they can’t. Venture capital acceptance rates are well below 1 percent.

The deferred dream is probably the oldest pitch in the book, so this one deserves to be addressed directly. A common pitch delivered to prospective employees in VC-istan is that “this position will set you up to be a founder (or executive) at your next startup”. Frankly, that’s just not true. The only thing that a job can offer that will set a person up with the access necessary to be a founder in the future is investor contact, and a software engineer who insists on investor contact when joining an already-funded startup is going to be laughed out the door as a “prima donna”.

A non-executive position without investor contact at a startup provides no more of the access that a founder will need than any other office job. People who really want to become startup founders are better off working in finance (with an aim at venture capital) or pursuing MBA programs than taking subordinate positions at startups.

7. You’ll learn more in a startup. This last one can be true; I disagree with the contention that it’s always true. Companies tend to regress to the mean as they get bigger, so the outliers on both sides are startups. And there are things that can be learned in the best small companies when they are small that can’t be learned anywhere else. In other words, there are learning opportunities that are very hard to come by outside of a startup.

What’s wrong here is the idea that startup jobs are inherently more educational simply because they exist at startups. There’s genuinely interesting work going on at startups, but there’s also a hell of a lot of grunt work, just like anywhere else. On the whole, I think startups invest less in career development than more established companies. Established companies have had great people leave after 5 years, so they’ve had more than enough time to “get it” on the matter of their best people wanting more challenges. Startups are generally too busy fighting fires, marketing themselves, and expanding to have time to worry about whether their employees are learning.

So… where to go from here?

I am not trying to impart the message that people should not work for startups. Some startups are great companies. Some pay well and offer career advancement opportunities that are unparalleled. Some have really great ideas and, if they can execute, actually will make early employees rich or change the world. People should take jobs at startups, if they’re getting good deals.

Experience has led me to conclude that there isn’t much of a difference in mean quality between large and small companies, but there is a lot more variation in the small ones, for rather obvious reasons. The best and worst companies tend to be startups. The worst ones don’t usually live long enough to become big companies, so there’s a survivorship bias that leads us to think of startups as innately superior. It’s not the case.

As I said, the worst tyrant in a marketplace is a peer selling himself short. Those who take terrible deals aren’t just doing a disservice to themselves, but to all the rest of us as well. The reason young engineers are being offered subordinate J.A.P. jobs with 0.03% equity and poorly-defined career tracks is that there are others who are unwise enough to take them.

In 2012, is there “a bubble” in internet startups? Yes and no. In terms of valuations, I don’t think there’s a bubble. Or, at least, it’s not obvious to me that one exists. I think it’s rare that a person who’s relatively uninformed (such as myself, when it comes to pricing technology companies) can outguess a market, and I see no evidence that the valuations assigned to these companies are unreasonable. Where there is undeniably a bubble is in the extremely high value that young talent is ascribing to subordinate positions at mediocre startups.

So what is a fair deal, and how does a person get one? I’ll give some very basic guidelines.

1. If you’re taking substantial financial risk to work at the company, you’re a Founder. Expect to be treated like one. By “substantial financial risk”, I mean earning less than (a) the baseline cost-of-living in one’s area or (b) 75% of one’s market compensation. (A quick version of this check is sketched below, after this item.)

If you’re taking that kind of risk, you’re an investor and you’d better be seen as a partner. It means you should demand the autonomy and respect given to a founder. It means you shouldn’t take the job unless there’s investor contact. It means you have a right to know the entire capitalization structure (an inappropriate question for an employee, but a reasonable one for a founder) and determine if it’s fair, in the context of a four-year vesting period. (If the first technical hire gets 1% for doing all the work and the CEO gets 99% because he has the connections, that’s not fair. If the first technical hire gets 1% while the CEO gets 5% and the other 94% has been set aside for employees and investors, and the CEO has been going without salary for a year already, well, that’s much more fair.) It means you should have the right to represent yourself to the public as a Founder.
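Here is that check as a minimal sketch; the offer, market rate, and cost-of-living figures below are made up purely for illustration:

```python
def taking_founder_level_risk(offered_salary, market_salary, local_cost_of_living):
    """Per the guideline above: substantial financial risk means being paid
    below the local cost of living, or below 75% of market compensation."""
    return (offered_salary < local_cost_of_living
            or offered_salary < 0.75 * market_salary)

# Hypothetical figures, purely for illustration.
print(taking_founder_level_risk(90_000, 150_000, 70_000))   # True: 90k < 0.75 * 150k
print(taking_founder_level_risk(130_000, 150_000, 70_000))  # False
```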

2. If you have at least 5 years of programming experience and the company isn’t thoroughly “de-risked”, get a VP-level title. An early technical hire is going to be spending most of his time programming– not managing or sitting in meetings or talking with the press as an “executive” would. Most of us (myself included) would consider that arrangement, of getting to program full-time at high productivity, quite desirable. This might make it seem like “official” job titles (except for CEO) don’t matter and that they aren’t worth negotiating for. Wrong.

Titles don’t mean much when there are 4 people at the company. Not in the least. So get that VP-level title locked in now, before it becomes valuable and much harder to get. Once there are more than about 25 people, titles start to have real value, and for a programmer to ask for a VP title might seem like an unreasonable demand.

People may claim that titles are old-fashioned and useless and elitist, and they often have strong points behind their claims. Still, people in organizations place a high value on institutional consistency (meaning that there’s additional cognitive load for them to contradict the company’s “official” statements, through titles, about the status of its people) and the high status, however superficial and meaningless, conferred by an impressive title can easily become self-perpetuating. As the company becomes larger and more opaque, the benefit conferred by the title increases.

Another benefit of having a VP-level title is the implicit value inherent in being VP of something. It means that one will be interpreted as representing some critical component of the company. It also makes it embarrassing to the top executives and the company if this person isn’t well treated. As an example, let’s take “VP of Culture”. Doesn’t it sound like a total bullshit title? In a small company, it probably is. So meaningless, in fact, that most CEOs would be happy to give it away. “You want to be ‘VP of Culture’, but you’ll be doing the same work for the same salary? By all means.” Yet what does it mean if a CEO berates the VP of Culture? That culture isn’t very important at this company. What about if the VP of Culture is pressured to resign or fired? From a public view, the company just “lost” its VP of Culture. That’s far more indicative than if a “J.A.P.” engineer leaves.

More relevantly, a VP title puts an implicit limit on the number of people who can be hired above a person, because most companies don’t want the image of having 50 of their 70 people being “VP” or “SVP”. It dilutes the title, and makes the company look bloated (except in finance, where “VP” is understood to represent a middling and usually non-executive level). If you’re a J.A.P., the company is free to hire scads of people above you. If you’re a VP, anyone hired above you has to be at least a VP, if not an SVP, and companies tend to be conservative with those titles once they start to actually matter.

The short story to this is that, yes, titles are important and you should get one if the company’s young and not yet de-risked. People will say that titles don’t mean anything, and that “leadership is action, not position”, and there’s some truth in that, but you want the title nonetheless. Get it early when it doesn’t matter, because someday it will. And if you’re a competent mid-career (5+ years) software engineer and the company’s still establishing itself, then having some VP-level title is a perfectly reasonable term to negotiate.

3. Value your equity or options at one-fourth of the at-valuation level. This has been discussed above. Because this very risky asset is worth much more to diversified, rich investors than it is to an employee, it should be discounted by a factor of 3-4. This means that it’s only worth it to take a job at $25,000 below market in exchange for $100,000 per year in equity or options (at valuation).

Also worth keeping in mind is that raises and bonuses are uncommon in startups, and that working at a startup can have an effect on one’s salary trajectory. Realistically, a person should assess a startup offer in the light of what he expects to earn over the next 3 to 5 years, not what he can command now.
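One rough way to do that assessment, with made-up numbers (a flat startup salary versus a market salary with modest annual raises; neither figure comes from the text):

```python
# Illustrative comparison only; the salaries and raise rate are made up.
def cumulative_pay(starting_salary, annual_raise, years):
    return sum(starting_salary * (1 + annual_raise) ** y for y in range(years))

years = 4
startup_total = cumulative_pay(115_000, 0.00, years)  # flat salary, raises being rare
market_total = cumulative_pay(140_000, 0.05, years)   # market salary with modest raises

print(f"Startup over {years} years: ${startup_total:,.0f}")
print(f"Market over {years} years:  ${market_total:,.0f}")
print(f"Gap the equity has to cover: ${market_total - startup_total:,.0f}")
```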

4. If there’s deferred cash involved, get terms nailed down. This one doesn’t apply to most startups, because paying deferred cash is an uncommon arrangement once a company is funded. Usually, established startups pay a mix of salary and equity.

If deferred cash is involved in the package, it’s important to get a precise agreement on when this payment becomes due. Deferred cash is, in truth, zero-interest debt of the company to the employee. Left to its own devices, no rationally acting company would ever repay a zero-interest loan. So this is important to get figured out. What events make deferred cash due? (Startups never have “enough” money, so “when we have enough” is not valid.) What percentage of a VC-funding round is dedicated to pay off this debt? What about customer revenue? It’s important to get a real contract to figure this out; otherwise, the deferred payment is just a promise, and sadly those aren’t always worth much.

The most important matter to address when it comes to deferred cash is termination, because being owed money by a company one has left (or been fired from) is a mess. No one ever expects to be fired, but good people get fired all the time. In fact, there’s more risk of this in a small company, where transfers tend to be impossible on account of the firm’s small size, and where politics and personality cults can be a lot more unruly than they are in established companies.

Moreover, severance payments are extremely uncommon in startups. Startups don’t fear termination lawsuits, because those take years and startups assume they will either be (a) dead, or (b) very rich by the time any such suit would end– and either way, it doesn’t much matter to them. Being fired in established companies usually involves a notice (“improvement plan”) period (in which anyone intelligent will line up another job) or severance, or both, because established companies really don’t want to deal with termination lawsuits. In startups, people who are fired usually get neither notice nor severance.

People tend to think that the risk of startups is limited to the threat of them going out of business, but the truth is that they also tend to fire a lot more people, and often with less justification for doing so. This isn’t always a bad thing (firing too few people can be just as corrosive as firing too many) but it is a risk people need to be aware of.

I wouldn’t suggest asking for a contractual severance arrangement in negotiation with a startup; that request will almost certainly be denied (and might be taken as cause to rescind the offer). However, if there’s deferred cash involved, I would ask for a contractual agreement that it becomes due immediately in the event of involuntary termination. Day-of, full amount, with the last paycheck.

5. Until the company’s well established (e.g. IPO) don’t accept a “cliff” without a deferred-cash arrangement in the event of involuntary termination. The “cliff” is a standard arrangement in VC-funded startups whereby no vesting occurs if the employee leaves or is fired in the first year. The problem with the cliff is that it creates a perverse incentive for the company to fire people before they can collect any equity.

Address the cliff as follows: if the employee is involuntarily terminated and the cliff is enforced, whatever equity would have vested is converted (at the most recent valuation) to cash and becomes due on the date of termination.
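To make the conversion concrete, here’s a small sketch with hypothetical numbers (a 0.1% grant vesting monthly over four years, termination at month ten, and a $40-million last valuation; none of these numbers are prescriptive):

```python
# Hypothetical numbers: 0.1% grant vesting monthly over four years,
# involuntary termination at month 10 (inside the cliff), $40M last valuation.
grant_fraction = 0.001       # 0.1% of the company
vesting_months = 48
months_served = 10
last_valuation = 40_000_000

vested_fraction = months_served / vesting_months
cash_due_at_termination = grant_fraction * vested_fraction * last_valuation

print(f"Cash due at termination: ${cash_due_at_termination:,.0f}")  # ~ $8,333
```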

This is a non-conventional term, and many startups will flat-out refuse it. Fine. Don’t work for them. This is important; the last thing you want is for the company to have an incentive to fire you because of a badly-structured compensation package.

6. Keep moving your career forward. Just being “at a startup” is not enough. The most credible appeal of working at a startup is the opportunity to learn a lot, and while one can, it’s not a guarantee. Startups tend to be more “self-serve” in terms of career development. People who go out of their way to explore and use new technologies and approaches to problems will learn a lot. People who let themselves get stuck with the bulk of the junior-level grunt work won’t.

I think it’s useful to explicitly negotiate project allocation after the first year– once the “cliff” period is over. Raises being rare at startups, the gap between an employee’s market value and actual compensation only grows as time goes by. When a request for a raise is denied, that’s a good time to bring up the fact that you really would like to be working on that neat machine learning project, or that you’re really interested in trying out a new approach to a problem the company faces.

7. If blocked on the above, then leave. The above are reasonable demands, but they’re going to meet some refusal because there’s no shortage of young talent that is right now willing to take very unreasonable terms for the chance to work “at a startup”. So expect some percentage of these negotiations to end in denial, even to the point of rescinded job offers. For example, some startup CEOs will balk at the idea that a “mere” programmer, even if he’s the first technical hire, wants investor contact. Well, that’s a sign that he sees you as a “J.A.P.” Run, don’t walk, away from him.

People tend to find negotiation to be unpleasant or even dishonorable, but everyone in business negotiates. It’s important. Negotiations are indicative, because in business politeness means little, and so only when you are negotiating with someone do you have a firm sense of how he really sees you. The CEO may pay you a million compliments and make a thousand promises about your bright future in the company, but if he’s not willing to negotiate a good deal, then he really doesn’t see you as amounting to much. So leave, instead of spending a year or two in a go-nowhere startup job.

In the light of this post’s alarmingly high word count, I think I’ll call it here. If the number of special cases and exceptions indicates a lack of a clear message, it’s because there are some startup jobs worth taking, and the last thing I want to do is categorically state that they’re all a waste of time. Don’t get me wrong, because I think most of VC-istan (especially in the so-called “social media” space) is a pointless waste of talent and energy, but there are gems out there waiting to be discovered. Probably. And if no one worked at startups, no one could found startups and there’d be no new companies, and that would suck for everyone. I guess the real message is: take good offers and work good jobs (which seems obvious to the point of uselessness) and the difficulty (as observed in the obscene length of this post) is in determining what’s “good”. That is what I, with my experience and observations, have attempted to do.