The rent really is too damn high, and the surprising cause.

A thought experiment.

It’s Monday morning, and on the drive to work, you realize that you’re low on fuel. At the gas station, you discover that gasoline (petrol) now costs $40 per gallon ($10.56 per liter). That seems bizarre and expensive, so you don’t buy any. You pass that station and find another one, only to see the same high prices. Curious about what’s happening and where you might get a better deal, you call people in other cities and find out that the price of gasoline has gone up enormously overnight, but no one seems to know why. Gas below $35 per gallon cannot be found. Assuming that we’re not in the midst of hyperinflation, which of the following three possibilities seems most likely?

  1. Income hypothesis: people suddenly have a lot more money. This isn’t inflation or a price hike, but a result of an increase in genuine wealth. The economy has grown several thousand percent overnight.
  2. Value hypothesis: the real value of gasoline has improved, presumably due to a new technology that can extract substantially more genuine utility out of the fuel. Perhaps 300-mile-per-gallon flying cars have just come online and people are now taking international trips.
  3. Calamity hypothesis: something bad has happened that has either constricted the supply of gasoline or created a desperate need to consume it, increasing demand. The sudden price hike is a consequence of this.

Most people would choose the third option, and they’d probably be right. Note that, from a broad-based aggregate perspective, the first two options represent “good” possibilities– price increases resulting from desirable circumstances– while the third of these is genuinely bad for society. I’ll get back to that in a second.

For gasoline, people understand this. From a stereotypical American perspective, high gasoline prices are “bad” and represent “the bad guys” (greedy oil CEOs, OPEC) winning. Even moderates like me who believe gas prices should be raised (through taxation) to account for externalized environmental costs are depicted as “radical environmentalists”. People tend to emotionalize and moralize prices: stock prices are “the good guys”– those “short sellers”, in this mythology, are just really bad people– gas prices are “the bad guys”, a strong dollar is God’s will made economics, and no one seems to care too strongly about the other stuff (bonds, metals, interest rates).

I don’t think that any of these exchange-traded commodities deserve a “good guys”/“bad guys” attitude, because investors are free to take positions as they wish. If they believe that oil is “too expensive”, they can buy futures. If they think a publicly traded company is making “too much profit”, they can buy shares. (The evil of corporate America isn’t that companies make too much profit. It’s that the profits they do make often involve externalizing costs, and that they funnel money that should be profit, or put into higher wages, to well-connected non-producing parasites called “executives”.)

There is one price dynamic that has a strong moral component (in terms of its severe effect on people’s lives and the environment), and it’s also one where most Americans get “the good guys” wrong. It’s the price of housing. Housing prices represent “the good guys” in the popular econo-mythology, because Americans reflexively associate homeownership with future-orientation and virtue, but I disagree with this stance. People mistakenly believe that the average American homeowner is long on housing (i.e. he benefits if housing prices go up). Wrong. The resale value of his house is improving, but so is the cost of the housing (rental or another purchase) that he would have to pay if he ever sold it. A person’s real position on housing is how much housing the person owns, minus the amount that person must consume. In reality, renters are (often involuntarily) in a short position on housing, while resident homeowners are flat (i.e. neutral), and become short if they need to buy more housing (e.g. after having children) in the future. Even most single-home owners would benefit more from low housing prices and rents than from high ones, because high prices mean higher property taxes and transaction costs. The few who benefit when real estate becomes more expensive are those who least need the help. In sum, the world is net short on real estate prices and rents. If nothing else changes, it’s bad when rents and house prices go up.
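To make the “net position” arithmetic concrete, here’s a minimal sketch in Python (the household profiles are hypothetical, invented purely for illustration):

```python
# Net housing position = units owned minus units the household must consume.
# Negative = short (hurt by rising prices); zero = flat; positive = long.

def net_position(units_owned: int, units_consumed: int) -> int:
    return units_owned - units_consumed

# Hypothetical household profiles, for illustration only.
households = {
    "renter": (0, 1),                        # owns nothing, must rent one unit
    "resident homeowner": (1, 1),            # owns exactly what he consumes
    "family about to upsize": (1, 2),        # will need a bigger home: short
    "landlord with three rentals": (4, 1),   # the rare genuinely long party
}

for name, (owned, consumed) in households.items():
    pos = net_position(owned, consumed)
    stance = "short" if pos < 0 else ("flat" if pos == 0 else "long")
    print(f"{name}: {pos:+d} unit(s), {stance}")
```

Only the landlord is genuinely long; everyone else is flat or short, which is the sense in which the world is net short on housing.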

If houses become more expensive because of corresponding increases in value, or because incomes increase, the increased cost of housing is an undesirable side effect of an otherwise good thing, and the net change for society is probably positive. These correspond to the first two hypotheses (income and value) that I proposed in the gas-price scenario. I think that these hypotheses explain much of why Americans tend to assume, reflexively, that rising house prices in a given location are a good thing: the assumption is that the location actually is improving, or that the economy is strong (it must be, if people can afford those prices). Rarely is the argument made that the root cause might be undesirable in nature.

An unusual position.

Here’s where I depart from convention. Most people who are well-informed know that high housing costs are an objectively bad thing: the rent really is too damn high. Few would argue, however, that the high rents in Manhattan and Silicon Valley are caused by something intrinsically undesirable, and I will. Most attribute the high prices to the desirability of the locations, or to the high incomes there. Those play a role, but not much of one– not enough to justify costs that are 3 to 10 times baseline. (The median income in Manhattan is not 10 times the national average.) I think that high housing costs, relative to income, almost invariably stem from a bad cause and, at the end of this exposition, I’ll name it. Before that, let’s focus on the recent housing problems observed in New York and Silicon Valley, where rents can easily consume a quarter to a half of a working professional’s income, and buying a family-ready apartment or house is (except for the very wealthy) outright unaffordable. Incomes for working people in these regions are high, but not nearly high enough to justify the prices. Additionally, Manhattan housing costs are rising even in spite of the slow demise of Wall Street bonuses. So the income hypothesis is out. I doubt the value hypothesis as well, because although Manhattan and San Francisco are desirable places to live and will always command a premium for that, “desirability” factors (other than proximity to income) just don’t change with enough speed or magnitude to justify what happens on the real estate market. For example, the claim that California’s high real estate prices are caused by a “weather premium” is absurd; California had the same weather 20 years ago, when prices were reasonable. Desirability factors are invented to justify price moves observed on the market (“people must really want to live here because <X>”) but are very rarely the actual movers, because objectively desirable places to live (ignoring the difficulty of securing proximity to income, which is what keeps urban real estate expensive) are not uncommon. So what is causing the rents, and the prices, to rise?

Price inelasticity.

Consider the gasoline example. Is $40-per-gallon gasoline plausible? Yes. It’s unlikely, but under the right conditions it could happen. Gasoline is an inelastic good, which means that small changes in the available supply will cause large movements in price. Most Americans, if asked what the price effect of a 2-percent drop in gasoline (or petroleum) supply would be, would expect a commensurate (2%) increase– possibly 5 or 10 percent on account of “price gouging”. That’s wrong. The price could double. The inelasticity associated with petroleum and related products was observed directly in the 1970s “oil shocks”.
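To see how a 2-percent supply shock could double the price, here’s a minimal sketch assuming a constant-elasticity demand curve (the elasticity values below are illustrative assumptions, not published estimates):

```python
# Constant-elasticity demand: Q = A * P**eps, so a shock that forces the
# quantity consumed from Q0 to Q1 clears the market at
#     P1 = P0 * (Q1 / Q0) ** (1 / eps).

def market_clearing_price(p0: float, supply_change: float, eps: float) -> float:
    return p0 * (1 + supply_change) ** (1 / eps)

p0 = 4.00       # baseline price per gallon (hypothetical)
shock = -0.02   # a 2-percent drop in available supply
for eps in (-0.5, -0.1, -0.03):  # assumed short-run demand elasticities
    p1 = market_clearing_price(p0, shock, eps)
    print(f"elasticity {eps:>5}: ${p1:.2f} per gallon")
# -> $4.16, $4.90, and $7.84: the more inelastic the demand, the closer a
#    mere 2-percent shortfall comes to doubling the price.
```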

Necessities (such as medicines, addictive drugs, housing, fuel, and food) exhibit this inelasticity. What price inelasticity means is that reducing (or, in the case of real estate, damaging) the quantity supplied increases the aggregate market value of the total stock, usually at the expense of society. For a model example, consider a city with 1 million housing units for which the market value of living there (as reflected in the rent) is $1,000 per month. If we value each unit at 150 times monthly rent, each unit is worth $150,000 and the total real estate value is $150 billion. Now, let’s assume that 50,000 units of housing (5 percent) are destroyed in a catastrophe (crime epidemic, rapid urban decay, a natural disaster). What happens? A few people will leave the city, but most won’t, because their jobs will still be there. Many will stay, and will be on the market for a smaller supply of housing. Rents will increase, and given the inelasticity of housing, a likely number is $2,000 per month. Now each housing unit is worth $300,000, and with 950,000 of them, the total real estate value of the city is $285 billion. Did the city magically become a better place to live? Far from it. It became a worse place to live, but real estate became more expensive.
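A few lines of arithmetic make the perverse outcome explicit, using the numbers from the example above:

```python
# Model city before and after 5 percent of the housing stock is destroyed.
PRICE_TO_MONTHLY_RENT = 150  # the valuation rule used in the example

def total_value(units: int, monthly_rent: float) -> float:
    return units * monthly_rent * PRICE_TO_MONTHLY_RENT

before = total_value(1_000_000, 1_000)  # $150 billion
# 50,000 units destroyed; inelastic demand roughly doubles the rent.
after = total_value(950_000, 2_000)     # $285 billion

print(f"before: ${before / 1e9:.0f}B, after: ${after / 1e9:.0f}B")
# The city is strictly worse off, yet its "market value" rose by $135 billion.
```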

Inelasticity and tight supply explain the “how” of sudden price run-ups. As I’ve suggested, there are three possible explanations for increases in urban real estate values in locations such as New York and Silicon Valley:

1. Income hypothesis: prices are increasing because incomes have improved. Analysis: not sufficient. Over the past 20 years, rents and prices in the leading cities have gone up much faster than income has increased.

2. Value hypothesis: prices are increasing because the value of the real estate (independent of local income changes, discussed above) has gone up. Analysis: very unlikely. Value hypotheses can explain very local variations (new amenities, changes in views, transportation infrastructure, school district changes) but they very rarely apply to large cities– except in the context of a city’s job market (which is a subcase of the income hypothesis). Value hypotheses do not explain the broad-based increase in San Francisco or New York real estate costs.

3. Calamity hypothesis: something bad is happening that is causing the scarcity and thereby driving up the price. Analysis: this is the only remaining hypothesis. So what is happening?

Examining Manhattan, there are several factors that constrict supply. There’s regulatory corruption pushed through by entrenched owners who have nothing better to do than to fight new development, and who therefore keep some of the most desirable neighborhoods capped at a ridiculous five-story height limit. Government irresponsibly allows foreign speculators into the market, when New York is long past needing a “six months and one day” law (making it illegal to own substantial property without using it for at least 50.1% of the year, thus preventing Third World despots, useless princes, and oil barons from driving up prices). Then there’s the seven-decade-old legacy “rent control” system that ties up about 1 percent of apartments (in the context of price inelasticity, a 1-percent supply compromise is a big deal) while allowing people for whom the program was not intended (most present-day rent-control beneficiaries are well-connected, upper-middle-class people, not the working-class New Yorkers the program was designed for) to remain locked into an arrangement strictly superior to ownership (1947 rents are a lot lower than 2012 maintenance fees). Finally, there are the parentally-funded douchebags working in downright silly unpaid internships while their parents drop $5,000 per month on their spoiled, underachieving kids’ “right” to the “New York experience”. All of these influences are socially negative and compromise supply, and thereby drive up the price of living in New York, but I don’t think any of them represents the main calamity.

My inelasticity example illustrated a case where a city can become a less desirable place to live (due to a catastrophe that destroys housing) but experience a massive increase in the aggregate market value of its real estate, and I’ve shown several examples of this dynamic in the context of New York real estate. Trust-fund hipster losers and alimony socialites who’ve gamed their way into rent control don’t make the city better, but they drive up rents. In these cases and in my model example, value was strictly destroyed, but prices rose and “market value” increased. History shows us that this is the norm with real estate problems. Catastrophes reduce real estate prices if they have income effects, but a catastrophe that leaves the local job market (i.e. the sources of income) intact and doesn’t inflict a broad-based value reduction will increase the market price of real estate, both for renters and buyers. One recent example of this is what Manhattan’s real estate industry knows as “The 9/11 Boom” of the 2000s. The short-term effect of the terrorist attack was to reduce real estate prices slightly, but its long-term effect was to increase them substantially over the following years. The intrinsic value of living in Manhattan decreased in the wake of the attack, but speculators (especially outside of New York and the U.S.) anticipated future “supply destruction” (read: more attacks) and bought in order to take advantage of such an occasion should it arise.

However, the post-9/11 real estate boom is over in New York, and wouldn’t affect Silicon Valley, Seattle, or Los Angeles at all. There has to be another cause of this thing. So what is it?

The real cause.

I’m afraid that the answer is both obvious and deeply depressing. Prices are skyrocketing in a handful of “star cities” because the rest of the country is dying. It’s becoming increasingly difficult, if not impossible, to maintain a reliable, middle-class income outside of a small handful of locations. I’ve heard people describe Michigan’s plight as “a one-state recession that has gone on for 30 years”. That “recession” has spilled beyond one region and is now the norm in the United States outside of the star cities.

As I’ve implied in previous posts, the 1930s Depression was caused by (among other things) a drop in agricultural commodity prices in the 1920s, which led to rural poverty. History tends to remember the 1920s as a prosperous boom decade, which it was in the cities. In rural America, it was a time of deprivation and misery. The poverty spread. This was a case of technological advancement (a good thing, but one that requires caution) in food production leading to price drops, poverty of increasing scope, and, eventually, a depression so severe that it became worldwide, encouraged opportunistic totalitarianism, and required government intervention in order to escape it.

What happened to food prices in the 1920s is starting to happen to almost all human labor. People whose labor can be commoditized are losing big, which means that capital (including the social and cultural varieties) is king. This shrinks the middle class, increases the importance of social connections, and enables certain varieties of extortion. One of these is rising tuition in educational programs, which can be charged on account of the connections those programs provide. Another is the high cost of housing in geographical regions where the dwindling supply of middle-class jobs is concentrated, and it’s the same dynamic: as the middle class gets smaller, anything that grants a chance at a connection to a “safe” person, company, or region becomes dramatically more expensive.

Incomes and job availability are getting better in the star cities (in contrast with the continuing recession experienced by the rest of the country), but the improvement is more than wiped out by the increasing costs of housing. The star cities are getting richer and, at the same time, more heavily taxed (largely through real estate markup). Outside of the star cities, real estate costs have stopped rising (and have fallen in many places), but the local economies have done worse, canceling out the benefits. Landlords have successfully run a “heads, I win; tails, you lose” racket against the rest of the world. When their locales do well, they capture the excess income that should by rights go to the workers. When their locales do poorly, they lose, but not as much as the working people (who lose their jobs, as opposed to mere resale value). This has always been the case; landowners have always had that power (and used it). What has changed is that the disparity between “star cities” and the rest of the country has become vast, and it seems to be accelerating.

Rents and property prices in New York and Silicon Valley aren’t going up because those locales are becoming more desirable. That’s not happening. Since high housing costs lead to environmental degradation, traffic problems, crime and cultural decline, it’s quite likely that the reverse is true. Rather, they’re going up because the rest of the country is getting worse– so much worse that a bunch of former-middle-class migrants have to crowd their way in, making enormous sacrifices in doing so, because good jobs anywhere else are becoming scarce. These migrations (caused by the collapse of desirability in other locations) increase demand for housing and allow rents and prices to rise. That is why the rent is too damn high.

The good news is that there is a visible (if not easy to implement) solution: revitalize locations outside of the star cities. Small-business formation without personal liability (requiring personal guarantees defeats the purpose of the limited-liability corporation– a controversial but necessary innovation) needs to be made viable outside of the star cities. Right now, growth companies are feasible only in a small set of locations, because starting the most innovative businesses requires a rich client (New York, Los Angeles) or a venture capitalist (Silicon Valley); but as newer and more progressive funding mechanisms (such as Kickstarter) emerge, that may change. I hope it does. This extreme geographic concentration of wealth and innovation is far from a desirable or even sensible arrangement, and it makes our rents too damn high. We shouldn’t be satisfied with one Silicon Valley, for example. We should strive to have ten of them.

The 3-ladder system of social class in the U.S.

Typical depictions of social class in the United States posit a linear, ordered hierarchy. I’ve actually come to the conclusion that there are 3 distinct ladders, with approximately four social classes on each. Additionally, there is an underclass of people not connected to any of the ladders, creating an unlucky 13th social class. I’ll attempt to explain how this three-ladder system works, what it means, and also why it is a source of conflict. I’ll call the ladders Labor, Gentry, and Elite. My percentage estimates for each category are rough, based only on what I’ve seen and my limited understanding of the macroeconomics of income in the United States, so don’t take them for more than an approximation. I’ll assess the social role of each of these classes in order, from bottom to top.

This is, one should note, an exposition of social class rather than income. Therefore, in many cases, precise income criteria cannot be defined, because there’s so much more involved. Class is more sociological in nature than wealth or income, and much harder to change. People can improve their incomes dramatically, but it’s rare for a person to move more than one or two rungs in a lifetime. Social class determines how a person is perceived, what access to information that person has, and what opportunities will be available.

Underclass (10%). The Underclass are not just poor; there are poor people on the Labor ladder, and a few (usually transiently or voluntarily) on the Gentry ladder. In fact, most poor Americans are not members of the Underclass. People in the Underclass are generationally poor. Some have never held jobs. Some are even third-generation jobless. Each of these ladders (Labor, Gentry, Elite) can be seen as an infrastructure based, in part, on social connections. There are some people who are not connected to any of these infrastructures, and they are the Underclass.

The Labor Ladder (65%). This represents “blue-collar” work and is often associated with “working class”, but some people on this ladder earn solidly “middle-class” incomes over $100,000 per year. What defines the Labor ladder is that the work is seen as a commodity, and that there’s rarely a focus on long-term career management. People are assessed based on how hard they work because, in this world, the way to become richer is to work more (not necessarily more efficiently or “smarter”). The Labor ladder is organized almost completely by income; the more you make (age-adjusted), the higher your position is, and the more likely it is that your work is respected.

Secondary Labor (L4, 30%) is what we call the “working poor”. These are people who earn 1 to 3 times the minimum wage and often have no health benefits. Many work two “part-time” jobs at 35 hours per week (so their firms don’t have to provide benefits) with irregular hours. They have few skills and no leverage, so they tend to end up in the worst jobs, and those jobs enervate them so much that it becomes impossible for them to get the skills that would help them advance. The name Secondary comes from the fact that they are trapped in the “secondary” labor market: jobs originally intended for teenagers and well-off retirees, never meant to pay a living wage. Wages for this category are usually quoted hourly and fall between $5 and $15 per hour.

Primary Labor (L3, 20%) is what we tend to associate with “blue-collar” America. If by “average” we mean median, this is the average social class of Americans, although most people would call it working class, not middle. It usually means having enough money to afford an occasional vacation and a couple restaurant meals per month. People in the L3 class aren’t worried about having food to eat, but they aren’t very comfortable either, and an ill-timed layoff can be catastrophic. If the market for their skills collapses, they can end up falling down a rung into L4. When you’re in the Labor category, market forces can propel you up or down, and the market value of “commodity” labor has been going down for a long time. Typical L3 compensation is $20,000 to $60,000 per year.

In the supposed “golden age” of the United States (the 1950s) a lot of people were earning L2 compensation for L3 work. In a time when well-paid but monotonous labor was not considered such a bad thing (to people coming off the Great Depression and World War II, stable but boring jobs were a godsend) this was seen as desirable, but we can’t go back to that, and most people wouldn’t want to. Most Millennials would be bored shitless by the jobs available in that era that our society occasionally mourns losing.

High-skill Labor (L2, 14%) entails having enough income and job security to be legitimately “middle class”. People in this range can attend college courses, travel internationally (but not very often), and send their children to good schools. Plumbers, airline pilots, and electricians are in this category, and some of these people make over $100,000 per year. For pay to stay at that level, there must be some barrier to entry into the line of work, or some force keeping pay high (such as unionization). Within the culture of the Labor ladder, these people are regarded highly.

Labor Leadership (L1, 1%) is the top of the Labor ladder, and it’s what blue-collar America tends to associate with success. (The reason they fail to hate “the 1%” is that they think of L1 small business owners, rather than blue-blooded parasites, as “rich people”.) These are people who, often through years of very hard work and by displaying leadership capability, have ascended to an upper-middle-class income. They aren’t usually “managers” (store managers are L2) but small business owners and landlords, though they’re often seen doing the grunt work of their businesses (such as running the register when all the cashiers call in sick). They can generate passive income from endeavors like restaurant franchises and earn a solidly upper-middle-class income, but culturally they are still part of Labor. This suits them well, because where they excel is in leading people in the Labor category.

The Gentry Ladder (23.5%). England had a landed gentry for a while. We have an educated one. Labor defines status based on the market value of one’s commodity work. The Gentry rebels against commoditization with a focus on qualities that might be, from an extensional perspective, irrelevant. They dislike conflict diamonds, like fair-trade coffee, and drive cultural trends. In the 1950s, they were all about suburbia. In 2012, they had the same enthusiasm for returning to the cities. They value themselves not based on their incomes but, much more so, on access to respected institutions: elite universities, leading technology companies, museums and artistic endeavors. Labor aspires to occupational success and organizational leadership, while the Gentry aspires to education and cultural leadership.

Before going further, it’s worth noting that the typical socioeconomic ordering would have each Gentry level two levels above the corresponding Labor level in social standing. Thus, G1 > G2 > (G3 ~= L1) > (G4 ~= L2) > L3 > L4.

Transitional Gentry (G4, 5%) is the lowest rung of the Gentry ladder. Typically, I think of community colleges when trying to explain G4. It’s the class of people who are coming into the Gentry, usually from L2, and most people in it are looking to attain G3 (and many do). Since the Gentry is defined by education, culture, and cultural influence, earning a four-year degree (which about 20% of American adults have) will usually put a person solidly into G3.

Mobility between G4 and L2 is common, and G4 is a “young people” class, because people who don’t keep abreast of politics, current events, and at least the “upper-middle-brow” culture of shows like Mad Men [0] tend to return to L2 (which is not an inferior class, but an approximately-equal one with different values). Those who keep up tend to progress to G3.

[0] A couple of people have emailed me to ask why I “knocked” Mad Men. That wasn’t my intention. It’s an excellent show. “Upper-middle-brow” is not a pan. I’m lower-middle-brow on a good day.

Primary Gentry (G3, 16%) is what Americans think of as the cultural “upper-middle class”. They have four-year college degrees and typically hold professional jobs of middling autonomy and above-average income, but usually not leadership positions. Incomes in this class vary widely (in part because the Gentry is not defined by income) but generally fall between $30,000 and $200,000 per year. People in this class tend to be seen as taste-setters by Labor but as gauche by the higher-ranking G1 and G2 classes.

High Gentry (G2, 2.45%) tend to come from elite colleges and traditionally gravitated toward “junior executive” roles in medium-sized companies, innovative startups, management consultancies, and possibly investment banking (which facilitates the G2-E4 transition). But G2’s wouldn’t be caught dead in jobs that seem perfectly fine to G3’s, which they view (often rightly) as dead ends. Having interesting, respected work is important to G2’s. To a G2, being a college professor, scientist, entrepreneur, or writer is a desirable job. Creative control of work is important to G2’s, although not all are able to get it (because creative jobs are so rare). David Brooks’s Bobos in Paradise captured the G2 culture of its time well. Members of this social class aggressively manage their careers to get the most (in terms of intellectual and financial reward) out of them, but what they really want is enough success and money to do what they really value, which is to influence culture.

G2 is my native social class, and probably that of most of my readers.

Cultural Influencers (G1, 0.05%) are the pinnacle of the Gentry. Jon Stewart is a classic example. He probably makes a “merely” upper-middle-class income working for the notoriously cheap Comedy Central, but he has the most well-regarded members of the intelligentsia on his show every night. For G1, I’m not talking about “celebrities”. Celebrities are a bizarre and tiny category that mixes all three ladders (I’d argue that they’re the upper tier of L1; most lack the power of Elites and the refinement of the Gentry). Rather, I’m talking about people who are widely recognized as smart, knowledgeable, creative, and above all, interesting. They tend also to have access to other interesting people. G1’s are not “famous” in the celebrity sense, and most of them aren’t that rich. I’d guess that their incomes vary mostly from $100,000 to $1 million per year, which is low for a social class that is so difficult to enter (much harder to get into than E4, and possibly E3).

It’s quite likely that G1 is expanding, and it was probably much smaller in the past. The internet is allowing more people to become well-known and have some degree of cultural influence. Many bloggers have entered G1 without relying on established institutions such as publishers or universities (which used to be the only way in). That said, G1 requires sustained attention; people having their 15 minutes of fame don’t count.

The Elite Ladder (1.5%). This is an infrastructure “at the top of society”, but many of the people it includes are in many ways nowhere near the top. People complain about “the 1 percent”, but the reality is that most of that income-based top 1% are nowhere near controlling positions within society.

Not all of the Elite are in the top 1% for income, but most will have the opportunity to be. The Elite includes everyone from billionaires to out-of-college investment banking analysts (who earn a middle-class income in one of the most expensive cities on the planet). What they have in common is that they are exerting themselves toward ownership. Labor provides the work and values effort and loyalty. The Gentry provides culture and values education and creativity. The Elite owns things and values control and establishment.

As with the Gentry and Labor, when comparing these ladders, one should consider an Elite rung to be two levels above the corresponding Gentry rung, so in terms of social standing, E1 > E2 > (E3 ~= G1) > (E4 ~= G2) > G3 > G4.
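Because the same two-rung offset appears in both orderings, the whole system collapses into a single scoring rule. Here’s a toy sketch (the numeric scores are nothing but a bookkeeping device for combining the two orderings above):

```python
# Unified social standing: lower score = higher standing. Each rung k of a
# ladder scores k plus a per-ladder offset chosen so that E3 ~ G1 and
# G3 ~ L1, the two-rung equivalences described above.
OFFSET = {"E": -4, "G": -2, "L": 0}

def standing(rung: str) -> int:
    ladder, level = rung[0], int(rung[1])
    return OFFSET[ladder] + level

rungs = ["L4", "L3", "L2", "L1", "G4", "G3", "G2", "G1", "E4", "E3", "E2", "E1"]
for rung in sorted(rungs, key=standing):
    print(rung, standing(rung))
# -> E1 > E2 > {E3, G1} > {E4, G2} > {G3, L1} > {G4, L2} > L3 > L4
```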

The Strivers (E4, 0.5%) are another transitional class, generally for young people only. They aren’t actually Elite, but they might, if lucky, move into E3. Junior investment bankers, law firm associates, and young startup entrepreneurs are in this category. They’re trying to “break in” to something rich and successful. If they get in, they’ll become solid E3. If they fail, they usually return to G2: upper-middle-class professionals not strongly bound to the Elite infrastructure. G2 is usually a happier place than E4, but E3’s and E4’s tend to deride this transition. In startups, a business move favoring this step (toward G1-2 upper-middle-class stability) is derided as a “lifestyle business”.

Elite Servants (E3, 0.8%) are the law-firm partners, senior investment bankers, and corporate executives that might be called the “working rich”, and they comprise what was once called the “white-shoe” culture. They’re quite well-off, as far as servants go, often earning incomes from $200,000 to $5 million per year, but their social standing is conditional. They serve the rich, and the rich have to keep finding them useful for them to maintain their place. It’s not an enviable place to be, because the social expectations associated with maintaining E3 status (and the connections it requires) demand high spending, and even the extremely well-compensated ($1 million per year and up) E3’s rarely have the savings to survive more than a year or two without a job. E3’s tend to have as many money problems as people in the lower social classes. They also suffer because they live in a “small world” society driven by reputation, long-standing grudges, and often petty contempt. E3’s still get fired– a lot, because the pretense that justifies E3-level status (that of a large-company “executive”) requires leadership, and many don’t have it– and when that happens, they can face years during which they can’t find appropriate employment.

People tend to think of face leaders (politicians and CEOs) as belonging to a higher social class, but most are E3. If they were higher, they wouldn’t have to work so hard to be rich. Examining our most recent presidents, Barack Obama is G1, the George Bushes were E2, Bill Clinton was E3, and Reagan was in the celebrity category that is a hybrid of E3 and L1. John Kennedy was E2, while Lyndon Johnson was L1. Most CEOs, however, are strictly E3, because CEOs are “rubber gloves” that are used for dirty work and thrown aside if they get too filthy. There’s too much reputation risk involved in being a corporate CEO for an E2 to want the job under most circumstances.

National Elite (E2, 0.19%) are what most Americans think of as “upper class” or “old money”. They have Roman numerals in their names, live in the Hamptons (although they’ve probably stopped using “summer” as a verb now that “the poors” know about it), and their families have attended Ivy League colleges for generations. They’re socially very well connected and have the choice not to work, or the choice to work in a wide array of well-compensated and highly-regarded jobs. Rarely do they work full time under traditional employment terms– never as subordinates, sometimes as executives in an apprentice role, often in board positions or “advisory” roles. It’s uncommon for an E2 to put a full-time effort into anything, because their objective with work is to associate their names with successful institutions, not to get too involved.

Maintaining E2 status requires wealth: about $500,000 per year in after-tax income, at a minimum. However, it’s not hard for a person with E2 status and connections to acquire this, even if the family money is lost. The jobs that E3’s regard as the pinnacle of professional achievement (the idea that such a notion as “professional achievement” exists is laughable to E2’s; paid full-time work is dishonorable from an E2 perspective) are their safety careers.

Global Elite (E1, ~60,000 people worldwide, about 30% of those in the U.S.) are a global social class, and extremely powerful in a trans-national way. These are the very rich, powerful, and deeply uncultured barbarians from all over the world who start wars in the Middle East for sport, make asses of themselves in American casinos, rape ski bunnies at Davos, and run the world. Like the Persian army in 300, they come from all over the place; they’re the ugliest and most broken of each nation. They’re the corporate billionaires and drug kingpins and third-world despots and real estate magnates. They’re not into the genteel, reserved “WASP culture” of E2’s, the corporate earnestness and “white shoe” professionalism of E3’s, or the hypertrophic intellectualism and creativity of G1’s and G2’s. They are all about control, and on a global scale. To channel Heisenberg, they’re in the empire business. They aren’t mere management or even “executives”. They’re owners. They don’t care what they own, or what direction the world takes, as long as they’re on top. They almost never take official executive positions within large companies, but they make a lot of the decisions behind the scenes.

Unlike the National Elite, who tend toward a cultural conservatism and a desire to preserve certain traits that they consider necessary to national integrity, the Global Elite doesn’t give a shit about any particular country. They’re fully multinational and view all the world’s political nations as entities to be exploited (like everything else). They foster corruption and crime if it serves their interests, and those interests are often ugly. Like Kefka from Final Fantasy VI, their reason for living is to create monuments to nonexistence.

For the other social classes, there’s no uniform moral assumption that can apply. G1’s are likable and often deserving cultural leaders, but sometimes foolish, overrated, incompetent, infuriatingly petty, and too prone to groupthink to deserve their disproportionate clout. G2’s tend to have the best (or at least most robust) taste, because they don’t fall into G1 self-referentiality, but they can be just as snooty and cliquish. As “pro-Gentry” as I may seem, it’s a massive simplification to treat that set as entirely virtuous. Likewise, the lower Elite ranks (E2, E3, E4) have their mix of good and bad people. There are E2’s who want to live well and decently, E3’s trying to provide for their families, and E4’s trying to get in because they were brought up to climb the ladder. On the other hand, E1 is pretty much objectively evil, without exceptions. There are decent people who are billionaires, so there’s no income or wealth level at which 100% objective evil becomes the norm. But if you climb the social ladder, you get to a level at which it’s all cancer, all the way up. That’s E1. Why is it this way? Because the top end of the world’s elite is a social elite, not an economic one, and you don’t get deep into an elevated social elite unless you are very similar to the center of that cluster, and for the past 10,000 years the center of humanity’s top-of-the-top cluster has always been deep, featureless evil: people who burn peasants’ faces off because it amuses them. Whether you’re talking about a real person like Hitler, Stalin, Erik Prince, Osama bin Laden, or Kissinger, or a fictional example like The Joker, Kefka, Walter White, or Randall Flagg, when you get to the top of society, it’s always the same guy. Call it The Devil, but what’s scary is that it needs (and has) no supernatural powers; it’s human, and while one of its representatives might get knocked off, another will step up.

Ladder conflict. What does all this mean? How do these ladders interrelate? Do these three separate social class structures often find themselves at odds and fight? Can people be part of more than one?

What I’ve called the Labor, Gentry, and Elite “ladders” can more easily be described as “infrastructures”. For Labor, this infrastructure is largely physical: the relevant connection is knowing how to use a physical device or space, and being trusted to use (without owning, because ownership is out of the question for most) these resources. For the Gentry, it’s an “invisible graph” of knowledge, education, and “interestingness”, composed largely of ideas. For the Elite, it’s a tight, exclusive network centered on social connections, power, and dominance. People can be connected to more than one of these infrastructures, but they usually bind most tightly to the one of highest status, except at the transitional ranks (G4 and E4), which tend to punt people who don’t ascend after some time. The overwhelmingly high likelihood is that a person is aligned most strongly to one and only one of these structures. The values are too conflicting for a person not to pick one horse or another.

I’ve argued that the ladders connect at a two-rung difference, with L2 ~ G4, L1 ~ G3, G2 ~ E4, and G1 ~ E3. These are “social equivalencies” that don’t involve a change in social status, so they’re the easiest transitions to make (in both directions). They represent a transfer from one form of capital to another. A skilled laborer (L2) who begins taking night courses (G4) is using time to get an education rather than more money. Likewise, one who moves from the high gentry (G2) to a 90-hour-per-week job in private wealth management (E4) is applying her refined intellectual skills and knowledge to serving the rich, in the hope of making the connections to become one of them.

That said, these ladders often come into conflict. The most relevant one to most of my readers will be the conflict between the Gentry and the Elite. The Gentry tends to be left-libertarian and values creativity, individual autonomy, and free expression. The Elite tends toward center-right authoritarianism and corporate conformity, and it views creativity as dangerous (except when applied to hiding financial risks or justifying illegal wars). The Gentry believes that it is the deserving elite and the face of the future, and that it can use culture to engineer a future in which its values are elite; while the upper tier of the Elite finds the Gentry pretentious, repugnant, self-indulgent, and subversive. The relationship between the Gentry and Elite is incredibly contentious. It’s a cosmic, ubiquitous war between the past and the future.

Between the Gentry and Labor, there is an attitude of distrust. The Elite has been running a divide-and-conquer strategy between these two categories for decades. This works because the Elite understands (and can ape) the culture of the Gentry, but has something in common with Labor that sets both categories apart from the Gentry: a conception of work as a theater for masculine dominance. This is something that the Elite and Labor both believe in– the visceral strength and importance of the alpha male in high-stakes gambling settings such as most modern work– but that the Gentry would rather deny. Gender is a major part of the Elite’s strategy in turning Labor against the Gentry: make the Gentry look effeminate. That’s why “feminist” is practically a slur, despite the world desperately needing attention to women’s political equality, health, and well-being (that is, feminism).

The Elite also uses the Underclass in a different process: the Elite wants Labor to think the Gentry intends to conspire with the Underclass to dismantle Labor values and elevate these “obviously undeserving” people to at least the status of Labor, if not above it. They exploit Labor’s fear. One might invoke racism and the “Southern strategy” in politics as an example of this, but the racial part is incidental. The Elite doesn’t care whether it’s blacks or Latinos or “illegals” or red-haired people or homosexuals (most of whom are not part of the Underclass) being used to frighten Labor into opposing and disliking the Gentry; they just know that the device works and that it has pretty much always worked.

The relationship between the Gentry and Elite is one of open rivalry, and that between the Gentry and Labor is one of distrust. What about Labor and the Elite? That one is not symmetric. The Elite exploits and despises Labor as a class made up mostly of “useful idiots”. How does Labor see the Elite? They don’t. The Elite has managed to convince Labor that the Gentry (who are open about their cultural elitism, while the Elite hides its social and economic elitism) is the actual “liberal elite” responsible for Labor’s misery over the past 30 years. In effect, the Elite has constructed an “infinity pool” in which it appears to be a hyper-successful extension of Labor, lumping these two disparate ladders into an “us” and placing the Gentry and Underclass into “them”.

Analysis of current conflict.

Despite its upper ranks being filled by people who are effectively thugs, the Elite isn’t entirely evil. By population, most of them are merely E3 and E4 stewards with minimal decision-making power, and a lot of those come from (and return to) the Gentry and maintain those values. On the other hand, Elite values tend to be undesirable, because at that structure’s pinnacle are the E1 crime bosses. There are good people within the Elite, even though the Elite itself is not good.

For virtue, the Gentry does better. I don’t want to fall into the American fallacy of conflating “middle class” with virtue (there are awful and good people in all social classes), but I think that the Gentry is a more inclusive and reflective elite: one of ideas and values, not based on exclusivity.

One Gentry stronghold for a long time has been high technology: a meritocracy where skill, know-how, and drive enabled a person to rise to technical leadership of increasing scope, and eventually to business leadership and entrepreneurship. This created the engineering culture of Hewlett-Packard (before Fiorina) and the “Don’t Be Evil” mantra of Google. This is Gentry culture asserting itself: be ethical, seek positive-sum outcomes, and win by being great rather than by harming, robbing, or intimidating others. It’s not how business is practiced in most of the world– zero-sum thuggery is a lot more common– but it’s how great businesses are made. This weird world, in which self-made success was regarded more highly than entrenchment, symbolized by Silicon Valley, enabled people from the Gentry to become very rich and Gentry ideas to establish lasting success in business.

What has made America great, especially from 1933 until now, has been the self-assertion of the Gentry following the defeat of the Elite. The first half of the American Era (1933 to 1973) utterly emasculated the Elite. Their rapacious greed and world-fucking parasitism was repaid with 90-percent tax rates, and they were told to consider themselves lucky that it wasn’t full-on socialism (or a violent revolution in which they all died, Paris-1793-style). The so-called “WASP culture” of the E2 class derives many of its norms from the paranoia of that period (when the global elite was very small, and they were the “robber baron” elite). For example, the demand that a house not be visible from the road comes from a time in which such visibility was physically dangerous. This four-decade curtailment of the American Elite, and the more resounding destruction of the European ones, was one of the best things that ever happened to the world. It made the golden age of Silicon Valley possible.

There are a lot of reasons why this “golden age” of a disempowered Elite was able to occur, but World War II was the biggest of them all. Future historians will probably regard the two World Wars as one monstrous conflict, with a period of crippling, worldwide economic depression between them. Few disagree with the claim, for example, that the resolution of the First World War led inexorably to the evils of totalitarianism and to the second of these wars. This giant and largely senseless conflict’s causes seem complex– historians are still debating World War I’s inception– but the short version is that the world’s Elites did that. There was a 30-year period of war, famine, poverty, racial pogroms, and misery that existed largely because a network of high-level obligations and horrendous ideas (especially the racism used to justify colonialism, which benefitted the rich of these societies enormously, but sent the poor to die in unjust wars, contract awful diseases for which they had no immunity, and commit atrocities) set the conditions up. After about a hundred million deaths and thirty years of war, societies finally decided, “No More”. They dismantled their Elites vigorously, North American and European nations included. This became the “golden age” of the educated Gentry. In the U.S., where the 1950s were a decade of prosperity (in Europe, they were a period of rebuilding, and not very prosperous), it was also the “golden age of the middle class”.

However, the Elite has brought itself back to life. This Gilded Age isn’t as bad as the last one, but it’s heading that way. It started in the late 1970s when the U.S. fell in love again with elitism: Studio 54, cocaine– a drug that captures the personality of that cultural change well, because its effect is to flood the brain with dopamine, causing extreme arrogance– and “trickle-down economics”.

Assessing the present state of conflict requires attention to what each party wants. What does the Gentry want? The Gentry has a strange, love-hate relationship with capitalism. Corporations are detested (even more than they deserve) by this class, and most people in the Gentry want the U.S. to look more like Europe: universal healthcare, decent vacation allotments, and cheap, ecologically sound high-speed trains. This might give the impression of a socialist bent, and that impression’s not wrong. Yet their favorite places are New York (the center of capitalism) and Silicon Valley (also fiercely capitalistic). Although left-leaning, the Gentry are strong champions of non-corporate capitalism. There is no contradiction here. European social democracies have likewise managed to create hybrid systems that combine the safety and infrastructure of socialism with the innovation and individual liberty of capitalism: the best of both worlds.

For a contrast, what the Elite has been pushing for is the worst of both worlds, at least for average people. The truth of corporate “capitalism” is that it provides the best of both systems (socialism and capitalism) for the Elite and the worst of both for everyone else. It’s a welfare state in which only very well-connected people are citizens; it favors command economies (which are what most corporations are, internally); and it stifles the positive-sum innovation that is capitalism’s saving grace. The upper tier of society wants social stability for themselves (to stay in and keep others out), but they favor extreme economic variability (also known as “inequality”) because it gives them more opportunities to exploit their social status for economic gain (read: private-sector corruption).

Air travel in the contemporary U.S. is an illustrative example of this “worst of both worlds” scenario: the pricing is erratic, unreasonable, and even a bit mean-spirited, which shows the volatility of capitalism, while the low quality of service and the abysmal morale of the industry feel like direct transplants from the Soviet Union.

The future.

A major battle is coming, with all three of these categories (Labor, Gentry, and Elite) involved. The Gentry and the Elite are fundamentally opposed on the type of society they want to see and, for decades, the Elite has been winning, but its victories are becoming harder to win as technology opens up the world. Labor might seem like a nonparticipant in the ideological battles, but they make up most of the casualties, and they’ve seen shells land in their backyard (especially if they live in Detroit). Not only are they losing their jobs and social status, but their communities have been demolished.

Something else is happening, which is relevant both in a macrohistorical sense and to the U.S. in 2012. One way to divide human history is into three eras: pre-Malthusian, trans-Malthusian, and post-Malthusian. I refer, of course, to the prediction of Thomas Malthus, early in the Industrial Revolution, that population growth in contemporary societies would lead to a catastrophe, because population grew exponentially while economic growth was linear. He was wrong. Economic growth has always been exponential, but for most of human history it followed a very slow (under 1% per year) exponential curve– slower than population growth, and slow enough to look linear. His mathematical model was wrong, but his conclusion– that population grows until it is checked (i.e. people die) by disease, famine, and war– held for almost every human society from the dawn of time to about 1800. He was wrong that it would afflict England and the richer European countries in the mid-19th century, because the Industrial Revolution accelerated economic growth enough to prevent a global Malthusian crunch. On the other hand, there were local Malthusian catastrophes: Ireland endured severe poverty and oppression, and colonialism was deeply horrible and did, in fact, exhibit many of the vices Malthus warned about.
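A quick sketch shows why sub-1% exponential growth passes for linear on a human timescale while faster population growth overwhelms it (the growth rates are illustrative assumptions, not historical estimates):

```python
# Slow exponential economic growth vs. faster population growth.
ECON_RATE = 0.005  # 0.5% per year: nearly indistinguishable from linear
POP_RATE = 0.015   # 1.5% per year: population outrunning the economy

for years in (25, 50, 100, 200):
    econ = (1 + ECON_RATE) ** years
    pop = (1 + POP_RATE) ** years
    print(f"{years:>3} yrs: economy x{econ:.2f}, population x{pop:.2f}, "
          f"per-capita x{econ / pop:.2f}")
# Over 100 years the economy grows ~1.65x (a linear 0.5%/yr trend would give
# 1.50x), while population grows ~4.43x: per-capita income falls, which is
# the Malthusian trap.
```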

The world was pre-Malthusian when societies were doomed to grow faster in population than in their ability to support it. This led, over the millennia, to certain assumptions about society that can be categorized as “zero-sum”. For one tribe to take care of its young, another tribe must lose wealth or be destroyed. For the English to be rich, the Irish must starve. For Southern whites to live well, blacks must be slaves. For capital to be profitable, labor must be exploited. If Catholic Spain has one colony, Protestant England must have more. For the German people to have “Lebensraum”, Central European countries must be invaded and their inhabitants killed. “Medieval” horrors were an artifact of the Malthusian reality of that time, but such atrocities continued even as the long-standing Malthusian inequality (population growth exceeding economic growth) reversed itself.

We are now in a trans-Malthusian state, and have been for about two hundred years. Global economic growth is now over 4% per year, which is the fastest it has ever been, and there’s no sign of it slowing down. The world has a lot of problems, and there are pockets of severe decay, corruption, and poverty; but on the whole, it’s becoming a better place, and at an accelerating (hyper-exponential) rate. The world is no longer intrinsically Malthusian, but pre-Malthusian attitudes still dominate, especially at the pinnacles of our most successful societies. This shouldn’t be shocking, because the very traits (especially low empathy and greed) that were required to succeed in a zero-sum world are still strong in our upper classes. This legacy won’t go away overnight. The people haven’t changed very much. Pre-Malthusian fearmongering is also very effective on less intelligent people, who haven’t figured out that the world has changed in the past two hundred years. They still believe in the zero-sum world wherein, if “illegal” immigrants “take all the jobs”, middle-class white people will starve.

The trans-Malthusian state is, I believe, intrinsically more volatile than a pre-Malthusian one. Technology is causing the job market to change faster, but this paradoxically makes individual spells of unemployment longer. We’re also seeing something that pre-Malthusian economies didn’t have to worry about: economic depressions. This is not to romanticize pre-Malthusian life or societies. They experienced famines, wars, and disease epidemics that killed far more people than any economic depression, but those calamities had natural or historical causes external to economic progress itself. We’ve been able to eliminate most of those evils from life without losing anything in the process. Depressions, in my view, come from economic progress itself (or, more precisely, from our inability to manage growth in a way that distributes prosperity rather than displacing people). The first quarter of the 20th century saw unprecedented advancement in food production– a good thing, undeniably– which caused agricultural commodities to drop in price. This caused small farmers (who could not partake in these advances to the same extent) to fall into poverty. Without the small farmers, the towns they supported weren’t doing well either. Poverty isn’t a “moral medicine” that clears out the bad in society. It doesn’t make people better or harder working. It ruins people. It’s a cancer. It spreads. And it did. Rural poverty was severe in the United States by 1925, before the Depression officially began. Urban sophisticates and elites were OK in 1925, hence that era is remembered as prosperous. In 1933? Not so much. The cancer had grown. Throughout the 1930s, the rich were terrified of an American communist revolution.

We don’t want another Great Depression, and what’s scary in 2012 is that it seems like what happened to agricultural products in the 1920s is now happening to almost all human labor. We’re outsourcing, automating, and “streamlining”, and all of these changes are fundamentally good, but if we don’t take steps to prevent the collapse of the middle class, we could lose our country. This will almost certainly require innovations that the right wing will decry as “socialism”, but it will also involve techniques (such as crowd-funding and microloans for small businesses) that are far more capitalistic than anything the corporates have come up with.

We are trans- (not post-) Malthusian because we live in a world where scarcity is still in force (although often artificial) and zero-sum mentalities dominate (even though they’re inappropriate to a technological world). If Mexican immigrants “take the jobs”, the fear goes, formerly middle-class white people will be without healthcare. What’s required is to step away from the zero-sum attitude (expressed often in racism) and recognize that no one of any ethnicity, jobless or employed, should be without healthcare. Ever. Technology is great at helping us generate more resources and make more with what we have, and we have to accept that it will “unemploy” people on a regular basis, but the bounty should be distributed fairly, and not hogged by the fortunate while those it renders transiently jobless are allowed to fall into poverty. “Collateral damage” is not acceptable and, if the 1920s and ’30s are illustrative, it can’t be contained. The damage will spread.

What does this have to do with the ladders and their conflict? Labor is a trans-Malthusian social category because it lives in a world that values fair play (a positive-sum, post-Malthusian value) but that is constrained by artificial scarcity. The Elite is pre-Malthusian; they are obsessed with the zero-sum game of social status and the need to keep themselves elevated and others out. The Gentry, although not without its faults, is properly post-Malthusian. Their values (political liberalism, individual freedom, enough socialism to ensure a just society, positive-sum outlook, and a positive view of technology) represent what it will take to evolve toward a post-Malthusian state.

Tech companies: open allocation is your only real option.

I wrote, about a month ago, about Valve’s policy of allowing employees to transfer freely within the company, symbolized by placing wheels under the desks (a physical marker of a superior corporate culture that makes traditional tech perks look like toys) and expecting employees to self-organize. I’ve taken to calling this seemingly radical notion open allocation– employees have free rein to work on projects as they choose, without asking for permission or formal allocation– and I’m convinced that it’s the only thing that actually works in software. There’s one exception. Some degree of closed allocation is probably necessary in the financial industry because of information barriers (mandated by regulators), and this might be why getting the best people to stay in finance is so expensive. It costs that much to keep good people in a company where open allocation isn’t the norm, and where the workflow is so explicitly directed and constrained by the “P&L” and by justifiable risk aversion. If you can afford to give engineers 20 to 40 percent raises every year and thereby compete with high-frequency-trading (HFT) hedge funds, you might be able to retain talent under closed allocation. If not, read on.

Closed allocation doesn’t work. What do I mean by “doesn’t work”? I mean that, as things currently go in the software industry, most projects fail. Either they don’t deliver any business value, or they deliver too little, or they deliver some value but exert long-term costs as legacy vampires. Most people also dislike their assigned projects and put minimal or even negative productivity into them. Good software is exceedingly rare, and not because software engineers are incompetent, but because when they’re micromanaged, they stop caring. Closed allocation and micromanagement provide an excuse for failure: I was on a shitty project with no upside. I was set up to fail. Open allocation blows that away: a person who has a low impact because he works on bad projects is making bad choices and has only himself to blame.

Closed allocation is the norm in software, and doesn’t necessarily entail micromanagement, but it creates the possibility for it, because of the extreme advantage it gives managers over engineers. An engineer’s power under closed allocation is minimal: his one bit of leverage is to change jobs, and that almost always entails changing companies. In a closed-allocation shop, project importance is determined prima facie by executives long before the first line of code is written, and formalized in magic numbers called “headcount” (even the word is medieval, so I wonder if people piss at the table, at these meetings, in order to show rank) that represent the hiring authority (read: political strength) of various internal factions. Headcount numbers are supposed to prevent reckless hiring by the company as a whole, and that’s an important purpose, but their actual effect is to make internal mobility difficult, because most teams would rather save their headcount for possible “dream hires” who might apply from outside in the future than risk a spot on an engineer with an average performance-review history (which is what most engineers will have). Headcount bullshit makes it nearly impossible to transfer unless (a) someone likes you on a personal basis, or (b) you have a 90th-percentile performance-review history (in which case you don’t need a transfer). Macroscopic hiring policies (limits, and sometimes freezes) are necessary to prevent the company from over-hiring, but internal headcount limits are one of the worst ideas ever. If people want to move, and the leads of those projects deem them qualified, there’s no reason not to allow it. It’s good for the engineers and for the projects, which get more motivated people working on them.

When open allocation is in play, projects compete for engineers, and the result is better projects. When closed allocation is in force, engineers compete for projects, and the result is worse engineers. 

When you manage people like children, that’s what they become. Traditional, 20th-century management (so-called “Theory X”) is based on the principle that people are lazy and need to be intimidated into working hard, and that they’re unethical and need to be terrified of the consequences of stealing from the company, with a definition of “stealing” that includes “poaching” clients and talent, education on company time, and putting their career goals over the company’s objectives. In this mentality, the only way to get something decent out of a worker is to scare him by threatening to turn off his income– suddenly and without appeal. Micromanagement and Theory X are what I call the Aztec Syndrome: the belief in many companies that if there isn’t a continual indulgence in sacrifice and suffering, the sun will stop rising.

Psychologists have spent decades trying to answer the question, “Why does work suck?” The answer might be surprising. People aren’t lazy, and they like to work. Most people do not dislike the activity of working, but dislike the subordinate context (and closed allocation is all about subordination). For example, people’s minute-by-minute self-reported happiness tends to drop precipitously when they arrive at the office and rise when they leave it, but it improves once they start actually working. They’re happier not to be at an office, but if they’re in an office, they’re much happier when working than when idle. (That’s why workplace “goofing off” is such a terrible idea; it does nothing for office stress and it lengthens the day.) People like work. It’s part of who we are. What they don’t like, and what enervates them, is the subordinate context and the culturally ingrained intimidation. This suggests the so-called “Theory Y” school of management, which holds that people are intrinsically motivated to work hard and do good things, and that management’s role is to remove obstacles.

Closed allocation is all about intimidation: if you don’t have this project, you don’t have a job. Tight headcount policies and lockout periods make internal mobility extraordinarily difficult– much harder than getting hired at another company. The problem is that intimidation doesn’t produce creativity, and it erodes people’s sense of ethics (when people are under duress, they feel less responsible for what they are doing). It also provides the wrong motivation: the goal becomes to avoid getting fired, rather than to produce excellent work.

Also, if the only way a company can motivate people to do a project is to threaten to turn off a person’s income, that company should really question whether that project’s worth doing at all.

Open allocation is not the same thing as “20% time”, and it isn’t a “free-for-all”. Open allocation does not mean “everyone gets to do what they want”. A better way to represent it is: “Lead, follow, or get out of the way” (and “get out of the way” means “leave the company”). To lead, you have to demonstrate that your product is of value to the business, and convince enough of your colleagues to join your project that it has enough effort behind it to succeed. If your project isn’t interesting and doesn’t have business value, you won’t be able to convince colleagues to bet their careers on it and the project won’t happen. This requires strong interpersonal skills and creativity. Your colleagues decide, voting with their feet, if you’re a leader, not “management”. If you aren’t able to lead, then you follow, until you have the skill and credibility to lead your own project. There should be no shame in following; that’s what most people will have to do, especially when starting out.

“20% time” (or hack days) should exist as well, but that’s not what I’m talking about. Under open allocation, people are still expected to show that they’ve served the needs of the business during their “80% time”. Productivity standards are still set by the projects, but employees choose which projects (and sets of standards) they want to pursue. Employees unable to meet the standards of one project must find another one. 20% time is more open, because it entails permission to fail. If you want to do a small project with potentially high impact, prove that you have the ability to lead by starting a skunk-works project, or volunteer, take courses, or attend conferences on company time, that’s what it’s for. During their “80% time”, people are still expected to lead or follow on a project with some degree of sanction. They can’t just “do whatever they want”.

Four types of projects. The obvious question that open allocation raises is, “Who does the scut work?” The answer is simple: people do it if they will get promoted, formally or informally, for doing it, or if their project directly relies on it. In other words, the important but unpleasant work gets done, by people who volunteer to do it. I want to emphasize “gets done”. Under closed allocation, a lot of the unpleasant stuff never really gets done well, especially if unsexy projects don’t lead to promotions, because people are investing most of their energy into figuring out how to get to better projects. The roaches are swept under the carpet, and people plan their blame strategies months in advance.

If we classify projects into four categories by important vs. unimportant, and interesting vs. unpleasant, we can assess what happens under open allocation. Important and interesting projects are never hard to staff. Unimportant but interesting projects are for 20% time; they might succeed and become important later, but they aren’t treated as critical until they’re proven to have real business value, so people are allowed to work on them but are strongly encouraged to also find and concentrate on work that’s important to the business. Important but unpleasant projects are rewarded with bonuses, promotions, and the increased credibility accorded to those who do undesirable but critical work. These bonuses should be substantial (six and occasionally even seven figures for critical legacy rescues); if the project is actually important, it’s worth paying for. If it’s not, then don’t spend the money. Unimportant and unpleasant projects, under open allocation, don’t get done. That’s how it should be. This is the class of undesirable, “death march” projects that closed allocation nurtures (they never go away, because to suggest they aren’t worth doing is an affront to the manager who sponsors them, and therefore a career-ending move) but that open allocation eliminates. Under open allocation, people who transfer away from these death marches aren’t “deserters”. It’s management’s fault if, out of a whole company, no one wants to work on the project. Either the project’s not important, or they didn’t provide enough enticement.
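To make the taxonomy concrete, here’s a minimal sketch in Python of the quadrant-to-outcome mapping described above. The function and its output strings are mine, invented for illustration; only the four categories and their fates come from the text.

```python
# A minimal sketch of the four-quadrant project taxonomy under open
# allocation. The categories and outcomes paraphrase the essay; the
# function and its names are hypothetical, for illustration only.

def allocation_outcome(important: bool, interesting: bool) -> str:
    if important and interesting:
        return "staffs itself: never hard to find volunteers"
    if not important and interesting:
        return "20% time: permitted, not treated as critical until proven"
    if important and not interesting:
        return "committed project: substantial bonuses and promotions attached"
    return "doesn't get done, and that's how it should be"

for important in (True, False):
    for interesting in (True, False):
        label = (("important" if important else "unimportant") + ", "
                 + ("interesting" if interesting else "unpleasant"))
        print(f"{label}: {allocation_outcome(important, interesting)}")
```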

Closed allocation is irreducibly political. Compare two meanings of the three-word phrase, “I’m on it”. In an open-allocation shop, “I’m on it” is a promise to complete a task, or at least to try to do it. It means, “I’ve got this.” In a closed-allocation shop, “I’m on it” means “political forces outside of my control require me to work only on this project.”

People complain about the politics at their closed-allocation jobs, but they shouldn’t be surprised, because it’s inevitable that politics will eclipse the matter of actually getting work done. It happens every time, like clockwork. The metagame becomes a million times more important than actually sharpening pencils or writing code. If you have closed allocation, you’ll have a political rat’s nest. There’s no way to avoid it. In closed allocation, the stakes of project allocation are so high that people are going to calculate every move based on future mobility. Hence, politics. What tends to happen is that a four-class system emerges, corresponding to the four categories of work I developed above. The most established engineers, who have the autonomy and leverage to demand the best projects, end up in the “interesting and important” category. They get good projects the old-fashioned way: proving that they’re valuable to the company, then threatening to leave if they aren’t reassigned. Engineers who are looking for promotions into managerial roles tend to take on the unpleasant but important work, and attempt to coerce new and captive employees into doing the legwork. The upper-middle class of engineers can take the interesting but unimportant work, but it tends to slow their careers if they intend to stay at the same company (they learn a lot, but they don’t build internal credibility). The rest– the majority, who have no significant authority over what they work on– get a mix, but a lot of them get stuck with the uninteresting, unimportant work (and closed-allocation shops generate tons of that stuff) that exists for reasons rooted in managerial politics.

What are the problems with open allocation? The main issue with open allocation is that it seems harder to manage, because it requires managers to actively motivate people to do the important but unpleasant work. In closed allocation, people are told to do work “because I said so”. Either they do it, or they quit, or they get fired. It’s binary, which seems simple. There’s no appeal process when people fail projects or projects fail people– and no one ever knows which happened– extra-hierarchical collaboration is “trimmed”, and efforts can be tracked by people who think a single spreadsheet can capture everything important about what is happening in the company. Closed-allocation shops have hierarchy, clear chains of command, and single points of failure (because a person can be fired from a whole company for disagreeing with one manager) out the proverbial wazoo. They’re Soviet-style command economies that somehow ended up being implemented within supposedly “capitalist” companies, but they “appear” simple to manage, and that’s why they’re popular. The problem with closed-allocation policies is that they lead to enormous project failure rates, inefficient allocation of time, talent bleeds, and unnecessary terminations. In the long term, all of this unplanned and surprising garbage work makes the manager’s job harder, more complex, and worse. When assessing the problems associated with open allocation (such as increased managerial complexity), it’s important to consider that the alternative is much worse.

How do you do it? The challenging part of open allocation is enticing people to do unpleasant projects. There needs to be a reward. Make the bounty too high, and people come in with the wrong motivations (capturing the outsized reward, rather than getting a fair reward while helping the company), and the perverse incentives can even lead to “rat farming” (creating messes in the hopes of being asked to repair them at a premium). Make it too low, and no one will do it, because no one wise likes a company well enough to risk her own career on a loser project (and part of what makes a bad project bad is that, absent recognition, it’s career-negative to do undesirable work). Make the reward too monetary and it looks bad on the balance sheet, and gossip is a risk: people will talk if they find out a 27-year-old was paid $800,000 in stock options (note: there had better be vesting applied), even if it’s justified in light of the legacy dragon being slain. Make it too career-focused and you have people getting promotions they might not deserve, because doing unpleasant work doesn’t necessarily give a person technical authority in all areas. It’s hard to get the carrot right. The appeal of closed allocation is that the stick is a much simpler tool: do this shit or I’ll fire you.

The project has to be “packaged”. It can’t be all unpleasant and menial work, and it needs to be structured to involve some of the leadership and architectural tasks necessary for the person completing it to actually deserve the promised promotion. It’s not “we’ll promote you because you did something grungy” but “if you can get together a team to do this, you’ll all get big bonuses, and you’ll get a promotion for leading it.” Management also needs to have technical insight on hand in order to do this: rather than doing grunt work as a recurring cost, kill it forever with automation.

An important notion in all this is that of a committed project. Effectively, this is what executives should create when they spot a quantum of work that the business needs but that is difficult and that engineers are unlikely to find enjoyable. These shouldn’t be created lightly. Substantial cash and stock bonuses (vested over the expected duration of the project) and promotions are associated with completing these projects, and if more than 25% of the workload is committed projects, something’s being done wrong. A committed project offers high visibility (it’s damn important; we need this thing) and graduation into a leadership role. No one is “assigned” to a committed project. People “step up” and work on them because of the rewards. If you agree to work on a committed project, you’re expected to make a good-faith effort to see it through for an agreed-upon period of time (typically, a year). You do it no matter how bad it gets (unless you’re incapable) because that’s what leadership is. You should not “flake out” because you get bored. Your reputation is on the line.
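To make “vested over the expected duration of the project” concrete, here’s a minimal sketch of one way such a bonus could pay out. Every number and name below is hypothetical; the essay specifies only that the bonus vests across the project’s life.

```python
# A minimal sketch of a committed-project bonus vesting linearly over the
# project's expected duration. All figures are hypothetical; the essay
# says only that bonuses should vest over the life of the project.

def vested_amount(total_bonus: float, months_elapsed: int,
                  project_months: int = 12) -> float:
    """Linear monthly vesting, capped at the full bonus."""
    months = min(max(months_elapsed, 0), project_months)
    return total_bonus * months / project_months

bonus = 120_000  # hypothetical bonus for a one-year committed project
for m in (3, 6, 12):
    print(f"month {m:2d}: ${vested_amount(bonus, m):,.0f} vested")
```

The point of the schedule is incentive alignment: someone who flakes out halfway through leaves half the bounty on the table.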

Companies often delegate the important but undesirable work in an awkward way. The manager gets a certain credibility for taking on a grungy project, because he’s usually at a level where he has basic autonomy over his work and what kinds of projects he manages. If he can motivate a team to accomplish it, he gets a lot of credit for taking on the gnarly task. The workers, under closed allocation, get zilch. They were just doing their jobs. The consequence of this is that a lot of bodies end up buried by people who are showing just enough presence to remain in good standing, but putting the bulk of their effort into moving to something better. Usually, it’s new hires without leverage who get staffed on these bad projects.

I’d take a different approach to committed projects. Working on one requires (as the name implies) commitment. You shouldn’t flake out because something more attractive comes along. So only people who’ve proven themselves solid and reliable should be working on (much less leading) them. To work on one (beyond a 20%-time basis), you’d have to have been at the company for at least a year, be senior enough for the leadership to believe that you can deliver, and be in strong standing at the company. I’d never let a junior hire take on a committed project unless it was absolutely required– too much risk.

How do you fire people? When I was in school, I enjoyed designing and playing with role-playing systems. Modeling a fantasy world is a lot of fun. Once I developed an elaborate health mechanic that differentiated fatigue, injury, pain, blood loss, and “magic fatigue” (which affected magic users) and aggregated them (determining attribute reductions and eventual incapacitation) in what I considered to be a novel way. One small detail I didn’t include was death, so the first question I got was, “How do you die?” Of course, blood loss and injuries could do it. In a no-magic, medieval world, loss of the head is an incapacitating and irreversible injury, and so is exsanguination. However, in a high-magic world, “death” is reversible. Getting roasted, eaten and digested by a dragon might be reversible. But there has to be a possibility (though it doesn’t require a dedicated game mechanic) for a character to actually die in the permanent, create-a-new-character sense of the word. Otherwise there’s no sense of risk in the game: it’s just rolling dice to see how fast you level up. My answer was to leave that decision to the GM. In horror campaigns, senseless death (and better yet, senseless insanity) is part of the environment. It’s a world in which everything is trying to kill you and random shit can end your quest. But in high-fantasy campaigns with magic and cinematic storylines, I’m averse to characters being “killed by the dice”. If the character is at the end of his story arc, or does something inane like putting his head in a dragon’s mouth because he’s level 27 and “can’t be killed”, then he dies for real. Not “0 hit points”, but the end of his earthly existence. But he shouldn’t die just because the player is hapless enough to roll four 1s in a row on a d10. Shit happens.

The major problem with “rank and yank” (stack-ranking with enforced culling rates) and especially closed allocation is that a lot of potentially great employees are killed by the dice. It becomes part of the rhythm of the company for good people to get inappropriate projects or unfair reviews, blow up mailing lists or otherwise damage morale when it pisses them off, then get fired or quit in a huff. Yawn… another one did that this week. As I alluded to in my Valve essay, this is the Welch Effect: the ones who get fired under rank-and-yank policies are rarely low performers, but junior members of macroscopically underperforming teams (who rarely have anything to do with that underperformance). The only way to enforce closed allocation is to fire people who fail to conform to it, but this also means culling the unlucky, whose low impact (for which they may not be at fault) looks like malicious noncompliance.

Make no mistake: closed allocation is as much about firing people as guns are about killing people. If people aren’t getting fired, many will work on what they want to anyway (ignoring their main projects) and closed allocation has no teeth. In closed-allocation shops, firings become a way for the company to clean up its messes. “We screwed this guy over by putting him on the wrong project; let’s get rid of him before he pisses all over morale.” Firings and pseudo-firings (“performance improvement plans”, transfer blocks, and intentional dead-end allocations) become common enough that they’re hard to ignore. People see them, and see that they sometimes happen to good people. And they scare people, especially because the default in non-financial tech companies is to fire quickly (“fail fast”) and without severance. It’s a really bad arrangement.

Do open-allocation shops have to fire people? The answer is an obvious “yes”, but it should be damn rare. The general rule of good firing is: mentor subtractors, fire dividers. Subtractors are good-faith employees who aren’t pulling their weight. They try, but they’re not focused or skilled enough to produce work that would justify keeping them on the payroll. Yet. Most employees start as subtractors, and the amount of time it takes to become an adder varies. Most companies try to set guidelines for how long an employee is allowed to take to become an adder (usually about 6 months). I’d advise against setting a firm timeframe, because what’s important is not how fast a person has learned (she might have had a rocky start) but how fast, and more importantly how well, she can learn.

Subtractors are, except in an acute cash crisis when they must be laid off for business reasons, harmless. They contribute microscopically to the burn rate, but they’re usually producing some useful work, and getting better. They’ll be adders and multipliers soon. Dividers are the people who make whole teams (or possibly the whole company) less productive. Unethical people are dividers, but so are people whose work is of such low quality that messes are created for others, and people whose outsized egos produce conflicts. Long-term (18+ months) subtractors become “passive” dividers because of their morale effects, and have to be fired for the same reason. Dividers smash morale, and they’re severe culture threats. No matter how rich your company is and how badly you may want not to fire people, you have to get rid of dividers if they don’t reform immediately. Dividers ratchet up their toxicity until they are capable of taking down an entire company. Firing can be difficult because many dividers shine as individual contributors (“rock stars”) even as they taketh away through their effects on morale, but there’s no other option.

My philosophy of firing is that the decision should be made rarely, swiftly, for objective reasons, and with a severance package sufficient to cover the job search (unless the person did something illegal or formally unethical) that includes non-disclosure, non-litigation and non-disparagement terms. This isn’t about “rewarding failure”. It’s about limiting risk. When you draft “performance improvement plans” to justify termination without severance, you’re externalizing the cost to people who have to work with a divider who’s only going to get worse post-PIP. Companies escort fired employees out of the building, which is a harsh but necessary risk-limiting measure; but it’s insane to leave a PIP’d employee in access for two months. Moreover, when you cold-fire someone, you’re inviting disparagement, gossip, and lawsuits. Just pay the guy to go away. It’s the cheapest and lowest-variance option. Three months of severance and you never see the guy again. Good. Six months and he speaks highly of you and your company: he had a rocky time, you took care of him, and he’s (probably) better off now. (If you’re tight on money, which most startups are, stay closer to the 3-month mark. You need to keep expenses low more than you need fired employees to be your evangelists. If you’re really tight, replace the severance with a “gardening leave” package that continues his pay only until he starts his next job.)
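The “cheapest and lowest-variance option” claim is easy to sanity-check with a toy expected-cost comparison. Every figure below is hypothetical– the essay quotes no numbers– but the shape of the trade-off survives any reasonable choice:

```python
# A toy expected-cost comparison of firing options. All figures are
# hypothetical illustrations; the essay's claim is only that severance
# (with non-litigation terms) is the cheap, low-variance choice.

salary_per_month = 15_000                     # hypothetical monthly cost
lawsuit_prob, lawsuit_cost = 0.10, 500_000    # hypothetical litigation risk

cold_fire = lawsuit_prob * lawsuit_cost       # expected cost, no severance
three_months = 3 * salary_per_month           # fixed cost, litigation waived
six_months = 6 * salary_per_month             # fixed cost, plus goodwill

print(f"cold firing (expected): ${cold_fire:,.0f}, plus gossip and morale damage")
print(f"3 months severance:     ${three_months:,.0f}, fixed and final")
print(f"6 months severance:     ${six_months:,.0f}, and he speaks well of you")
```

The cold firing’s expected cost is comparable to the severance, but its variance is enormous: most fired employees don’t sue, and the one who does can cost more than a year of payroll.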

If you don’t fire dividers, you end up with something that looks a lot like closed allocation. Dividers can be managers (a manager can only be a multiplier or a divider, and in my experience, at least half are dividers) or subordinates, but dividers tend to intimidate. Subordinate passive dividers intimidate through non-compliance (they won’t get anything done) while active dividers use interpersonal aggression or sabotage to threaten or upset people (often for no personal gain). Managerial (or proto-managerial) dividers tend to threaten career adversity (including bad reviews, negative gossip, and termination) in order to force people to put the manager’s career goals above their own. They can’t motivate through leadership, so they do it through intimidation and (if available) authority, and they draw people into captivity to get the work they want done, without paying for it on a fair market (i.e. without providing an incentive to do the otherwise undesirable work). At this point, what you have is a closed-allocation company. What this means is that open allocation has to be protected: you do it by firing the threats.

If I were running a company, I think I’d have a 70% first-year “firing” rate for titled managers (by which I mean removal from management; I’d allow lateral moves into IC roles for those who wanted them). By “titled manager”, I mean someone with the authority and obligation to participate in dispute resolution, terminations and promotions, and the packaging of committed projects. Technical leadership opportunities would be available to anyone who could convince people to follow them, but to be a titled people manager you’d have to pass a high bar. (You’d have to be as good at it as I would be, and for 70 to 80 percent of the managers I’ve observed, I’d do a better job.) This high attrition rate would be offset by a few cultural factors and benefits. First, “failing” in the management tryout wouldn’t be stigmatized, because it would be well understood that most people either end it voluntarily or aren’t asked to continue. People would be congratulated for trying out, and they’d still be just as eligible to lead projects– if they could convince others to follow. Second, those who aspired specifically to people-management and weren’t selected would be entitled (unless fully terminated for doing something unethical or damaging) to a six-month leave period in which they’d be permitted to represent themselves as employed. That’s what B+ and A- managers would get– the right to remain as individual contributors (at the same rank and pay) and, if they didn’t want that, a severance offer along with a strong reference if they wished to pursue people management at other companies– but not at this one.

Are there benefits to closed allocation? I can answer this with strong confidence. No, not in typical technology companies. None exist. The work that people are “forced” to do is of such low quality that, on balance, I’d say it provides zero expectancy. In commodity labor, poorly motivated employees are about half as productive as average ones, and the best are about twice as productive. Intimidating the degenerate slackers into bringing themselves up to 0.5x from zero makes sense. In white-collar work and especially in technology, those numbers seem to be closer to -5 and +20, not 0.5 and 2.
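Those multipliers are rough estimates, not measurements, but the arithmetic behind the conclusion is worth making explicit. A minimal sketch, using only the numbers quoted above:

```python
# Arithmetic on the productivity multipliers quoted above (0.5x to 2x in
# commodity labor, -5x to +20x in technology). The ranges are the essay's
# rough estimates; this sketch just works out the trade-off.

# Commodity labor: scaring a zero-output slacker up to the 0.5x floor
# recovers half a worker; the ceiling is only 2x, so little upside is
# put at risk by managing through intimidation.
commodity_gain = 0.5 - 0.0
print(f"commodity labor: intimidation recovers {commodity_gain}x")

# Technology: an embittered engineer can be worth -5x (net destructive),
# a motivated one +20x. Intimidation protects against almost nothing
# and forfeits almost everything.
downside = -5.0 - 1.0     # toxic floor relative to an average engineer
upside = 20.0 - 1.0       # motivated ceiling relative to average
print(f"technology: downside {downside}x, upside forgone {upside}x")
```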

You need closed (or at least controlled) allocation over engineers if there is material proprietary information where even superficial details would represent, if divulged, an unacceptable breach: millions of dollars lost, the company under existential threat, classified information leaked. You impose a “need-to-know” system over everything sensitive. However, this most often requires keeping untrusted people (or simply too many people) out of certain projects (which would be designated as committed projects under open allocation). It doesn’t require keeping people stuck on specific work. Full-on closed allocation is only necessary when there are regulatory requirements that demand it (in some financial cases) or extremely sensitive proprietary secrets involved in most of the work– and comments in public-domain algorithms don’t count (statistical arbitrage strategies do).

What does this mean? Fundamentally, this issue comes down to a simple rule: treat employees like adults, and that’s what they’ll be. Investment banks and hedge funds can’t implement total open allocation, so they make up the difference through high compensation (often at unambiguously adult levels) and prestige (which enables lateral promotions for those who don’t move up quickly). On the other hand, if you’re a tiny startup with 30-year-old executives, you can’t afford banking bonuses, and you don’t have the revolving door into $400k private-equity and hedge-fund positions that the top banks do, so employee autonomy (open allocation) is the only way for you to make up the difference. If you want adults to work for you, you have to offer autonomy at a level currently considered (even in startups) to be extreme.

If you’re an engineer, you should keep an eye out for open-allocation companies, which will become more numerous as the Valve model proves itself repeatedly and all over the place (it will, because the alternative is a ridiculous and proven failure). Getting good work will improve your skills and, in the long run, your career. So work for open-allocation shops if you can. Or, you can work in a traditional closed-allocation company and hope you get (and continue to get) handed good projects. That means you work for (effectively, if not actually) a bank or a hedge fund, and that’s fine, but you should expect to be compensated accordingly for the reduction in autonomy. If you work for a closed-allocation ad exchange, you’re a hedge-fund trader and you deserve to be paid like one.

If you’re a technology executive, you need to seriously consider open allocation. You owe it to your employees to treat them like adults, and you’ll be pleasantly surprised to find that that’s what they become. You also owe it to your managers to free them from the administrative shit-work (headcount fights, PIPs and terminations) that closed allocation generates. Finally, you owe it to yourself; treat yourself to a company whose culture is actually worth caring about.