The world sucks at finding the right work for engineers.

This is directly in response to Matt Aimonetti’s “Engineers Suck at Finding the Right Jobs“, because I disagree that the problem resides solely with engineers. Rather, I think the problem is bilateral. An equally strong argument could be made that there’s an immense wealth of engineering talent (or, at least, potential) out there, but that our contemporary business leadership lacks the vision, creativity, and intelligence to do anything with it.

Don’t get me wrong: I basically agree with what he is saying. Most software engineers do a poor job at career management. A major part of the problem is that the old-style implicit contract between employers and employees has fallen to pieces, and people who try to follow the old rules will shortchange themselves and fail in their careers. In the old world, the best thing for a young person to do was to take a job– any job– at a reputable company and just be seen around the place, and eventually graduate into higher quality of work. Three to five years of dues-paying grunt work (that had little intrinsic career-building value, but fulfilled a certain social expectation) was the norm, but this cost only had to be paid once. The modern world is utterly different. Doing grunt work does nothing for your career. There are people who get great projects and advance quickly, and others who get bad projects and never get out of the mud. Our parents lived in a world where “90 percent is showing up”, whereas we live in one where frequent job changes are not only essential to a good career, but often involuntary.

Software engineering is full of “unknown unknowns”, which means that most of us have a very meager understanding of what we don’t know, and what our shortcomings are. We often don’t know what we’re missing. It’s also an arena in which the discrepancy between the excellent and the mediocre isn’t a 20 to 40 percent increase, but a factor of 2 to 100. Yet to become excellent, an engineer needs excellent work, and there isn’t much of that to go around, because most managers and executives have no vision. In fact, there’s so little excellent work in software that what little there is tends to be allocated as a political favor, not given to those with the most talent. The most important skill for a software engineer to learn, in the real world, is how to get allocated to the best projects. What I would say distinguishes the most successful engineers is that they develop, at an early age, the social skills to say “no” to career-negative grunt work without getting themselves fired in the process. That’s how they take the slack out of their careers and advance quickly.

That’s not the same as picking “the right jobs”, because engineers don’t actually apply to specific work sets when they seek employment, but to companies and managers. Bait-and-switch hiring practices are fiendishly common, and many companies are all-too-willing to allocate undesirable and even pointless work to people in the “captivity interval”, which tends to span from the 3rd to the 18th month of employment, at which point leaving will involve a damaging short job tenure on the resume. (At less than 3 months, the person has the option of omitting that job, medium-sized gaps being less damaging than short-term jobs, which suggest poor performance.) I actually don’t think there’s any evidence to indicate that software engineers do poorly at selecting companies. Where I think they are abysmal is at the skill of placing themselves on high-quality work once they get into these companies.

All of this said, this matter raises an interesting question: why is there so much low-quality work in software? I know this business well enough to know that there aren’t strong business reasons for it to be that way. High-quality work is, although more variable, much more profitable in general. Companies are shortchanging themselves as much as their engineers by having a low-quality workload. So why is there so little good work to go around?

I’ve come to the conclusion that most companies’ workloads can be divided into four quadrants based on two variables. The first is whether the work is interesting or unpleasant. Obviously, “interestingness” is subjective, so I tend to assume that work should be called interesting if there is someone out there who would happily do it for no more (and possibly less) than a typical market salary. Some people don’t have the patience for robotics, but others love that kind of work, so I classify it in the “interesting” category, because I’m fairly confident that I could find someone who would love to do it. For many people, it’s “want-to” work. On the other hand, the general consensus is that there’s a lot of work that very few people would do, unless paid handsomely for it. That’s the unpleasant, “have-to” work.

The second variable is whether the work is essential or discretionary. Essential work involves a critical (and often existential) issue for the company. If it’s not done, and not done well, the company stands to lose a lot of money: millions to billions of dollars. Discretionary work, on the other hand, isn’t in the company’s critical path. It tends to be exploratory work, or support work that the firm could do without. For example, unproven research projects are discretionary, although they might become essential later on.

From these two variables, work can be divided into four quadrants:

Interesting and Essential (1st Quadrant): an example would be Search at Google. This work is highly coveted. It’s both relevant and rewarding, so it benefits an employee’s internal and external career goals. Sadly, there’s not a lot of this in most companies, and closed-allocation companies make it ridiculously hard to get it.

Unpleasant and Essential (2nd Quadrant): grungy tasks like maintenance of important legacy code. This is true “have-to” work: it’s not pleasant, but the company relies on it getting done. Boring or painful work generally doesn’t benefit an employee’s external career, so well-run companies compensate by putting bonuses and internal career benefits (visibility, credibility, promotions) on it: a market solution. These are “hero projects”.

Interesting and Discretionary (3rd Quadrant): often, this takes the form of self-directed research projects and is the domain of “20% time” and “hack days”. This tends to be useful for companies in the long term, but it’s not of immediate existential importance. Unless the project were so successful as to become essential, few people would get promoted based on their contributions in this quadrant. That said, a lot of this work has external career benefits, because it looks good to have done interesting stuff in the past, and engineers learn a lot by doing it.

Unpleasant and Discretionary (4th Quadrant): this work doesn’t look good in a promotion packet, and it’s unpleasant to perform. This is the slop work that most software engineers get because, in traditional managed companies, they don’t have the right to say “no” to their managers. The business value of this work is minimal and the total value (factoring in morale costs and legacy) is negative. 4th-Quadrant work is toxic sludge that should be avoided.
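
For concreteness, here’s a minimal sketch (in Python) of the taxonomy above. The two axes and the quadrant labels come from this post; the WorkItem structure and the example items are illustrative assumptions, not anything formal.

    # A minimal sketch of the four-quadrant taxonomy described above.
    # The axes and quadrant names follow the post; the example items are
    # illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class WorkItem:
        name: str
        interesting: bool   # would someone do it happily at a market salary?
        essential: bool     # is it on the company's critical path?

    def quadrant(item: WorkItem) -> str:
        if item.interesting and item.essential:
            return "1st Quadrant: interesting and essential (coveted)"
        if item.essential:
            return "2nd Quadrant: unpleasant but essential (hero projects)"
        if item.interesting:
            return "3rd Quadrant: interesting but discretionary (20% time)"
        return "4th Quadrant: unpleasant and discretionary (toxic sludge)"

    for item in [WorkItem("search ranking", True, True),
                 WorkItem("legacy billing maintenance", False, True),
                 WorkItem("hack-day prototype", True, False),
                 WorkItem("report nobody reads", False, False)]:
        print(f"{item.name}: {quadrant(item)}")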

One of the reasons that I think open allocation is the only real option is that it eliminates the 4th-Quadrant work that otherwise dominates a corporate atmosphere. Under open allocation, engineers vote with their feet and tend to avoid the 4th-Quadrant death marches.

The downside of open allocation, from a managerial perspective, is that the non-coercive nature of such a regime means that companies have to incent people to work on 2nd-Quadrant work, often with promotions and large (six- or seven-figure) bonuses. It seems expensive. Closed allocation enables managers to get the “have-to” work done cheaply, but there’s a problem with that. Under closed allocation, people who are put on these unpleasant projects often get no real career compensation, because management doesn’t have to give them any. So the workers put on such projects feel put-upon and do a bad job of it. If the work is truly 2nd-Quadrant (i.e. essential), the company cannot afford to have it done poorly. It’s better to pay for it and get high quality than to coerce people into it and get garbage.

The other problem with closed allocation is that it eliminates the market mechanic (workers voting with their feet) that allows this quadrant structure to become visible at all, which means that management in closed-allocation companies won’t even know when it has a 4th-Quadrant project. The major reason why closed-allocation companies load up on the toxic 4th-Quadrant work is because they have no idea that it’s even there, nor how to get rid of it.

There’s no corporate benefit to 4th-Quadrant work. So what incentive is there to generate it? Middle management is to blame. Managers don’t care whether the work is essential or discretionary, because they just want the experience of “leading teams”. They’re willing to work on something less essential, where there’s less competition to lead the project (and also a higher chance of keeping one’s managerial role) because their careers benefit either way. They can still say they “led a team of 20 people”, regardless of what kind of work they actually oversaw. Middle managers tend to keep for themselves what little interesting stuff these discretionary projects offer, placing themselves in the 3rd Quadrant, while leaving the 4th-Quadrant work to their reports.

This is the essence of what’s wrong with corporate America. Closed allocation generates pointless work that (a) no one wants to do, and (b) provides no real business value to the company. It’s a bilateral lose-lose for the company and workers, existing only because it suits the needs of middlemen.

It’s common wisdom in software that 90 to 95 percent of software engineers are depressingly mediocre. I don’t know what the percentage is, but I find that to be fairly accurate, at least in concept. The bulk of software engineers are bad at their jobs. I disagree that this mediocrity is intrinsic. I think it’s a product of bad work environments, and the compounding effect of bad projects and discouragement over time. The reason there are so many incompetent software engineers out there is that the work they get is horrible. It’s not only that they lack the career-management skills to get better work; it’s also that good work isn’t available to them when they start out, and it becomes even less available over time as their skills decline and their motivation and energy levels head toward the crapper.

I don’t see any intrinsic reason why the world can’t have ten, or even a hundred, times as many competent software engineers as it has now, but the dumbed-down corporate environment that most engineers face will block that from coming to fruition.

There’s an incredible amount of potential engineering talent out there, and for the first time in human history, we have the technology to turn it into gold. Given this, why is so much of it being squandered?

The end of management

I come with good news. If I’m correct about the future of the world economy, the Era of Management is beginning to close, and will wind down over the next few decades. I’ve spent a lot of time thinking about these issues, and I’ve come to a few conclusions:

  1. The quality gap between the products of managed work and unmanaged work has reversed, with unmanaged work being superior by an increasing– at this point, impossible to ignore– amount. For one notable example, open-source software is now superior to gold-plated commercial tools. Creativity and motivation matter more than uniformity and control. This was not always the case, but it has become true and this trend is accelerating.
  2. This change is intrinsic and permanent. It is unnatural for people to manage or be managed, and the end of the managerial era is a return to a more natural motivational framework.
  3. Approaches to business that once seemed radical, such as Valve’s open allocation policy, will soon enough be established as the only reasonable option. Starting with top technical companies, and with the trend later moving into a wide variety of industries, firms will discard traditional management in favor of intrinsic motivation as a means of getting the best quality of work from their people.

What’s going on? I believe that there’s a simple explanation for all of this.

“We will kill them with math”

Consider payoff curves for two model tasks, each as a function of the performance of the person completing it.

Performance | A Payoff | B Payoff |
-----------------------------------
5 (Superb)  |      150 |      500 |
4 (Great)   |      148 |      300 |
3 (Good)    |      145 |      120 |
2 (Fair)    |      135 |       40 |
1 (Poor)    |      100 |       10 |
0 (Awful)   |       50 |        0 |
-----------------------------------

What might this model? Task A represents easy work for which an average (“fair”) player can achieve 90 percent of the highest potential output: 135 points out of a possible 150. An employee achieving only 50 percent of that maximum is clearly failing, and will probably be replaced, and there won’t be much variation among the people who make the cut. Task B represents difficult work for which there’s much more upside, but for which the probability of success is low. Average performers contribute very little, while the difference between “good” and “superb” is large. Task B’s curve might be more applicable to high-yield R&D work, in which a person would be considered highly successful if she had success in even 30 percent of the projects she set out to do, but it increasingly applies to disciplines like computer programming, where insight, taste, and vision are worth far more than commodity code. What matters, mathematically, is that Task A’s input-output relationship flattens as performance improves, while Task B’s accelerates. Task A’s curve is concave and Task B’s is convex. For Task A, the difference in return between an excellent and an average performer is minimal, but for Task B, it’s immense.
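
To make the concavity and convexity concrete, here’s a quick sketch that recomputes the marginal payoffs from the table above. The numbers are the table’s; the code is just arithmetic.

    # Marginal payoff per step of performance, using the table above.
    # Task A's increments shrink (concave); Task B's grow (convex).

    payoff_a = {0: 50, 1: 100, 2: 135, 3: 145, 4: 148, 5: 150}
    payoff_b = {0: 0, 1: 10, 2: 40, 3: 120, 4: 300, 5: 500}

    for level in range(1, 6):
        gain_a = payoff_a[level] - payoff_a[level - 1]
        gain_b = payoff_b[level] - payoff_b[level - 1]
        print(f"step {level - 1} -> {level}: A gains {gain_a:3d}, B gains {gain_b:3d}")

    # Excellent (5) versus average (2):
    print("A: excellent - average =", payoff_a[5] - payoff_a[2])   # 15
    print("B: excellent - average =", payoff_b[5] - payoff_b[2])   # 460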

Does excellence matter? At most jobs, the answer has traditionally been “no”. At least, it has mattered far less than uniformity, reliability, and cost reduction. The concave behavior of Task A is more appropriate to most jobs than a convex one, and that’s largely by design. The problem with creative excellence is that it’s intermittent. Creativity can’t be managed into existence, while reliable mediocrity can be. As much as we might want managers to “nurture creativity”, the fact is that they work for companies, not subordinate employees, and their job is largely to limit risk. If we expect managers to do anything different, we’re being unreasonable. For Task A, performance-middling behaviors like micromanagement are highly appropriate, because bringing the slackers into line provides much more benefit than is lost by irritating high performers, and most industrial work that humans have performed, over our history, has been more like A than B. Getting the work done has mattered more than doing it well.

One of the interesting differences between concave and convex work is the relationship between expectancy (average performance) and variance. For traditional concave work, there’s a lot of variation at the low-performing end of the curve, but very little among high performers. Treating variance as uniformly bad is therefore not detrimental, because the upside of variation is so minimal. Managerial activities that reduce variance are generally beneficial under such a regime. Even if high performers are constrained, this is offset by the improved productivity of the slackers. For convex work, the opposite is true. In a convex world, variation and expectancy are positively correlated. It turns out to be much easier, for a manager, to control variance than it is to improve expectancy. For this reason, almost everything in the discipline of “management” that has formed over the past hundred years has been focused on risk reduction. In a concave world, that worked. Reducing variance, while it might regress individual performances into mediocrity, would nonetheless bring the aggregate team performance up to a level where no one could reliably do better with comparable inputs. For most of industrial humanity’s history, that was enough.
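
One way to see that claimed link between variance and expectancy is a small Monte Carlo sketch against the two payoff curves from the table. The normal distribution, the interpolation between table rows, and the particular parameters below are my own illustrative assumptions; the point is only the direction of the effect.

    # With convex payoffs, a higher-variance performer has a higher expected
    # payoff at the same average skill; with concave payoffs, the opposite.
    # Distributional assumptions are illustrative only.

    import random

    LEVELS = {"A": [50, 100, 135, 145, 148, 150],
              "B": [0, 10, 40, 120, 300, 500]}

    def payoff(x, curve):
        x = min(5.0, max(0.0, x))       # clamp to the table's range
        levels = LEVELS[curve]
        lo = int(x)
        if lo == 5:
            return levels[5]
        return levels[lo] + (x - lo) * (levels[lo + 1] - levels[lo])  # interpolate

    def expected_payoff(mean, stdev, curve, trials=50_000):
        rng = random.Random(0)
        return sum(payoff(rng.gauss(mean, stdev), curve) for _ in range(trials)) / trials

    for curve in ("A", "B"):
        steady = expected_payoff(mean=3.0, stdev=0.3, curve=curve)
        volatile = expected_payoff(mean=3.0, stdev=1.5, curve=curve)
        print(f"Task {curve}: low-variance ~{steady:6.1f}, high-variance ~{volatile:6.1f}")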

Variance reduction falls flat in the convex world. Managerial pressures that bring individual performance to the middle don’t guarantee that a company has an “average” number of high-performing people, but make it likely that the firm has zero such people, and the result of such mediocrity is an end to innovation. In the short term, this damage is invisible, but in the long term, it renders the company unable to compete. Its prominence and market share will be snapped up in small pieces by smaller, more agile, companies until nothing is left for it but dominance over low-margin “commodity” work. Contrary to the typical depiction of large corporate behemoths being sunk wholesale by a startup “<X> killer”, what actually tends to happen is a gradual erosion of that company’s dominance as new entrants compete against it for something more important, in the long run, than market share: talent. Talent is naturally attracted to convex, risk-friendly work environments.

For a digression into applied mathematics– specifically, optimization– I would like to point out that since maximizing a concave function (such as bulk productivity) is equivalent to minimizing a convex one, we can think of management in the concave world as somewhat akin to a convex optimization problem. This is more of a metaphor than a true isomorphism, with one being abstract mathematics and the other rooted in human psychology, but I think the metaphor’s quite useful. I’ll gloss over a lot of detail and just say this: convex optimizations (again, akin to management of concave work) are easier. A convex minimization problem is like finding the bottom of a bowl (follow gravity, or the gradient). However, if the problem is non-convex, the surface might be more convoluted, with local valleys, and one might end up in a suboptimal place (local minimum) from which no incremental improvement is possible. The first category of problem can be solved using an algorithm called gradient descent: start somewhere, and iterate by stepping in the direction that, locally, appears best. The second category of problem can’t be solved by simple gradient descent. One can fall into a local optimum from which some sort of non-local insight (I’ll return to this later) is required if one wants to improve.
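
For readers who want the metaphor spelled out, here’s a minimal gradient-descent sketch. The two example functions are arbitrary stand-ins for a convex “bowl” and a non-convex surface with several valleys; nothing about them comes from the post itself.

    # On a convex "bowl", gradient descent reaches the same bottom from any
    # starting point; on a non-convex surface it settles into whichever local
    # valley is nearby. The example functions are arbitrary illustrations.

    import math

    def descend(grad, x, lr=0.01, steps=5_000):
        for _ in range(steps):
            x -= lr * grad(x)          # step in the locally best direction
        return x

    convex_grad = lambda x: 2 * (x - 2)                        # f(x) = (x - 2)^2
    bumpy_grad = lambda x: 2 * (x - 2) + 18 * math.cos(3 * x)  # f(x) = (x - 2)^2 + 6*sin(3x)

    print("convex, start at -5:", round(descend(convex_grad, -5.0), 2))
    print("convex, start at +9:", round(descend(convex_grad, 9.0), 2))
    print("bumpy,  start at -5:", round(descend(bumpy_grad, -5.0), 2))
    print("bumpy,  start at +9:", round(descend(bumpy_grad, 9.0), 2))
    # The convex runs agree; the bumpy runs end in different valleys, and
    # neither is guaranteed to be the global minimum.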

Concave and convex work are, in kind, also sociologically different. When the work is concave, the optimization problem is (loosely speaking) convex, and the one stable equilibrium (or local optimum) is, roughly speaking, “fairness”. On average, you’ll get more if you focus your efforts on improving low performers (who will improve more quickly) than by making the best even better. A policy that often works is to standardize performance: figure out how many widgets people can produce, and develop a strategy for bringing as many people as possible to that level (and firing the few who can’t make it). Slackers are intimidated into acceptable mediocrity, incorrigible incompetents are fired, and the bulk of workers get exactly the amount of support they need to reliably hit their widget quota. It’s a “one-size-fits-all” approach that, while imperfect, has worked well for a wide variety of industrial work.

Management of convex work is, as it were, a distinctly non-convex optimization problem. It’s sociologically much more complicated, because while the concave world has a “fairness” equilibrium, convex work has multiple equilibria that are usually “unfair”. You end up with winners and losers, and the winners need to be paired with the best projects, roles and mentors, although one might argue that the winners “don’t need them” from a fairness perspective. For convex work, you don’t manage to the middle. The stars who get more support and better opportunities will improve faster, and the schlubs’ mediocrity (whether a result of inability or poor political position) will persist. The best strategy, for a managed company, would be to figure out who has “star” potential and invest heavily in them from the start, but the measurements involved (especially because people have such strong incentives to game them) are effectively impossible for most people to make, both for intrinsic and political reasons.

For convex work, excellence and creativity matter, and they can’t be forced into existence by giving orders. Additionally, the value produced in convex work is almost impossible to measure on a piece-rate basis. Achievements in concave work tend to be separable: one can determine exactly how much was accomplished in each hour, day, and week, so it’s easy to see when people are slacking off. Work that is separable by time is usually also separable by person: visible low performers can be removed, because the group’s performance is strictly additive of individual productivity. For convex work, this is nearly impossible. A person can seem nonproductive while generating ideas that lead to the next breakthrough– the archetypical “overnight success” that takes years– and a colleague who might not be publishing notable papers may still contribute to the group in an important, but invisible, way.

If your tools are traditional management tactics, then convex work is intractable, and management is often counterproductive. I think the best metaphor that I can come up with for managers and executives is “trading boredom”. There are many traders out there who could turn a profit if they stuck to what they knew well, but get bored with “grinding” and start to depart from their domains of competence, adding noise and room for mistakes, and burning up their winnings in the process. Poker players have the same problem: the game gets so boring (at 2000-3000 hours per year) that they start taking unwise risks. The 40-hour work week is so ingrained in modern people that there’s often a powerful sense of guilt that people feel about being useless when there is no work for them to do (even if they achieve enough within 10 of those hours to “earn their keep”), and this often leads to counterproductive effort. This, I believe, explains 90 percent of managerial activity: messing with something that already works well, because watching the gauges gets boring. Whenever an executive comes up with a hare-brained HR policy that the company doesn’t need, trading boredom, and the need to still feel useful when there is no appropriate work to do, is the cause.

At concave work, this managerial “trading boredom” is a hassle that veteran workers, who have been doing the job for decades, learn to ignore. They already know how to do their jobs better than their bosses do, so they show enough compliance to keep management off their backs, but change little about what they’re actually doing unless there’s a legitimate reason for the change. They keep on working, and the function of the team or factory remains intact. For convex work, on the other hand, managerial meddling is utterly destabilizing. The pointless meetings and disruptions inflicted by overmanagement take an enormous toll. In a convex world where small differences in performance lead to large discrepancies in returns, spending 2 hours each week in pointless meetings isn’t going to reduce output by a mere 5 percent, as one might expect from a linear model (2 hours lost out of 40). It’s probably closer to 25 percent.

The corporate hierarchy: an analytical perspective

The optimization metaphor above, I believe, explains certain functional reasons for the typical three-tiered corporate hierarchy, with executives, managers, and workers. The workers are just “inputs”– machines made of meat, with varying degrees of reliability and quality, and for which there exist well-studied psychological strategies for reducing variance in performance in order to impose as much uniformity as possible. A manager‘s job is to focus on a small region over which the optimization problem is convex (which implies that the work is concave) and perform the above-mentioned gradient descent, or to iterate step-wise toward a local optimum. The strategy is given to the manager from above, and his job is to drive execution error as close to zero as possible. As variance will, all else being equal, contribute to execution error, variance must be limited as well. The job of an executive is to have the non-local insight and knowledge required to find a global optimum rather than being stuck at a local one. Executives ask non-local “vision” questions like, “Should we make toothpaste or video games?” Managers figure out what it will take to get a group of people to produce 2 percent more toothpaste.

This hierarchy is becoming obsolete. Machines are now outperforming us at the mechanical work that defined the bottom of the traditional, three-tier hierarchy. They are far more reliable and uniform than we could ever hope to be, and they excel at menial, concave work. We can’t hope to compete with them on this; we’ll need to let them have this one. So the bottom of the three tiers is being replaced outright. In addition, specialization has created a world where there is no place for mediocrity, and, therefore, in which the individual “worker” is now responsible for finding a direction of development (a non-local, executive objective) and planning her own path to excellence (a managerial task). The most effective people have figured this out by now and become “self-executive”, which means that they take responsibility for their own advancement, and prioritize it over stated job objectives. As far as they’re concerned, their real job is to become excellent at something, and they will focus on their career rather than their immediate job responsibilities, which they perform only because it helps their career to do so. To self-executive people, their employers are just along for the ride– funding them, and capturing some of the byproducts they generate along the way, while they work mostly on their real job: becoming really good at something, and advancing their career.

Self-executive employees are a nightmare for traditional managers. They favor their career growth over their at-moment responsibilities, have no respect for the transient managerial authority that might be used to compel them to depart from their interests, yet tend at the same time to be visibly highly competent, which means that firing them is a political mess. They’re the easiest to fire from an HR “cover your ass” perspective (they won’t sue if you fire them, because they’ll quickly get an external promotion) but the damage to morale in losing them is substantial. In concave work, a team could lose its most productive member with minimal disruption; at convex work, such a loss is crippling. Managers who want to peaceably remove such people have to make it look like something the group wanted, and so they create divisions between the self-executive and colleagues– perhaps by setting unrealistic deadlines and then citing the self-executive person’s extracurricular education as a cause for slippage– but these campaigns are disastrous for group performance in the long run.

From a corporate perspective, a self-executive employee is the opposite of a “team player” and possibly even a sociopath, but I prefer to call the self-executive attitude adaptive. What point is there in being a “team player” when that “team” will be a different set of people in 36 months, and where one can be discarded from the team at any time, often unilaterally by a non-productive player who’s not even a real part of it? None that I see. The “team player” ethic is for chumps who haven’t figured it out yet. Additionally, because the working world is increasingly convex, self-executive people are increasingly good (if chaotic good, to use a role-playing analogy) for society. They sometimes annoy their bosses, but they become extremely competent in the process and, in the long term, they will advance the world far more than anyone can do by following orders. Self-executives tend to “steal an education” from their bosses and companies, but twenty years later, they’re building superior companies.

Self-executive employees want to take risks. They want to tackle hard problems, so they get better at what they do. While managers want to reduce variance, almost obsessively, self-executives want to increase it. Also relevant is the fact that the intrinsic “A”, “B”, and “C” players of managerial fiction don’t exist. Stack-ranking– the annual “low performer” witch hunt that companies engage in to scare their middling crowd– doesn’t actually do much good in personnel selection. (It excels at intimidation, which is performance-middling and thus reduces variance, but the desirability of this effect is rapidly declining.) What does exist, and seems to be intrinsic, is that there are low- and high-variance people. Low-variance workers can kick out an acceptable performance under almost any circumstances– long hours, poor health, boring work, difficult or even aggressive clients and managers. They’re reliable, not especially creative, and tend to do well at the war of attrition known as the “corporate ladder”. They make lousy executives, but are most likely to be selected for those sorts of roles. High-variance people, on the other hand, are much more creative, and tend to be self-executive, but are much less reliable in the context of managed work. Their level of output is very high if measured over a long enough timeframe, but impossible to read at the level of a single day, or even a quarter. This distinction of variance, much more than the A- and B-player junk science, seems to be intrinsic or, at the least, very slow to change for specific individuals. Unfortunately, traditional managers and high-variance individuals are natural enemies. Low-variance people tend to be selected for management positions, are easiest to manage, and (most importantly) are less likely to make their bosses insecure.

What is changing

Why is work moving from concavity to convexity in output? There are a few answers, all tightly connected. The first of these is that concave work tends to have a defined maximum value: there’s one right way to perform the task. If we can define a target, we can structure the task as a computation, and it can be automated. Machines win, no contest, at reliability as well as cost-reduction. They’re ideal workers. They never complain, work 168-hour weeks, and don’t have hidden career objectives that they place at a higher priority than what they’re asked to do. As we get better at programming machines to perform the concave work, it leaves us with the convex stuff.

Second, the technological economy enhances potential individual productivity. The best programmers deliver tens of millions of dollars per year in business value, while the worst should probably not be employed at all. The capacity to have multiplier effects across a company, rather than the additive impact of a mere workhorse, is no longer the domain of managers only. The best software architects and engineers are also multipliers, because their contributions become infrastructural in nature. I don’t think that this potential for multiplicative impact is limited to software, either. As software becomes more capable of eliminating menial tasks from peoples’ days, there’s more time available for the high-yield, high-risk endeavors at which machines do poorly. What this enables is the potential for rapid growth.

When studying finance, one often learns that high rates of growth (8% per year) in a portfolio are “unsustainable”, because anything that grows so fast will eventually “outgrow” the entire world economy, which grows at only 3 to 5 percent, as if that latter rate were an immutable maximum. This might also apply to the 10-15% per year salary growth that young people expect in their careers– also unsustainable. Wall Street (in terms of compensation) has been called “a bubble” for this reason: even average bankers experience this kind of exponential compensation growth well into middle age, and it seems that this is unreasonable, because even small-scale economies or subsectors “can’t” grow that fast, so how can a person? Can someone actually increase the business value of his knowledge and capability by 15% per year, for 40 years? It seems that there “must be” some limiting factor. I no longer believe this to be necessarily true at our current scale. (There are physical, information-theoretic upper limits to economic prosperity, but we’re so far from those that we can ignore them in the context of our natural lifespans.) Certainly, rapid growth becomes harder to maintain at scale; that is empirically true. But who says that world economic growth can’t some day reach 10% (or 50%) per year and continue at such a rate until we reach a physical maximum (far past a “post-scarcity” level at which we stop caring)? Before 1750, growth at a rate higher than 0.5% per year would have been considered impossible: 0.1 to 0.2 percent, in agrarian times, was typical. If we view our entire history in stages– evolutionary, early human, agricultural, industrial– we observe that growth rates improve at each stage. It’s faster-than-exponential. I don’t believe in a single point of nearly-infinite growth– a “Singularity”– but I think that human development is more likely than not to accelerate for the foreseeable future. In the technological era, rapid improvements are increasingly possible. Whether this will result in rapid (30% per year) macroscopic economic growth I am not qualified to say, and I don’t think anyone has the long-term answer on that one, but we are certainly in a time when local improvements on that order are commonplace. Many startups consider user growth of 10% per month to be slow.
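
The compounding arithmetic behind those rates is easy to check. A small sketch, with rates and horizons chosen by me purely for illustration:

    # Compound growth at a few annual rates, plus a monthly-to-annual
    # conversion. Rates and horizons are illustrative choices.

    rates = {"agrarian (0.2%/yr)": 0.002,
             "industrial (3%/yr)": 0.03,
             "career expectation (15%/yr)": 0.15}

    for label, r in rates.items():
        for years in (10, 40):
            print(f"{label}, {years} years: x{(1 + r) ** years:,.1f}")

    # A startup growing 10% per month:
    print(f"10%/month compounds to {(1.10 ** 12 - 1) * 100:.0f}% per year")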

Rapid growth and process improvements require investment into convex work, which often lacks a short-term payoff but often provides an immense upside. It’s this kind of thinking that companies need if they wish to grow at technological rather than industrial rates, and traditional variance-reduction management is at odds with that. That said, traditional management is quite a strong force in corporate America. Most companies cannot even imagine how they would run their affairs without it. For sure, the managerial and executive elites won’t go gently into that good night. The private-sector career politicians who’ve spent decades mastering this inefficient, archaic, and often stupid system are not going to give up the position they’ve worked so hard to acquire. The macroscopic economic, social, and cultural benefits to a less-managed work world are extreme, but also irrelevant to the current masters, who have a personal desire to keep their dominance over other humans. The people in charge of the current system would rather reign in hell than serve in heaven. So what will give?

There won’t be an “extinction event” for managerial dinosaurs and the numerous corporations that have adopted their mentality, so much as an inability to compete. First, consider the superior quality of open-source software over commercial alternatives across an expanding range of categories. That’s indicative. Open-source projects grow organically because people value them and willingly contribute, with no managers (in the industrial-era sense) needed. Commercial products die unless their owners continue to throw money at them (and sometimes even then). Open-source contributors are intrinsically motivated to be invested in the quality of their software. They’re often users of the product, and they can also improve their careers by gaining visibility in the wider software world. They have real, technological-era, self-executive motivations for wanting to do good work. By contrast, most commercial software products are completed at a standard of “just good enough” to appease a boss and remain in good standing. It’s software written for managers, but from a product-quality standpoint, bosses themselves rarely matter. Users do. The quality gap between non-managed work and managed work is becoming so severe that the value of managed work is (albeit slowly) declining, out-competed by superior alternatives. This is bringing us to a state where “radical” cultures such as Valve’s purportedly manager-free open allocation policy become the only acceptable option. I would be shocked, in 30 years, if open allocation weren’t the norm at leading technology companies.

The truth is that managed work and variance reduction, which made the Industrial Revolution possible, are capable of producing growth at industrial (1 to 5 percent per year) rates, but not at technological rates (and a venture-funded startup must grow at technological rates or it will die). Compared to the baseline agrarian growth rate (0.05 to 0.3% per year) of antiquity, the industrial rate was rapid. Traditional management still works just fine, if your job is to turn $1.00 into $1.03 in twelve months. If you’re already rich and looking to generate some income from your massive supply of capital, this might continue to work for you indefinitely. If you’re poor, or looking to compete in the most exciting industries, and you need to unlock the energies that turn $1.00 into $2.00, you need something different.

Does this mean that there will no longer be a role for managers? It depends on how “manager” is defined. Leadership, communication, and mentorship skills will always be in high demand. In fact, the increasing complexity of technology will put education at a premium, and the few people who can lead groups of self-executive workers are becoming immensely valuable. Although the most talented workers will evolve into self-executive free agents, they will need some way of learning what efforts are worth their time, and they’ll be learning this from other people. Some aspects of “management” will always be important, but to the extent that management lives on, it will have to be about genuine leadership rather than authority.

Fossil fools

What has a one-way ticket to the tarpit (and almost no one will miss it) is the contemporary institution of corporate management: the Bill Lumbergh, who uses authority by executive endowment to compensate for his complete lack of leadership skills.

Leaders are chosen by a group that decides to be led, whereas corporate managers are puppet governors selected by external forces (or “from above”) as a means of exerting control. They don’t have to have any leadership skill, because the people being led have no choice. They’re hand-picked by their bosses: higher-level managers and executives. Leadership often requires handling trade-offs of peoples’ interests in a fair way, but that’s impossible for a corporate manager to do. Executives will never select a manager who would support the workers’ interests at an equal level to their own. Managers play a variety of roles, but their main one is to be a buffer between two groups of people (executives and workers) who would otherwise be at opposition because, even if for no other reason, they get dramatically different proportions of the reward. Managers legitimize executives by creating the impression that the separation between the company’s real members and its workers is a continuous spectrum (and thereby support the company’s efforts to mislead people regarding their true chances of upward mobility) rather than a discrete chasm, but they also form, because of their own increasingly divergent interests, a weak link that is increasingly problematic.

The corporate structure is effectively feudal. Just as medieval kings never cared how the dukes and earls treated their peasants, as long as tributes were paid, managers generally have unilateral power over the managed (as long as they don’t get the company sued). Managers are trusted to execute the corporate interest. It seems like this should create a weak link, giving managers the power to force workers to suit the manager’s career goals rather than the corporate objective. Perhaps surprisingly, though, in a concave world this doesn’t cause a major problem for the company. Managerial and corporate interests, at concave work, are aligned for reasons of sociological coincidence.

Managers have high social status and status is positively autocorrelated in time (that is, high status tends to reinforce itself) so a manager will “drift” into higher position as the group evolves, so long as nothing embarrasses him. Workers have to prove themselves in order to stand above the masses, but managers can coast and acquire seniority. In other words, a manager’s career is optimized not when he maximizes group productivity (which may be impossible to measure) but when he minimizes risk of embarrassment. A subordinate who breaks rules, no matter how trivial, embarrasses the manager– even if he’s highly productive. It’s better, from a manager’s career perspective, to have ten thoroughly average reports than nine good ones and one visible problem employee. Great employees make themselves look good, while bad ones are taken to reflect on the manager who failed to keep them in line. The consequence of this is that managers are encouraged toward risk-reduction. In a concave world, this is exactly what the company needs. So what happens in a convex world?

The convex world is different. If a company “gets” convexity (which is rare) it will begin to make allowances for individual contributors to allocate time to high-variance, high-reward activities which are often self-directed. This gives workers the opportunity to achieve visible high performance, and it’s good for the (expected) corporate profit, but managers lose out, because the worker who hits a home run will get most of the credit, not the manager. They find their subordinates increasingly interested in “side projects” and extra-hierarchical pursuits that “distract” them from their assigned work. There’s a conflict of interest that emerges between what the worker perceives as managerial mediocrity and the quest for the larger-scale excellence that can exist in convex pursuits. Because self-executive workers think non-locally (extra-hierarchical collaboration, self-directed career advancement) the appearance is that they’re jumping rank.

In the concave world, managers were the tactical muscle of the company. They drove the workers toward the local optimum in the neighborhood that the globally-oriented executives chose. In the convex world, managers are pesky middlemen. If they operate according to self-interest (and it’s unreasonable to expect otherwise of anyone) then their best bet is to use their authority to coerce their reports to prioritize the manager’s own career goals, which have now diverged from the larger objective. In other words, they become extortionists. That’s not a business that will live for much longer.

What seems clear is that middle management will decline in power and importance over the next 50 years. Increasingly convex work landscapes will decrease the use for it, and people will have less desire to fill such positions (which, in concave work, are coveted). What’s less clear is what will replace it. If this corporate middle class disappears within the current framework, what’s left is a two-tier system with workers and executives. That’s a problem. Two-class societies are extremely unstable, so I don’t think that arrangement will thrive. What’s more likely, in my (perhaps overly optimistic) opinion, is that the functions of workers, managers, and executives will all blend together as individuals become increasingly self-executive. In many ways, this is a desirable outcome. However, it dramatically changes the styles of business that people will be able to form, making companies fairer and more creative, but also more chaotic and probably smaller. If the result of this is a macroscopic increase in creativity, progress, and individual freedom over one’s work, then a truly technological era might begin.

What Programmers Want

Most people who have been assigned the unfortunate task of managing programmers have no idea how to motivate them. They believe that the small perks (such as foosball tables) and bonuses that work in more relaxed settings will compensate for more severe hindrances like distracting work environments, low autonomy, poor tools, unreasonable deadlines, and pointless projects. They’re wrong. However, this is one of the most important things to get right, for two reasons. The first is that programmer output is multiplicative of a number of factors– fit with tools and project, skill and experience, talent, group cohesion, and motivation. Each of these can have a major effect (plus or minus a factor of 2 at least) on impact, and engineer motivation is one that a manager can actually influence. The second is that measuring individual performance among software engineers is very hard. I would say that it’s almost impossible and, in practical terms, economically infeasible. Why do I call it infeasible rather than merely difficult? Because the only people who can reliably measure individual performance in software are so good that it’s almost never worth their time to have them doing that kind of work. If the best engineers have time to spend with their juniors, it’s more worthwhile to have them mentoring than measuring: mentoring aligns their interests with their colleagues rather than with a company trying to grade them, and measurement is a task they would resent being assigned anyway.
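
Because those factors multiply rather than add, modest swings on each one compound into an enormous spread. A back-of-the-envelope sketch, using the factor list above and the “plus or minus a factor of 2” estimate as uniform endpoints (a simplification on my part):

    # If each factor independently lands between 0.5x and 2x, five factors
    # span roughly a 1000x range between the worst and best cases.

    factors = ["tools/project fit", "skill and experience", "talent",
               "group cohesion", "motivation"]

    worst = 0.5 ** len(factors)   # every factor at 0.5x
    best = 2.0 ** len(factors)    # every factor at 2x

    print(f"worst case: {worst:.3f}x of baseline impact")          # 0.031x
    print(f"best case:  {best:.0f}x of baseline impact")           # 32x
    print(f"spread: {best / worst:.0f}x between worst and best")   # 1024x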

Seasoned programmers can tell very quickly which ones are smart, capable, and skilled– the all-day, technical interviews characteristic of the most selective companies achieve that– but individual performance on-site is almost impossible to assess. Software is too complex for management to reliably separate bad environmental factors and projects from bad employees, much less willful underperformance from no-fault lack-of-fit. So measurement-based HR policies add noise and piss people off but achieve very little, because the measurements on which they rely are impossible to make with any accuracy. This means that the only effective strategy is to motivate engineers, because attempting to measure and control performance after-the-fact won’t work.

Traditional, intimidation-based management backfires in technology. To see why, consider the difference between 19th-century commodity labor and 21st-century technological work. For commodity labor, there’s a level of output one can expect from a good-faith, hard-working employee of average capability: the standard. Index that to the number 100. There are some who might run at 150-200, but they are often cutting corners or working in unsafe ways, so the best overall performers might produce 125. (If the standard were so low that people could risklessly achieve 150, the company would raise the standard.) The slackers will go all the way down to zero if they can get away with it. In this world, one slacker cancels out four good employees, and intimidation-based management– which annoys the high performers and reduces their effectiveness, but brings the slackers in line, having a performance-middling effect across the board– can often work. Intimidation can pay off, because intimidating the slacker into mediocrity brings more benefit than is lost by irritating the better performers. Technology is different. The best software engineers are not at 125 or even 200, but at 500, 1000, and in some cases, 5000+. Also, their response to a negative environment isn’t mere performance middling. They leave. Engineers don’t object to the firing of genuine problem employees (we end up having to clean up their messes) but typical HR junk science (stack ranking, enforced firing percentages, transfer blocks against no-fault lack-of-fit employees) disgusts us. It’s mean-spirited and it’s not how we like to do things. Intimidation doesn’t work, because we’ll quit. Intrinsic motivation is the only option.
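
The arithmetic behind “one slacker cancels out four good employees”, and why the same policy stops paying off when the top end is 10x rather than 1.25x, can be put into a toy comparison. The index values follow the paragraph above; the exact “middled” outputs, and the assumption that the star simply quits, are mine:

    # Commodity labor: standard = 100, best ~125, slacker ~0; intimidation
    # pulls everyone toward the standard. Tech: the best are at 1000+ and
    # respond to intimidation by leaving. "Middled" values are illustrative.

    commodity_team    = [125, 125, 125, 125, 0]   # four good workers, one slacker
    commodity_middled = [110, 110, 110, 110, 90]  # everyone pulled toward 100

    tech_team    = [1000, 300, 150, 150, 0]       # one star, some middle, one slacker
    tech_middled = [250, 150, 150, 90]            # the star quits; the rest are middled

    print("commodity, hands-off:   ", sum(commodity_team))      # 500
    print("commodity, intimidated: ", sum(commodity_middled))   # 530
    print("tech, hands-off:        ", sum(tech_team))           # 1600
    print("tech, intimidated:      ", sum(tech_middled))        # 640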

Bonuses rarely motivate engineers either, because the bonuses given to entice engineers to put up with undesirable circumstances are often, quite frankly, two or three orders of magnitude too low. We value interesting work more than a few thousand dollars, and there are economic reasons for doing so. First, we understand that bad projects entail a wide variety of risks. Even when our work isn’t intellectually stimulating, it’s still often difficult, and unrewarding but difficult work can lead to burnout. Undesirable projects often have a 20%-per-year burnout rate, counting firings, health-based attrition, project failure leading to loss of status, and plain loss of motivation to continue. A $5,000 bonus doesn’t come close to compensating for a 20% chance of losing one’s job in a year. Additionally, there are the career-related issues associated with taking low-quality work. Engineers who don’t keep current lose ground, and this becomes even more of a problem with age. Software engineers are acutely aware of the need to establish themselves as demonstrably excellent before the age of 40, at which point mediocre engineers (and a great engineer becomes mediocre after too much mediocre experience) start to see prospects fade.
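
The expected-value arithmetic here is simple. A hedged sketch: only the $5,000 bonus and the 20% risk come from the discussion above; the salary, job-search gap, and career-damage figure are placeholder assumptions of mine:

    # Rough expected cost, to the engineer, of a year on a bad project,
    # compared against a $5,000 bonus. Salary, search gap, and career-damage
    # figures are placeholder assumptions; the 20% risk is from the text.

    salary = 120_000        # assumed annual salary
    search_months = 3       # assumed gap before the next job, if pushed out
    career_damage = 10_000  # assumed hit to future earnings from a stale year

    p_loss = 0.20
    expected_income_loss = p_loss * salary * (search_months / 12)

    print(f"expected income loss from the 20% risk: ${expected_income_loss:,.0f}")
    print(f"career damage accrued either way:       ${career_damage:,.0f}")
    print("offered bonus:                           $5,000")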

The truth is that typical HR mechanisms don’t work at all in motivating software engineers. Small bonuses won’t convince them to work differently, and firing middling performers (as opposed to the few who are actively toxic) to instill fear will drive out the best, who will flee the cultural fallout of the firings. There is no way around it: the only thing that will bring peak performance out of programmers is to actually make them happy to go to work. So what do software engineers need?

The approach I’m going to take is based on timeframes. Consider, as an aside, people’s needs for rest and time off. People need breaks at work– say, 10 minutes every two hours. They also need 2 to 4 hours of leisure time each day. They need 2 to 3 days per week off entirely. They need (but, sadly, don’t often get) 4 to 6 weeks of vacation per year. And ideally, they’d have sabbaticals– a year off every 7 years or so to focus on something different from the assigned work. There’s a fractal, self-similar nature to people’s need for rest and refreshment, and these needs for breaks tap into Maslovian needs: biological ones for the short-timeframe breaks and higher, holistic needs pertaining to the longer timeframes. I’m going to assert that something similar exists with regard to motivation, and examine six timeframes: minutes, hours, days, weeks, months, and years.

1. O(minutes): Flow

This may be the most important. Flow is a state of consciousness characterized by intense focus on a challenging problem. It’s a large part of what makes, for example, games enjoyable. It impels us toward productive activities, such as writing, cooking, exercise, and programming. It’s something we need for deep-seated psychological reasons, and when people don’t get it, they tend to become bored, bitter, and neurotic. For a word of warning, while flow can be productive, it can also be destructive if directed toward the wrong purposes. Gambling and video game addictions are probably reinforced, in part, by the anxiolytic properties of the flow state. In general, however, flow is a good thing, and the only thing that keeps people able to stand the office existence is the ability to get into flow and work.

Programming is all about flow. That’s a major part of why we do it. Software engineers get their best work done in a state of flow, but unfortunately, flow isn’t common for most. I would say that the median software engineer spends 15 to 120 minutes per week in a state of flow, and some never get into it. The rest of the time is lost to meetings, interruptions, breakages caused by inappropriate tools or bad legacy code, and context switches. Even short interruptions can easily cause a 30-minute loss. I’ve seen managers harass engineers with frequent status pings (2 to 4 per day) resulting in a collapse of productivity (which leads to more aggressive management, creating a death spiral). The typical office environment, in truth, is quite hostile to flow.
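
To put rough numbers on the cost of those pings, here’s a toy model. The 30-minute loss per interruption comes from the paragraph above; the amount of otherwise-free time and the even spacing of interruptions are my simplifications, and the model ignores meetings and tool breakage entirely:

    # How much flow survives a day of interruptions? Assume `focus_hours` of
    # otherwise-unbroken time, a 30-minute re-entry cost after each
    # interruption, and evenly spaced pings. A deliberately crude model.

    def daily_flow_minutes(pings_per_day, focus_hours=5.0, reentry=30):
        total = focus_hours * 60
        segments = pings_per_day + 1        # pings split the day into chunks
        chunk = total / segments
        usable = max(0.0, chunk - reentry)  # each chunk pays the re-entry cost
        return usable * segments

    for pings in (0, 2, 3, 4, 8):
        print(f"{pings} pings/day -> {daily_flow_minutes(pings):5.0f} min of flow")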

To achieve flow, engineers have to trust their environment. They have to believe that (barring an issue of very high priority and emergency) they won’t be interrupted by managers or co-workers. They need to have faith that their tools won’t break and that their environment hasn’t changed in a catastrophic way due to a mistake or an ill-advised change by programmers responsible for another component. If they’re “on alert”, they won’t get into flow and won’t be very productive. Since most office cultures grew up in the early 20th century when the going philosophy was that workers had to be intimidated or they would slack off, the result is not much flow and low productivity.

What tasks encourage or break flow is a complex question. Debugging can break flow, or it can be flowful. I enjoy it (and can maintain flow) when the debugging is teaching me something new about a system I care about (especially if it’s my own). It’s rare, though, that an engineer can achieve flow while maintaining badly written code, which is a major reason why engineers tend to prefer new development over maintenance. Trying to understand bad software (and most in-house corporate software is terrible) creates a lot of “pinging” for the unfortunate person who has to absorb several disparate contexts in order to make sense of what’s going on. Reading good code is like reading a well-written academic paper: an opportunity to see how a problem was solved, with some obvious effort put into the presentation and aesthetics of the solution. It’s actually quite enjoyable. On the other hand, reading bad code is like reading 100 paragraphs, all clipped from different sources. There’s no coherency or aesthetics, and the flow-inducing “click” (or “aha” experience) when a person makes a connection between two concepts almost never occurs. The problem with reading code is that, although good code is educational, there’s very little learned in reading bad code aside from the parochial idiosyncrasies of a badly-designed system, and there’s a hell of a lot of bad code out there.

Perhaps surprisingly, whether a programmer can achieve “flow”, which will influence her minute-by-minute happiness, has almost nothing to do with the macroscopic interestingness of the project or company’s mission. Programmers, left alone, can achieve flow and be happy writing the sorts of enterprise business apps that they’re “supposed” to hate. And if their environment is so broken that flow is impossible, the most interesting, challenging, or “sexy” work won’t change that. Once, I saw someone leave an otherwise ideal machine learning quant job because of “boredom”, and I’m pretty sure his boredom had nothing to do with the quality of the work (which was enviable) but with the extremely noisy environment of a trading desk.

This also explains why “snobby” elite programmers tend to hate IDEs, the Windows operating system, and anything that forces them to use the mouse when key-combinations would suffice. Using the mouse and fiddling with windows can break flow. Keyboarding doesn’t. Of course, there are times when the mouse and GUI are superior. Web surfing is one example, and writing blog posts (WordPress instead of emacs) is another. Programming, on the other hand, is done using the keyboard, not drag-and-drop menus. The latter are a major distraction.

2. O(hours): Feedback

Flow is the essence here, but what keeps the “flow” going? The environmental needs are discussed above, but some sorts of work are more conducive to flow than others. People need a quick feedback cycle. One of the reasons that “data science” and machine learning projects are so highly desired is that there’s a lot of feedback: it’s objective, and it comes on a daily basis, in contrast to enterprise projects, which are developed in one world over months (with no real-world feedback) and released in another. You can run your algorithms against real data and watch your search strategies unfold in front of you while your residual sum of squares (error) decreases.
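
As a tiny, concrete example of that feedback loop, here’s a least-squares fit by gradient descent that prints the residual sum of squares as it falls. The data and step size are made up for illustration:

    # Fit y = a*x + b to a few points by gradient descent, and watch the
    # residual sum of squares (RSS) shrink. Data and step size are made up.

    xs = [1.0, 2.0, 3.0, 4.0, 5.0]
    ys = [2.1, 4.3, 6.2, 8.1, 9.9]          # roughly y = 2x

    a, b, lr = 0.0, 0.0, 0.01

    for step in range(501):
        errs = [(a * x + b) - y for x, y in zip(xs, ys)]
        if step % 100 == 0:
            print(f"step {step:3d}: RSS = {sum(e * e for e in errs):.4f}")
        a -= lr * 2 * sum(e * x for e, x in zip(errs, xs))
        b -= lr * 2 * sum(errs)

    print(f"fitted: y ~ {a:.2f}*x + {b:.2f}")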

Feedback needs to be objective or positive in order to keep people enthusiastic about their work. Positive feedback is always pleasant, so long as it’s meaningful. Objective, negative feedback can be useful as well. For example, debugging can be fun, because it points out one’s own mistakes and enables a person to improve the program. The same holds for problems that turn out to be more difficult than originally expected: it’s painful, but something is learned. What never works well is subjective, negative feedback (such as bad performance reviews, or aggressive micromanagement that shows a lack of trust). That pisses people off.

I think it should go without saying that this style of “feedback” can’t be explicitly provided on an hourly basis, because it’s unreasonable to expect managers to do that much work (or an employee to put up with such a high frequency of managerial interruption). So, the feedback has to come organically from the work itself. This means there need to be genuine challenges and achievements involved. So most of this feedback is “passive”, by which I mean there is nothing the company or manager does to inject the feedback into the process. The engineer’s experience completing the work provides the feedback itself.

One source of frustration and negative feedback that I consider subjective (and therefore place in that “negative, subjective feedback that pisses people off” category) is the jarring experience of working with badly-designed software. Good software is easy to use and makes the user feel more intelligent. Bad software is hard to use, often impossible to read at the source level, and makes the user or reader feel absolutely stupid. When you have this experience, it’s hard to tell if you are rejecting the ugly code (because it is distasteful) or if it is rejecting you (because you’re not smart enough to understand it). Well, I would say that it doesn’t matter. If the code “rejects” competent developers, it’s shitty code. Fix it.

The “feedback rate” is at the heart of many language debates. High-productivity languages like Python, Scala, and Clojure allow programmers to implement significant functionality in mere hours. On my best projects, I’ve written 500 lines of good code in a day (by the corporate standard, that’s about two months of an engineer’s time). That provides a lot of feedback very quickly and establishes a virtuous cycle: feedback leads to engagement, which leads to flow, which leads to productivity, which leads to more feedback. With lower-level languages like C and Java– which are sometimes the absolute right tools to use for one’s problem, especially when tight control of performance is needed– macroscopic progress is usually a lot slower. This isn’t an issue if the performance metric the programmer cares about lives at a lower level (e.g. speed of execution, limited memory use) and the tools available to her are giving good indications of her success. Then there is enough feedback. There’s nothing innate that makes Clojure more “flow-ful” than C; it’s just more rewarding to use Clojure if one is focused on macroscopic development, while C is more rewarding (in fact, probably the only appropriate language) when one is focused on a class of performance concerns that require attention to low-level details. The problem is that when people use inappropriate tools (e.g. C++ for complex, but not latency-sensitive, web applications) they are unable to get useful feedback, in a timely manner, about the performance of their solutions.

Feedback is at the heart of the “gameification” obsession that has grown up of late, but in my opinion, it should be absolutely unnecessary. “Gameification” feels, to me, like an after-the-fact patch, if not an apology, applied where fundamental changes are necessary. The problem, in the workplace, is that these “game” mechanisms often evolve into high-stakes performance measurements. Then there is too much anxiety for the “gameified” workplace to be fun.

In Java culture, the feedback issue is a severe problem, because development is often slow and the tools and culture tend to sterilize the development process by eliminating that “cosmic horror” (which elite programmers prefer) known as the command line. While IDEs do a great job of reducing flow-breakage that occurs for those unfortunate enough to be maintaining others’ code, they also create a world in which the engineers are alienated from computation and problem-solving. They don’t compile, build, or run code; they tweak pieces of giant systems that run far away in production and are supported by whoever drew the short straw and became “the 3:00 am guy”.

IDEs have some major benefits but some severe drawbacks. They’re good to the extent that they allow people to read code without breaking flow; they’re bad to the extent that they tend to require use patterns that break flow. The best solution, in my opinion, to the IDE problem is to have a read-only IDE served on the web. Engineers write code using a real editor, work at the command line so they are actually using a computer instead of an app, and do almost all of their work in a keyboard-driven environment. However, when they need to navigate others’ code in volume, the surfing (and possibly debugging) capabilities offered by IDEs should be available to them.

3. O(days): Progress

Flow and feedback are nice, but in the long term, programmers need to feel like they’re accomplishing something, or they’ll get bored. The feedback should show continual improvement and mastery. The day scale is the level at which programmers want to see genuine improvements. The same task shouldn’t be repeated more than a couple times: if it’s dull, automate it away. If the work environment is so constrained and slow that a programmer can’t log, on average, one meaningful accomplishment (feature added, bug fixed) per day, something is seriously wrong.  (Of course, most corporate programmers would be thrilled to fix one bug per week.)

The day-by-day level and the need for a sense of progress is where managers and engineers start to understand each other. They both want to see progress on a daily basis. So there’s a meeting point there. Unfortunately, managers have a tendency to pursue this in a counterproductive way, often inadvertently creating a Heisenberg problem (observation corrupts the observed) in their insistence on visibility into progress. I think that the increasing prevalence of Jira, for example, is dangerous, because increasing managerial oversight at a fine-grained level creates anxiety and makes flow impossible. I also think that most “agile” practices do more harm than good, and that much of the “scrum” movement is flat-out stupid. I don’t think it’s good for managers to expect detailed progress reports (a daily standup focused on blockers is probably ok) on a daily basis– that’s too much overhead and flow breakage– but this is the cycle at which engineers tend to audit themselves, and they won’t be happy in an environment where they end the day not feeling that they worked a day.

4. O(weeks): Support.

Progress is good, but as programmers, we tend toward a trait that the rest of the world sees only in the moody and blue: “depressive realism”. It’s as strong a tendency among the mentally healthy of us as among the larger-than-baseline percentage of us who have mental health issues. For us, it’s not depressive. Managers are told every day how awesome they are by their subordinates, despite the fact that more than half of the managers in the world are useless. We, on the other hand, have subordinates (computers) that frequently tell us that we fucked up by giving them nonsensical instructions. “Fix this shit because I can’t compile it.” We tend to have an uncanny (by business standards) sense of our own limitations. We also know (on the topic of progress) that we’ll have good days and bad days. We’ll have weeks where we don’t accomplish anything measurable because (a) we were “blocked”, needing someone else to complete work before we could continue, (b) we had to deal with horrendous legacy code or maintenance work– massive productivity destroyers– or (c) the problem we’re trying to solve is extremely hard, or even impossible, but it took us a long time to fairly evaluate the problem and reach that conclusion.

Programmers want an environment that removes work-stopping issues, or “blockers”, and that gives them the benefit of the doubt. Engineers want the counterproductive among them to be mentored or terminated– the really bad ones just have to be let go– but they won’t show any loyalty to a manager if they perceive that he’d give them grief over a slow month. This is why so-called “performance improvement plans” (PIPs)– a bad idea in any case– are disastrous failures with engineers. Even the language is offensive, because it suggests with certainty that an observed productivity problem (and most corporate engineers have productivity problems because most corporate software environments are utterly broken and hostile to productivity) is a performance problem, and not something else. An engineer will not give one mote of loyalty to a manager that doesn’t give her the benefit of the doubt.

I choose “weeks” as the timeframe order of magnitude for this need because that’s the approximate frequency with which an engineer can be expected to encounter blockers, and removal of these is one thing that engineers often need from their managers: resolution of work-stopping issues that may require additional resources or (in rare cases) managerial intervention. However, that frequency can vary dramatically.

5. O(months): Career Development.

This is one that gets a bit sensitive. Career development becomes crucial on the order of months, which is much sooner than most employers would like to see their subordinates insisting on advancement. But as programmers, we know we’re worth an order of magnitude more than we’re paid, and we expect part of our compensation to come in the form of our employers investing in our long-term career interests. This is probably the most important of the 6 items listed here.

Programmers face a job market that’s unusually meritocratic when changing jobs. Within companies, the promotion process is just as political and bizarre as it is for any other profession, but when looking for a new job, programmers are evaluated not on their past job titles and corporate associations, but on what they actually know. This is quite a good thing overall, because it means we can get promotions and raises (often having to change corporate allegiance in order to do so, but that’s a minor cost) just by learning things, but it also makes for an environment that doesn’t allow for intellectual stagnation. Yet most of the work that software engineers have to do is not very educational and, if done for too long, that sort of work leads in the wrong direction.

When programmers say about their jobs, “I’m not learning”, what they often mean is, “The work I am getting hurts my career.” Most employees in most jobs are trained to start asking for career advancement at 18 months, and to speak softly over the first 36. Most people can afford one to three years of dues paying. Programmers can’t. Programmers, if they see a project that can help their career and that is useful to the firm, expect the right to work on it right away. That rubs a lot of managers the wrong way, but it shouldn’t, because it’s a natural reaction to a career environment that requires actual skill and knowledge. In most companies, there really isn’t a required competence for leadership positions, so seniority is the deciding factor. Engineering couldn’t be more different, and the lifetime cost of two years’ dues-paying can be several hundred thousand dollars.

In software, good projects tend to beget good projects, and bad projects beget more crap work. People are quickly typecast to a level of competence based on what they’ve done, and they have a hard time upgrading, even if their level of ability is above what they’ve been assigned. People who do well on grunt work get more of it, people who do poorly get flushed out, and those who manage their performance precisely to the median can get ahead, but only if managers don’t figure out what they’re doing. As engineers, we understand the career dynamic very well, and quickly become resentful of management that isn’t taking this to heart. We’ll do an unpleasant project now and then– we understand that grungy jobs need to be done sometimes– but we expect to be compensated (promotion, visible recognition, better projects) for doing it. Most managers think they can get an undesirable project done just by threatening to fire someone if the work isn’t done, and that results in adverse selection. Good engineers leave, while bad engineers stay, suffer, and do it– but poorly.

Career-wise, the audit frequency for the best engineers is about 2 months. In most careers, people progress by putting in time, being seen, and gradually winning others’ trust, and actual skill growth is tertiary. That’s not true for us, or at least, not in the same way. We can’t afford to spend years paying dues while not learning anything. That will put us one or two full technology stacks behind the curve with respect to the outside world.

There’s a tension employees face between internal (within a company) and external (within their industry) career optimization. Paying dues is an internal optimization– it makes the people near you like you more, and therefore more likely to offer favors in the future– but confers almost no external-oriented benefit. It was worthwhile in the era of the paternalistic corporation, lifelong employment, and a huge stigma attached to changing jobs (much less getting fired) more than two or three times in one career. It makes much less sense now, so most people focus on the external game. Engineers who focus on the external objective are said to be “optimizing for learning” (or, sometimes, “stealing an education” from the boss). There are several advantages to focusing on the external career game. First, external career advancement is not zero-sum– while jockeying internally for scarce leadership positions is– and what we do is innately cooperative. It works better with the type of people we are. Second, our average job tenure is about 2 to 3 years. Third, people who suffer and pay dues are usually passed over anyway in favor of more skilled candidates from outside. Our industry has figured out that it needs skilled people more than it needs reliable dues-payers (and it’s probably right). This explains, in my view, why software engineers are so aggressive and insistent about optimizing for learning.

There is a solution for this, and although it seems radical, I’m convinced that it’s the only thing that actually works: open allocation. If programmers are allowed to choose the projects best suited to their skills and aspirations, the deep conflict of interest that otherwise exists among their work, their careers, and their educational aspirations will disappear.

6. O(years): Macroscopic goals.

On the timescale of years, macroscopic goals become important. Money and networking opportunities are major concerns here. So are artistic and business visions. Some engineers want to create the world’s best video game, solve hard mathematical problems, or improve the technological ecosystem. Others want to retire early or build a network that will set them up for life.

Many startups focus on “change the world” macroscopic pitches about how their product will connect people, “disrupt” a hated industry, democratize a utility, or achieve some other world-changing ambition. This makes great marketing copy for recruiters, but it doesn’t motivate people on a day-to-day basis. On a year-by-year basis, none of that marketing matters, because people will actually know the character of the organization after that much time. That said, the actual macroscopic character of a business, and the meaning of its work, matter a great deal. Over years and decades, they determine whether people will stick around once they develop the credibility, connections, and resources that would give them the ability to move on to something more lucrative, more interesting, or of higher status.

How to win

It’s conventional wisdom in software that hiring the best engineers is an arbitrage, because they’re 10 times as effective but only twice as costly. This is only true if they’re motivated, and if they’re put to work that unlocks their talent. If you assign a great engineer to mediocre work, you’re going to lose money. Software companies put an enormous amount of effort into “collecting” talent, but do a shoddy job of using or keeping it. Often, this is justified in the name of a “tough culture”: turnover is blamed on failing employees rather than on a bad culture. In the long term, this is ruinous. The payoff from top talent compounds with the time and effort put into attracting, retaining, and improving it.

Now that I’ve discussed what engineers need from their work environments in order to remain motivated, the next question is what a company should do. There isn’t a one-size-fits-all managerial solution to this. In most cases, the best general move is to reduce managerial control and to empower engineers: to set up an open-allocation work environment in which technical choices and project direction are set by engineers, and to direct through leadership rather than mere authority. This may seem “radical” in contrast to the typical corporate environment, but it’s the only thing that works.

The Great Discouragement, and how to escape it.

I’ve recently taken an interest in the concept of the technological “Singularity”, referring to the acceleration of economic growth and social change brought about by escalating technological progress, and the potential for extreme growth (thousands of times faster than what exists now) in the future. People sometimes use “exponential” to refer to fast growth, but the reality is that (a) exponential curves do not always grow fast, and (b) economic growth has actually been faster than exponential to this point.

Life is estimated to be nearly 4 billion years old, but sexual reproduction and multicellular life are only about a billion years old. In other words, for most of its time in existence, life was relatively primitive, and growth itself was slow. Organisms themselves could reproduce quickly, but they died just as fast, and the overall change was minimal. This was true until the Cambrian Explosion, about 530 million years ago, when it accelerated. Evolution has been speeding up over time. If we represent “growth” in terms such as energy capture, energy efficiency, and neural complexity, we see that biological evolution has a faster-than-exponential “hockey stick” growth pattern. Growth was very slow for a long time, then the rate sped up.

One might model pre-Cambrian life’s growth rate at below 0.0000001% per year (note: these numbers are all rough estimates), but by the age of animals it was closer to 0.000001% per year, or a doubling (of neural sophistication) every 70 million years or so, and several times faster than that in the primate era. Late in the age of animals, creatures such as birds and mammals could adapt rapidly, taking appreciably different forms in a mere few hundred thousand years. With the advent of tools and especially language (which had effects on assortative mating, and created culture) the growth rate, now factoring in culture and organization as well as evolutionary changes, skyrocketed to a blazing 0.00001% per year in the age of hominids. Then came modern humans.

Data on the economic growth of human society paint a similar picture: accelerating exponential growth. Pre-agricultural humans plodded along at about 0.0004% per year (still an order of magnitude faster than evolutionary change) and with the emergence of agriculture around 10000 B.C.E., that rate sped up, again, to 0.006% per year. This fostered the growth of urban, literate civilization (around 3000 B.C.E.) and that boosted the growth rate to a whopping 0.1% per year, which was the prevailing economic growth rate for the world up until the Renaissance (1400 C.E.).

This level of growth– a doubling every 700 years– is rapid by the standards of most of the Earth’s history. It’s so obscenely fast that many animal and plant species have, unfortunately, been unable to adapt. They’re gone forever, and there’s a credible risk that we do ourselves in as well (although I find that unlikely). Agricultural humans increased their range by miles per year and increased the earth’s carrying capacity by orders of magnitude. Despite this progress, such a rate would be invisible to the people living in this 4,400-year span. No one had the global picture, and human lives aren’t long enough for anyone to have seen the underlying trend of progress, as opposed to the much more severe, local ups and downs. Tribes wiped each other out. Empires rose and fell. Religions were born, died, and were forgotten. Civilizations that grew too fast faced enemies (such as China, which likely would have undergone the Industrial Revolution in the 13th century had it not been susceptible to Mongol invasions). Finally, economic growth that occurred in this era was often absorbed entirely (and then some) by population growth. A convincing case can be made that the average person’s quality of life changed very little from 10000 B.C.E. to 1800 C.E., when economic growth began (for the first time) to outpace population growth.

In the 15th to 17th centuries, growth accelerated to about 0.3 percent per year: triple the baseline agricultural rate. In the 18th century, with the early stages of the Industrial Revolution, the Age of Reason, and the advent of rational government (as observed in the American experiment and the French Revolution), it was 0.8 percent per year. By this point, progress was visible. Whether this advancement is desirable has never been without controversy, but by the 18th century, that it was occurring was without question. At that rate of progress, one would see a doubling of the gross world product in a long human life.

Even Malthus, the archetypal futurist pessimist, observed progress in 1798, but he made the mistake of assuming agrarian productivity to be a linear function of time, while correctly observing population growth to be exponential. In fact, economic growth has always been exponential: it was just a very slow (at that time, about 1% per year) exponential function that looked linear. On the other hand, his insight– that population growth would outpace food production capacity, leading to disaster– would have been correct, had the Industrial Revolution (then in its infancy) not accelerated. (Malthusian catastrophes are very common in history.) The gross world product increased more than six-fold in the 19th century, rising at a rate of 1.8 percent per year. Over the 20th century, it continued to accelerate, with economic growth at its highest in the 1960s, at 5.7 percent per year– or a doubling every 150 months. We’re now a society that describes lower-than-average but positive growth as a “recession”.
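As a quick sanity check on the doubling times quoted above, here is the arithmetic (a sketch using the rule T = ln 2 / ln(1 + r) for a constant annual growth rate r, applied to the rough estimates in the text):

```python
import math

# Doubling time in years for a constant annual growth rate r.
def doubling_time(r):
    return math.log(2) / math.log(1 + r)

for label, r in [
    ("age of animals, ~0.000001%/yr", 1e-8),
    ("agrarian era, ~0.1%/yr",        1e-3),
    ("19th century, ~1.8%/yr",        0.018),
    ("1960s peak, ~5.7%/yr",          0.057),
]:
    print(f"{label}: doubles every {doubling_time(r):,.1f} years")
```

This reproduces the figures above: roughly 70 million years for the age of animals, about 700 years for the agrarian baseline, and about 12.5 years (150 months) at the 1960s peak.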

In that sense, we’re also “in decline”. We’ve stopped growing at anything near our 1960s peak rate. We’re now plodding along at about 4.2 percent per year, if the last three decades are any indication. Most countries in the developed world would be happy to grow at half that rate.

The above numbers, and the rapid increase in the growth rate itself, describe the data behind the concept of “The Singularity”. Exponential growth emerges as a consequence of the differential equation dy/dt = a * y, whose solution is an exponential function. Logistic growth is derived from the related equation dy/dt = a * y * (1 - y/L), where L is an upper limit or “carrying capacity”. Such limitations always exist, but I think that, with regard to economic growth, that limit is very far away– far enough away that we can ignore it for now. However, what we’ve observed is much faster than exponential growth, since the growth rate itself seems to be accelerating (also at a faster-than-exponential rate). So what is the correct way to model it?

One class of models for such a phenomenon is derived from the differential equation dy/dt = a*y^(1+b), where b > 0. The solution to this differential equation is of the form y = C/(D-t)^(1/b) for constants C and D, the result of which is that as t -> D, growth becomes infinite. Hence, the name “Singularity”. No one actually believes that economic progress will become literally infinite, but the Singularity is taken to be the point at which we will have landed comfortably in a post-scarcity, indefinite-lifespan existence. These two concepts are intimately connected and I would consider them identical. Time is the only scarce element in the life of a person middle-class or higher, and it is extremely scarce as long as our lifespans are so short compared to the complexity of the modern world (a person only gets to have one or two careers). Additionally, if people live “forever” (by which I mean millions of years, if they wish) then there will be an easy response to not being able to afford something: wait until you can. There will still be differences in status among post-scarcity people (some being at the end of a five-year waiting list for lunar tourism, and the richest paying a premium for the prestige of having human servants) and probably some people will care deeply about them, but on the whole, I think these differences will be trivial and people will (over time) develop an immunity to the emotional problems of extreme abundance.
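For the curious, here is a sketch of why the blowup happens at a finite time D (separating variables; D is the constant of integration, fixed by the initial condition):

```latex
\frac{dy}{dt} = a\,y^{1+b}
\;\Longrightarrow\;
\int y^{-(1+b)}\,dy = \int a\,dt
\;\Longrightarrow\;
-\frac{1}{b}\,y^{-b} = a\,t - a\,D
\;\Longrightarrow\;
y(t) = \bigl(a\,b\,(D - t)\bigr)^{-1/b}
```

This matches the form C/(D-t)^(1/b) above, with C = (ab)^(-1/b), and it grows without bound as t approaches D. When b = 0, the equation reduces to ordinary exponential growth and the blowup disappears.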

I should note that there are also dystopian Singularity possibilities, such as in The Matrix, in which machines become sentient and overthrow humans. I find this extremely far-fetched, because most artificial intelligence (to date) is still human intelligence applied to difficult statistical problems. We use machines to do things that we’re bad at, like multiply huge matrices in fractions of a second, and analyze game trees at 40-ply depth. I don’t see machines becoming “like us” because we’ll never have a need for them to be so. We’ll replicate functionality we want in order to solve menial tasks (with an increasingly sophisticated category of tasks being considered “menial”) but we won’t replicate the difficult behaviors and needs of humans. I don’t think we’ll fall into the trap of creating a “strong AI” that overthrows us. Sad to say it, but we’ve been quite skilled, over the millennia, at dehumanizing humans (slavery) in the attempt to make ideal workers. The upshot of this is that we’re unlikely to go to the other extreme and attempt to humanize machines. We’ll make them extremely good at performing our grunt work and leave the “human” stuff to ourselves.

Also, I don’t think a “Singularity” (in the sense of infinite growth) is likely, because I don’t think the model that produces a singularity is correct. I think that economic and technical growth are accelerating, and that we may see a post-scarcity, age-less world as early as 2100. That said, the data show deceleration over the past 50 years (from 5-6 percent to 3-4 percent annual growth) so rather than rocketing toward such a world, we seem to be coasting. I would be willing to call the past 40 years, in the developed world, an era of malaise and cultural decline. It’s the Great Discouragement, culminating in a decade (the 2000s) of severe sociological contraction despite economic growth in the middle years, ending with a nightmare recession. What’s going on?

Roughly speaking, I think we can examine, and classify, historical periods by their growth rate, like so:

  • Evolutionary (below 0.0001% per year): 3.6 billion to 1 million BCE. Modern humans not yet on the scene.
  • Pre-Holocene (0.0001% to 0.01% per year): 1 million to 10,000 BCE.
  • Agrarian (0.01 to 1.0% per year): 10,000 BCE to 1800 CE. Most of written human history occurred during this time. Growth was slower than population increase, hence frequent Malthusian conflict. Most labor was coerced.
  • Industrial (1.0 to 10.0% per year): 1800 CE to Present. Following the advent of rational government, increasing scientific literacy, and the curtailment of religious authority, production processes could be measured and improved at rapid rates. Coercive slavery was replaced by semi-coercive wage labor.
  • Technological (10.0 to 100.0+% per year): Future. This rate of growth hasn’t been observed in the world economy as a whole, ever, but we’re seeing it in technology already (Moore’s Law, cost of genome sequencing, data growth, scientific advances). We’re coming into a time where things that were once the domain of wizardry (read: impossible), such as reading other people’s dreams, can now be done. In the technological world, labor will be non-coercive, because the labor of highly motivated people is going to be worth 10 to 100 times more than that of poorly motivated people.

Each of these ages has a certain mentality that prospers in it, and that characterizes successful leadership in such a time. In the agrarian era, the world was approximately zero-sum, and the only way for a person to become rich was to enslave others and capture their labor, or kill them and take their resources. In the early industrial era, growth became real, but not fast enough to accommodate peoples’ material ambitions, creating a sense of continuing necessity for hierarchy, intimidation, and injustice in the working world. In a truly technological era (which we have not yet entered) the work will be so meaningful and rewarding (materially and subjectively) that such control structures won’t be necessary.

In essence, these economic eras diverge radically in their attitudes toward work. Agrarian-era leaders, if they wanted to be rich, could only do so by controlling more people. Kings and warlords were assessed on the size of their armies, chattel, and harems. Industrial-era leaders focused on improving mechanical processes and gaining control of capital. They ended slavery in favor of a freer arrangement, and workplace conditions improved somewhat, but were still coarse. Technological-era leadership doesn’t exist yet, in most of the world, but its focus seems to be on the deployment of human creativity to solve novel problems. In the technological world, a motivated and happy worker isn’t 25 or 50 percent more productive than an average one, but 10 times as effective. As one era evolves into the next, the leadership of the old one proves extremely ineffective.

The clergy and kings of antiquity were quite effective rulers in a world where almost no one could afford books, land was the most important form of wealth, and people needed a literate, historically-aware authority to tell them what to do with it. Those in authority had a deep understanding of the limitations of the world and the slow rate of human progress: much slower than population growth. They knew that life was pretty close to a zero-sum struggle, and much of religion focuses on humanity’s attempts to come to terms with such a nasty reality. These leaders also knew, in a macabre way, how to handle such a world: control reproduction, gain dominion over land through force, use religion to influence the culture and justify land “ownership”, and curtail population growth in small-scale massacres called “wars” instead of suffering famines or revolutions.

People like Johannes Gutenberg, Martin Luther, John Locke, Adam Smith, and Voltaire came late in the agrarian era and changed all that. Books became affordable to middle-class Europeans, and the Reformation followed within a century. This culminated in the philosophical movement known as The Enlightenment, in which Europe and North America disavowed rule based on “divine right” or heredity and began applying principles of science and philosophy to all areas of life. By 1750, there was a world in which the clerics and landlords of the agrarian era were terrible leaders. They didn’t know the first thing about the industrial world that was appearing right in front of them. Over the next couple hundred years, they were either violently overthrown (as in France) or allowed to decline gracefully out of influence (as in England).

The best political, economic, and scientific minds of that time could see a world growing at industrial rates that had previously been unheard of. The landowning dinosaurs from the agrarian era died out or lost power. This was not always an attractive picture, of course. One of the foremost conflicts between an industrial and an agrarian society was the American Civil War, an extremely traumatic conflict for both sides. Then there were the nightmarish World Wars of the early 20th century, which established that industrial societies can still be immensely barbaric. That said, the mentalities underlying these wars were not novel, and it wasn’t the industrial era that caused them, so much as it was a case of pre-industrial mentalities combining with industrial power, with very dangerous results.

For example, before Nazism inflamed it, racism in Germany was (although hideous) not unusual by European or world standards, then or at any point up to then. In fact, it was a normal attitude in England, the United States, Japan, and probably all of the other nation-states that were forming around that time. Racism, although I would argue it to be objectively immoral in any era, was a natural byproduct of a world whose leaders saw it necessary, for millennia, to justify dispossession, enslavement, and massacre of strangers. What the 1940s taught us, in an extreme way, is that this hangover from pre-industrial humanity, an execrable pocket of non-Reason that had persisted into industrial time, could not be accepted.

The First Enlightenment began when leading philosophers and statesmen realized that industrial rates of growth were possible in a still mostly agrarian world, and they began to work toward the sort of world in which science and reason could reign. Now we have an industrial economy, but our world is still philosophically, culturally and rationally illiterate, even in the leading ranks. Still, we live on the beginning fringe of what might be (although it is too early to tell) a “Second Enlightenment”. We now have an increasing number of technological thinkers in science and academia. We see such thinking on forums like Hacker News, Quora, and some corners of Reddit. It’s “nerd culture”. However, by and large, the world is still run by industrial minds (and the mentality underlying American religious conservatism is distinctly pre-industrial). This is the malaise that top computer programmers face in their day jobs. They have the talent and inclination to work to turn $1.00 into $2.00 on difficult, “sexy” problems (such as machine learning, bioinformatics, and the sociological problems solved by many startups) but they work for companies and managers that have spent decades perfecting the boring, reliable processes that turn $1.00 into $1.04, and I would guess that this is the kind of work with which 90% of our best technical minds are engaged: boring business bullshit instead of the high-potential R&D work that can actually change the world. The corporate world still thinks in industrial (not technological) terms, and it always will. It’s an industrial-era institution, as much as baronies and totalitarian religion are agrarian-era beasts.

Modern “nerd culture” began in the late 1940s when the U.S. government and various corporations began funding basic research and ambitious engineering and scientific projects. This produced immense prosperity, rapid growth, and an era of optimism and peace. It enabled us to land a man on the moon in 1969. (We haven’t been back since 1972.) It built Silicon Valley. It looked like the transition from industrial to technological society (with 10+ percent annual economic growth) was underway. An American in 1969 might have perceived that the Second Enlightenment was underway, with the Civil Rights Act, enormous amounts of government funding for scientific research, and a society whose leaders were, by and large, focused on ending poverty.

Then… something happened. We forgot where we came from. We took the great infrastructure that a previous generation had built for granted, and let it decay. As the memory of the Gilded Age (brought to us by a parasitic elite) and the Great Depression faded, elitism became sexy again. Woodstock, Civil Rights, NASA and “the rising tide that lifts all boats” gave way to Studio 54 and the Reagan Era. Basic research was cut for its lack of short-term profit, and because the “take charge” executives (read: demented simians) that raided their companies couldn’t understand what those people did all day. (They talk about math over their two-hour lunches? They can’t be doing anything important! Fire ’em all!) Academia melted down entirely, with tenure-track jobs becoming very scarce. America lost its collective vision entirely. The 2001 vision of flying cars and robot maids for all was replaced with a shallow and nihilistic individual vision: get as rich as you can, so you have a goddamn lifeboat when this place burns the fuck down.

The United States entered the post-war era as an industrial leader. It rebuilt Europe and Japan after the war, lifted millions out of poverty, made a concerted (if still woefully incomplete) effort to end its own racism, and had enormous technical accomplishments. Yet now it’s in a disgraceful state, with people dying of preventable illnesses because they lack health insurance, and business innovation stagnant except in a few “star cities” with enormous costs of living, where the only things that get funded are curious but inconsequential sociological experiments. Funding for basic research has collapsed, and the political environment has veered to the far right wing. Barack Obama– who clearly has a Second Enlightenment era mind, if a conservative one in such a frame– has done an admirable job of fighting this trend (and he’s accomplished far more than his detractors, on the left and right, give him credit for) but one man alone cannot hold back the waterfall. The 2008 recession may have been the nadir of the Great Discouragement, or the trough may still be ahead of us. Right now, it’s too early to tell. We’re clearly not out of the mess, however.

How do we escape the Great Discouragement? To put it simply, we need different leadership. If the titans of our world and our time are people who can do no better than to turn $1.00 into $1.04, then we can’t expect more of them. If we let such people dominate our politics, then we’ll have a mediocre world. This is why we need the Second Enlightenment. The First brought us the idea of rational government: authority coming from laws and structure rather than charismatic personalities, heredity, or religious claims. In the developed world, it worked! We don’t have an oppressive government in the United States. (We may have an inefficient one, and we have some very irrational politicians, but the system is shockingly robust when one considers the kinds of charismatic morons who are voted into power on a fairly regular basis.) To the extent that the U.S. government is failing, it’s because the system has been corrupted by the unchecked corporate power that has stepped into the power vacuum created by a limited, libertarian government. Solving the nation’s economic and sociological problems, and the cultural residue associated with a lack of available, affordable education, will take us a long way toward fixing the political issues we have.

The Second Enlightenment will focus on a rational economy and a fair society. We need to apply scientific thought and philosophy to these domains, just as we did for politics in the 1700s when we got rid of our kings and vicars. I don’t know what the solution will end up looking like. Neither pure socialism nor pure capitalism will do: the “right answer” is very likely to be a hybrid of the two. It is clear to me, to some extent, what conditions this achievement will require. We’ll have to eliminate the effects of inherited wealth, accumulated social connection, and the extreme and bizarre tyranny of geography in determining a person’s economic fortune. We’ll have to dismantle the current corporate elite outright; no question on that one. Industrial corporations will still exist, just as agrarian institutions do, but the obscene power held by these well-connected bureaucrats, whose jobs involve no production, will have to disappear. Just as we ended the concept of a king’s “divine right” to rule, turning such people into mere figureheads, we’ll have to do the same with corporate “executives” and their similarly baseless claims to leadership.

We had the right ideas in the Age of Reason, and the victories from that time benefit us to this day, but we have to keep fighting to keep the lights on. If we begin to work at this, we might see post-scarcity humanity in a few generations. If we don’t, we risk driving headlong into another dark age.

Competing to excel vs. competing to suffer

One of the more emotionally charged concepts in our society is competition. Even the word evokes strong feelings, some positive and others adverse. For some, the association is of an impressive athletic or intellectual feat encouraged by a contest. For others, the image is one of congestion, scarcity, and degeneracy. The question I intend to examine is: Is competition good or bad? (The obvious answer is, “it depends.” So the real question is, “on what?”)

In economics, competition is regarded as an absolute necessity, and any Time Warner Cable customer will attest to the evils of monopolies: poor service, high costs, and an overall dismal situation that seems unlikely to improve. (I would argue that a monopoly situation has competition: between the sole supplier and the rest of the world. Ending the monopoly doesn’t “add” competition, but makes more fair the competition that already exists intrinsically.) Competition between firms is generally seen as better than any alternative. Competition within firms is generally regarded as corrosive, although this viewpoint isn’t without controversy.

It’s easy to find evidence that competition can be incredibly destructive. Moreover, competition in the general sense is inevitable. In a “non-competitive” business arrangement such as a monopoly or monopsony, competition is very much in force: just a very unfair variety of it. Is competition ever, however, intrinsically good? To answer this, it’s important to examine two drastically different kinds of competition: competing to excel, and competing to suffer.

Competition to excel is about doing something extremely well: possibly better than it has ever been done before. It’s not about beating the other guy. It’s about performing so well that very few people can reach that level. Was Shakespeare motivated by being better than a specific rival, or doing his own thing? Almost certainly, it was the latter. This style of competition can focus people toward goals that they might otherwise not see. When it exists, it can be a powerful motivator.

In a competition to excel, people describe the emotional frame as “competing against oneself” and enter a state comparable to a long-term analogue of flow. Any rivalries become tertiary concerns. This doesn’t mean that people in competitions to excel never care about relative performance. Everyone would rather be first place than second, so they do care about relative standing, even if absolute performance is given more weight. However, in a competition to excel, you’d rarely see someone take an action that deliberately harms other players. That would be bad sportsmanship: so far outside the spirit of the game that most would consider it cheating.

Competition to suffer is about absorbing more pain and making more sacrifices, or creating the appearance of superior sacrifice. It’s about being the last person to leave the office, even when there isn’t meaningful work left to do. It’s about taking on gnarly tasks with a wider smile on one’s face than the other guy. These contests become senseless wars of attrition. In the working world, sacrifice-oriented competitions tend to encourage an enormous amount of cheating, because people can’t realistically absorb that much pain and still perform at a decent level at basic tasks. With very few exceptions, these contests encourage a lot of bad behavior and are horrible for society.

What’s most common in the corporate world? In most companies, the people who advance are the ones who (a) visibly participate in shared suffering, (b) accept subordination the most easily, and (c) retain an acceptable performance under the highest load (rather than those who perform best under a humane load). People are measured, in most work environments, based on their decline curves (taken as a proxy for reliability) rather than their capability. So the corporate ladder is, for the most part, a suffering-oriented competition, not an excellence-oriented one. People wonder why we get so few creative, moral, or innovative people in the upper ranks of large corporations. This is why. The selection process is biased against them.

People who do well in one style of competition tend to perform poorly in the other, and the salient trait is context-sensitivity. Highly context-sensitive people, whose performance is strongly correlated with their interest in the work, their freedom from managerial interference, and their overall health, tend to be the most creative and the most capable of hitting high notes: they win at excellence-oriented contests, but they fail in the long, pointless slogs of corporate suffering contests. People with low context-sensitivity tend to be the last ones standing in suffering-oriented competitions, but they fail when excellence is required. Corporations are configured in such a way that they load up on the latter type in the upper ranks. Highly context-sensitive, creative people are decried as “not a team player” when their motivation drops (as if it were a conscious choice, when it’s probably a neurological effect) due to an environmental malady.

Suffering-oriented competitions focus on reliability and appearance: how attractively a person can do easy, stupid things. John’s TPS reports are as good as anyone else’s, but he has to be reminded to use the new cover sheet. Tom does his TPS reports with a smile on his face. Tom gets the promotion. Excellence-oriented competitions have much higher potential payoff. In the workplace, excellence-oriented environments have an R&D flavor: highly autonomous and implicitly trusted workers, and a long-term focus. After the short-sighted and mean-spirited cost-cutting of the past few decades, much of which has targeted R&D departments, there isn’t much excellence-oriented work left in the corporate world.

As businesses become risk-averse, they grow to favor reliability and conformity over creativity and excellence, which are intermittent (and therefore riskier) in nature. Suffering-oriented competitions dominate. Is this good for these companies? I don’t think so. Even in software, the most innovative sector right now, companies struggle so much at nurturing internal creativity that they feel forced to “acq-hire” mediocre startups at exorbitant prices in order to compensate for their own defective internal environments.

The other problem with suffering-oriented competitions is that it’s much easier to cheat, and antisocial behavior is more common. Excellence can’t be faked, and the best players inspire the others. People are encouraged to learn from the superior players, rather than trying to destroy them. In sacrifice-oriented competitions (in the corporate world, usually centered on perceived effort and conformity) the game frequently devolves into an arrangement where people spend more time trying to trip each other up, and not be tripped, than actually working.

Related to this topic, one of the more interesting financial theories is the Efficient Market Hypothesis. It’s not, of course, literally true. (Arbitrage is quite possible, for those with the computational resources.) It is, however, very close to being true. It provides reliable, excellent approximations of relationships between tradeable securities. At its heart, though, EMH isn’t about financial markets. It’s about competition itself, and who the prime movers are in a contest. Fair prices do not require all (possibly competing) parties to be maximally informed about the security being traded (since that’s obviously not the case). One well-informed participant (a market-maker) with enough liquidity is often enough to set prices at the fair level based on current knowledge. Competitions, in other words, tend to be dominated by a small set of players who are the “prime movers”. By whom? Excellence-oriented competitions are dominated by the best: the most skilled, capable, talented or energetic. Suffering-oriented competitions tend to be dominated by the stupidest, and by “stupidest”, I mean something that has nothing to do with intelligence but, rather, “willing to take the shittiest deal”.

That said, in the real world, the Gervais Principle applies. The stupidest (the Clueless) are a force, but those who can manipulate the competition from outside (i.e. cheat) tend to be the actual winners. They are the Sociopaths. The sociopaths shift blame, take credit, and seem to be the most reliable, best corporate citizens by holding up (socially and intellectually) under immense strain. The reality is that they aren’t suffering at all. Even if they can’t get a managerial role (and they usually can) they will find some way to delegate that shit. They win suffering-oriented competitions by externalizing the suffering, and remain socially pleasant and well-slept enough to take the rewards. So suffering-oriented competitions, if cheating is possible, aren’t really dominated by the stupidest, so much as by the slimiest.

Intuitively, people understand the difference between excellence- and suffering-oriented competition. Consider the controversy associated with doping in sports. These performance-enhancing drugs have horrific, long-term side effects. They take what would otherwise be a quintessential excellence-oriented competition and inject an element of inappropriate sacrifice: willingness to endure long-term health risks. The agent (performance-enhancing drugs) that turns an excellence competition into a sacrifice-oriented one must be disallowed. People have an emotional intuition that it’s cheating to use such a thing. Athletes are discouraged from destroying their long-term health in order to improve short-term performance, with the knowledge that allowing this behavior would require most or all of the top contestants to follow suit. But in the corporate world, no such ethics exist. Even six hours of physical inactivity (sitting at a desk) is bad for a person’s long-term health, but that’s remarkably common, and the use of performance-enhancing drugs that would not be required outside of the office context (such as benzodiazepine and stimulant overuse to compensate for the unhealthy environment, or after-hours “social” drinking) is widespread.

Why does corporate life so quickly devolve into competition to suffer? In truth, companies benefit little from these contests. Excellence has higher expected value than suffering. The issue is that companies don’t allow people to excel. It’s not that they flat-out forbid it, but that almost no one in a modern, so-called “lean” corporate environment has the autonomy that would make excellence even possible. R&D has been closed down, in most companies, for good. That leaves suffering as the single axis of competition. I think most people reading this know what kind of world we get out of this, and can see why it’s not acceptable.

The rent really is too damn high, and the surprising cause.

A thought experiment.

It’s Monday morning, and on the drive to work, you realize that you’re low on fuel. At the gas station, you discover that gasoline (petrol) now costs $40 per gallon ($10.56 per liter). That seems bizarre and expensive, so you don’t buy any. You pass that station and find another one, but the prices there are just as high. Curious about what’s happening and where you might get a better deal, you call people in other cities and find out that the price of gasoline has gone up enormously overnight, but no one seems to know why. Gas below $35 per gallon cannot be found. Assuming that we’re not in the midst of hyperinflation, which of the following three possibilities seems most likely?

  1. Income hypothesis: people suddenly have a lot more money. This isn’t inflation or a price hike, but a result of an increase in genuine wealth. The economy has grown several thousand percent overnight.
  2. Value hypothesis: the real value of gasoline has improved, presumably due to a new technology that can extract substantially more genuine utility out of the fuel. Perhaps 300-mile-per-gallon flying cars have just come online and people are now taking international trips.
  3. Calamity hypothesis: something bad has happened that has either constricted the supply of gasoline or created a desperate need to consume it, increasing demand. The sudden price hike is a consequence of this.

Most people would choose the third option, and they’d probably be right. Note that, from a broad-based aggregate perspective, the first two options represent “good” possibilities– price increases resulting from desirable circumstances– while the third of these is genuinely bad for society. I’ll get back to that in a second.

For gasoline, people understand this. From a stereotypical American perspective, high gasoline prices are “bad” and represent “the bad guys” (greedy oil CEOs, OPEC) winning. Even moderates like me who believe gas prices should be raised (through taxation) to account for externalized environmental costs are depicted as “radical environmentalists”. People tend to emotionalize and moralize prices: stock prices are “the good guys”– those “short sellers”, in this mythology, are just really bad people– gas prices are “the bad guys”, a strong dollar is God’s will made economics, and no one seems to care too strongly about the other stuff (bonds, metals, interest rates).

I don’t think that any of these exchange-traded commodities deserve a “good guys”/”bad guys” attitude, because investors are free to take positions as they wish. If they believe that oil is “too expensive”, they can buy futures. If they think a publicly-traded company is making “too much profit”, they can buy shares. (The evil of corporate America isn’t that companies make too much profit. It’s that the profits they do make often involve externalizing costs, and that they funnel money that should be profit, or put into higher wages, to well-connected non-producing parasites called “executives”.)

There is one price dynamic that has a strong moral component (in terms of its severe effect on people’s lives and the environment) and it’s also one where most Americans get “the good guys” wrong. It’s the price of housing. Housing prices represent “the good guys” in the popular econo-mythology, because Americans reflexively associate homeownership with future-orientation and virtue, but I disagree with this stance on the price of housing. People mistakenly believe that the average American homeowner is long on housing (i.e. he benefits if housing prices go up). Wrong. The resale value of his house is improving, but so is the cost of the housing (rental or another purchase) that he would have to pay if he were ever to sell it. A person’s real position on housing is how much housing the person owns, minus the amount that person must consume. Renters are (often involuntarily) in a short position on housing; resident homeowners are flat (i.e. neutral) but become short if they will need to buy more housing (e.g. after having children) in the future. In practice, even most owners of a single home would benefit more from low housing prices and rents than from high ones, because high prices mean higher property taxes and transaction costs. The few who benefit when real estate becomes more expensive are those who least need the help. In sum, the world is net short on real estate prices and rents. If nothing else changes, it’s bad when rents and house prices go up.
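A toy illustration of that “net position” idea, with made-up numbers: treat a person’s exposure as housing owned minus housing still to be consumed, and see who actually gains from a 10% across-the-board price rise.

```python
# Toy model of net housing exposure: units owned minus units of
# housing the person will still need to consume (rent or buy).
# All numbers are illustrative, not data.
people = {
    "renter":                {"owned": 0, "needed": 1},
    "owner-occupier":        {"owned": 1, "needed": 1},
    "owner, growing family": {"owned": 1, "needed": 2},
    "landlord":              {"owned": 3, "needed": 1},
}

unit_price = 300_000
price_rise = 0.10   # a 10% across-the-board increase

for name, p in people.items():
    net_units = p["owned"] - p["needed"]
    gain = net_units * unit_price * price_rise
    print(f"{name}: net position {net_units:+d} unit(s), "
          f"wealth change {gain:+,.0f}")
```

Only the person who owns more housing than he consumes comes out ahead; everyone else is flat or worse off, which is the sense in which the world is net short.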

If houses become more expensive because of corresponding increases in value, or because incomes increase, the increased cost of housing is an undesirable side effect of an otherwise good thing, and the net change for society is probably positive. These correspond to the first two hypotheses (income and value) that I proposed in the gas-price scenario. I think that these hypotheses explain much of why Americans tend to assume, reflexively, that rising house prices in a given location are a good thing: the assumption is that the location actually is improving, or that the economy is strong (it must be, if people can afford those prices). Rarely is the argument made that the root cause might be undesirable in nature.

An unusual position.

Here’s where I depart from convention. Most people who are well-informed know that high housing costs are an objectively bad thing: the rent really is too damn high. Few would argue, however, that the high rents in Manhattan and Silicon Valley are caused by something intrinsically undesirable, and I will. Most attribute the high prices to the desirability of the locations, or to the high incomes there. Those play a role, but not much of one– not enough to justify costs that are 3 to 10 times baseline. (The median income in Manhattan is not 10 times the national average.) I think that high housing costs, relative to income, almost invariably stem from a bad cause and, at the end of this exposition, I’ll name it. Before that, let’s focus on the recent housing problems observed in New York and Silicon Valley, where rents can easily consume a quarter to a half of a working professional’s income, and buying a family-ready apartment or house is (except for the very wealthy) outright unaffordable. Incomes for working people in these regions are high, but not nearly high enough to justify the prices. Additionally, Manhattan housing costs are rising even in spite of the slow demise of Wall Street bonuses. So the income hypothesis is out. I doubt the value hypothesis as well, because although Manhattan and San Francisco are desirable places to live and will always command a premium for that, “desirability” factors (other than proximity to income) just don’t change with enough speed or magnitude to justify what happens on the real estate market. For example, the claim that California’s high real estate prices are caused by a “weather premium” is absurd; California had the same weather 20 years ago, when prices were reasonable. Desirability factors are invented to justify price moves observed on the market (“people must really want to live here because <X>”) but are very rarely the actual movers, because objectively desirable places to live (ignoring the difficulty of securing proximity to income, which is what keeps urban real estate expensive) are not uncommon. So what is causing the rents, and the prices, to rise?

Price inelasticity.

Consider the gasoline example. Is $40 per gallon gasoline plausible? Yes. It’s unlikely, but under the right conditions it could happen. Gasoline is an inelastic good, which means that small changes in the available supply cause large movements in price. Most Americans, if asked what the price effect of a 2-percent drop in gasoline (or petroleum) supply would be, would expect a commensurate (2%) increase– possibly 5 or 10 percent on account of “price gouging”. That’s wrong. The price could double. The inelasticity of petroleum and related products was observed directly in the 1970s “oil shocks”.
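To see why a small supply shock can move the price so much, here’s a back-of-the-envelope sketch assuming a constant-elasticity demand curve; the elasticity value is an illustrative assumption, not an empirical estimate for gasoline:

```python
# Back-of-the-envelope sketch of price inelasticity, assuming a constant-elasticity
# demand curve Q = A * P**epsilon. The elasticity below is an illustrative
# assumption, not an empirical estimate for gasoline.

def price_multiplier(supply_change: float, elasticity: float) -> float:
    """Price ratio P_new / P_old when quantity changes by supply_change (e.g. -0.02 for a 2% drop)."""
    quantity_ratio = 1.0 + supply_change
    return quantity_ratio ** (1.0 / elasticity)

epsilon = -0.03  # assumed short-run demand elasticity for a necessity: very inelastic
for drop in (-0.01, -0.02, -0.05):
    print(f"{abs(drop):.0%} supply drop -> price rises roughly {price_multiplier(drop, epsilon):.1f}x")
```

With that assumed elasticity, a 2-percent supply drop roughly doubles the price– the kind of move the 1970s shocks produced.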

Necessities (such as medicines, addictive drugs, housing, fuel, and food) exhibit this inelasticity. What price inelasticity means is that reducing (or, in the case of real estate, damaging) the quantity supplied can increase the aggregate market value of what remains, usually at the expense of society. For a model example, consider a city with 1 million housing units for which the market value of living there (as reflected in the rent) is $1,000 per month. If we value each unit at 150 times monthly rent, each is worth $150,000 and the total real estate value is $150 billion. Now, let’s assume that 50,000 units of housing (5 percent) are destroyed in a catastrophe (a crime epidemic, rapid urban decay, a natural disaster). What happens? A few people will leave the city, but most won’t, because their jobs will still be there. Many will stay, competing for a smaller supply of housing. Rents will increase, and given the inelasticity of housing, a plausible new rent is $2,000 per month. Now each housing unit is worth $300,000, and with 950,000 of them, the total real estate value of the city is $285 billion. Did the city magically become a better place to live? Far from it. It became a worse place to live, but real estate became more expensive.
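The model city’s arithmetic can be written out directly; the post-catastrophe rent of $2,000 per month is the same illustrative guess used in the paragraph above:

```python
# The model-city arithmetic from the paragraph above, written out explicitly.
# The post-catastrophe rent ($2,000/month) is the same illustrative guess used in the text.

PRICE_TO_MONTHLY_RENT = 150  # rule-of-thumb valuation multiple used above

def city_value(units: int, monthly_rent: float) -> float:
    """Aggregate market value of the housing stock, in dollars."""
    return units * monthly_rent * PRICE_TO_MONTHLY_RENT

before = city_value(units=1_000_000, monthly_rent=1_000)  # $150 billion
after = city_value(units=950_000, monthly_rent=2_000)     # $285 billion

print(f"before catastrophe: ${before / 1e9:.0f}B")
print(f"after catastrophe:  ${after / 1e9:.0f}B")
```

The city lost 5 percent of its housing and became a worse place to live, yet its paper value nearly doubled.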

Inelasticity and tight supply explain the “how” of sudden price run-ups. As I’ve alluded to, there are three candidate explanations for increases in urban real estate values in locations such as New York and Silicon Valley:

1. Income hypothesis: prices are increasing because incomes have improved. Analysis: not sufficient. Over the past 20 years, rents and prices in the leading cities have gone up much faster than income has increased.

2. Value hypothesis: prices are increasing because the value of the real estate (independent of local income changes, discussed above) has gone up. Analysis: very unlikely. Value hypotheses can explain very local variations (new amenities, changes in views, transportation infrastructure, school district changes) but they very rarely apply to large cities– except in the context of a city’s job market (which is a subcase of the income hypothesis). Value hypotheses do not explain the broad-based increase in San Francisco or New York real estate costs.

3. Calamity hypothesis: something bad is happening that is causing the scarcity and thereby driving up the price. Analysis: this is the only remaining hypothesis. So what is happening?

Examining Manhattan, there are several factors that constrict supply. There’s regulatory corruption pushed through by entrenched owners who have nothing better to do than fight new development, and who therefore keep some of the most desirable neighborhoods capped at a ridiculous five-story height limit. Government irresponsibly allows foreign speculators into the market, when New York is long past needing a “six months and one day” law (one that makes it illegal to own substantial property without using it for at least 50.1% of the year, thus preventing Third World despots, useless princes, and oil barons from driving up prices). Then there’s the seven-decade-old legacy “rent control” system, which ties up about 1 percent of apartments (in the context of price inelasticity, a 1-percent supply compromise is a big deal) while allowing people for whom the program was never intended (most present-day rent-control beneficiaries are well-connected, upper-middle-class people, not working-class New Yorkers) to remain locked into an arrangement strictly superior to ownership (1947 rents are a lot lower than 2012 maintenance fees). Finally, there are the parentally funded douchebags working in downright silly unpaid internships while their parents drop $5,000 per month on their spoiled, underachieving kids’ “right” to the “New York experience”. All of these influences are socially negative and compromise supply, thereby driving up the price of living in New York, but I don’t think any of them represents the main calamity.

My inelasticity example illustrated a case where a city can become a less desirable place to live (due to a catastrophe that destroys housing) but experience a massive increase in the aggregate market value of its real estate, and I’ve shown several examples of this dynamic in the context of New York real estate. Trust-fund hipster losers and alimony socialites who’ve gamed their way into rent control don’t make the city better, but they drive up rents. In these cases, as in my model example, value was strictly destroyed, but prices rose and “market value” increased. History shows us that this is the norm with real estate problems. Catastrophes reduce real estate prices if they have income effects, but a catastrophe that leaves the local job market (i.e. the sources of income) intact and doesn’t inflict a broad-based reduction in value will increase the market price of real estate, for renters and buyers alike. One recent example of this is what Manhattan’s real estate industry knows as “The 9/11 Boom” of the 2000s. The short-term effect of the terrorist attack was to reduce real estate prices slightly, but its long-term effect was to increase them substantially over the following years. The intrinsic value of living in Manhattan decreased in the wake of the attack, but speculators (especially outside of New York and the U.S.) anticipated future “supply destruction” (read: more attacks) and bought in order to take advantage of such an occasion should it arise.

However, the post-9/11 real estate boom is over in New York, and wouldn’t affect Silicon Valley, Seattle, or Los Angeles at all. There has to be another cause of this thing. So what is it?

The real cause.

I’m afraid that the answer is both obvious and deeply depressing. Prices are skyrocketing in a handful of “star cities” because the rest of the country is dying. It’s becoming increasingly difficult, if not impossible, to maintain a reliable, middle-class income outside of a small handful of locations. I’ve heard people describe Michigan’s plight as “a one-state recession that has gone on for 30 years”. That “recession” has spilled beyond one region and is now the norm in the United States outside of the star cities.

As I’ve implied in previous posts, the 1930s Depression was caused by (among other things) a drop in agricultural commodity prices in the 1920s, which led to rural poverty. History tends to remember the 1920s as a prosperous boom decade, which it was in the cities. In rural America, it was a time of deprivation and misery. The poverty spread. This was a case of technological advancement (a good thing, but one that requires caution) in food production leading to price drops, poverty of increasing scope, and, eventually, a depression so severe that it became worldwide, encouraged opportunistic totalitarianism, and required government intervention in order to escape it.

What happened to food prices in the 1920s is starting to happen to almost all human labor. People whose labor can be commoditized are losing big, which means that capital (including the social and cultural varieties) is king. This shrinks the middle class, increases the importance of social connections, and admits certain varieties of extortion. One of these is rising tuition in educational programs, which can be charged on account of the connections those programs provide. Another is the high cost of housing in geographical regions where the dwindling supply of middle-class jobs is concentrated, and it’s the same dynamic: as the middle class gets smaller, anything that grants a chance at a connection to a “safe” person, company, or region becomes dramatically more expensive.

Incomes and job availability are getting better in the star cities (in contrast to the continuing recession experienced by the rest of the country), but the improvement is more than wiped out by the increasing cost of housing. The star cities are getting richer and more heavily taxed (largely by real estate markup) at the same time. Outside of the star cities, real estate costs have stopped rising (and have fallen in many places), but the local economies have done worse, canceling out the benefit. Landlords have successfully run a “heads, I win; tails, you lose” racket against the rest of the world. When their locales do well, they capture the excess income that should by rights go to the workers. When their locations do poorly, they lose, but not as much as the working people (who lose their jobs, as opposed to mere resale value). This has always been the case; landowners have always had that power (and used it). What has changed is that the disparity between “star cities” and the rest of the country has become vast, and it seems to be accelerating.

Rents and property prices in New York and Silicon Valley aren’t going up because those locales are becoming more desirable. That’s not happening. Since high housing costs lead to environmental degradation, traffic problems, crime, and cultural decline, it’s quite likely that the reverse is true. Rather, they’re going up because the rest of the country is getting worse– so much worse that a wave of former-middle-class migrants has to crowd its way in, making enormous sacrifices to do so, because good jobs anywhere else are becoming scarce. These migrations (caused by the collapse of desirability in other locations) increase demand for housing and allow rents and prices to rise. That is why the rent is too damn high.

The good news is that there is a visible (if not easy to implement) solution: revitalize locations outside of the star cities. Small business formation (without personal liability, which defeats the purpose of the limited liability corporation– a controversial but necessary innovation) needs to be made viable outside of the star cities. Right now, growth companies are feasible only in a small set of locations, because starting the most innovative businesses requires a rich client (New York, Los Angeles) or a venture capitalist (Silicon Valley); but as newer, more progressive funding mechanisms (such as Kickstarter) emerge, that may change. I hope it does. This extreme geographic concentration of wealth and innovation is far from a desirable or even sensible arrangement, and it makes our rents too damn high. We shouldn’t be satisfied with one Silicon Valley, for example. We should strive to have ten of them.