Employees at Google, Yahoo, and Amazon lose nothing if they unionize. Here’s why.

Google, Yahoo, and Amazon have one thing in common with, probably, the majority of large, ethically challenged software companies. They use stack-ranking, also known as top-grading or rank-and-yank. By top-level mandate, some pre-ordained percentage of employees must fail. A much larger contingent of employees faces the stigma of being labeled below-average or average, which not only blocks promotion but makes internal mobility difficult. Stack ranking is a nasty game that executives play against their own employees, forcing them to stab each other in the back. It ought to be ended. Sadly, software engineers do not seem to have the ability to get it abolished. They largely agree that it’s toxic, but nothing’s been done about it, and nothing will be done about it so long as most software engineers remain apolitical cowards who refuse to fight for themselves.

I’ve spent years studying the question of whether it is good or bad for software engineers in the Valley to unionize. The answer is: it depends. There are different kinds of unions, and different situations call for different kinds of collective action. In general, I think the way to go is to create guilds like Hollywood’s actors’ and writers’ guilds, which don’t interfere with meritocracy through seniority systems or compensation ceilings, but establish minimum terms of work and provide representation and support in case of unfair treatment by management. Stack ranking, binding mandatory arbitration clauses, non-competes, and the mandatory inclusion of performance reviews in a candidate’s transfer packet for internal mobility could all be abolished if unions were brought in. So what stands to be lost? A couple hundred dollars per year in dues? Compared to the regular abuse that software engineers suffer in stack-ranked companies, that has got to be the cheapest insurance plan there is.

To be clear, I’m not arguing that every software company should be unionized. I don’t think, for example, that a 7-person startup needs to bring in a union. Nor is it entirely about size. It’s about the relationship between the workers and management. The major objections to unionization come down to the claim that unions commoditize labor: work that once carried warm-fuzzy associations of creative exertion and love of the craft becomes something that people are disallowed from doing more than 10 hours per day without overtime pay. However, once the executives have decided to commoditize the workers’ labor, what’s lost in bringing in a union? At bulk levels, labor just seems to become a commodity. Perhaps that’s a sad realization to have, and those who wish it were otherwise should consider going independent or starting their own companies. Once a company sees a worker as an atom of “headcount” instead of an individual, or a piece of machinery to be “assigned” to a specific spot in the system, it’s time to call in the unions. Unions generally don’t decelerate the commoditization of labor; instead, they accept it as a fait accompli and try to make sure that the commoditization happens on fair terms for the workers. You want to play stack-ranking, divide-and-conquer, “tough culture” games against our engineers? Fine, but we’re mandating a 6-month minimum severance for those pushed out, retroactively striking all binding mandatory arbitration clauses in employment contracts should any wrongful termination suits occur, offering to pay legal expenses of exiting employees, and (while we’re at it) raising salaries to a minimum of $220,000 per year. Eat it, biscuit-cutters.

If unions come roaring into Silicon Valley, we can expect a massive fight from its established malefactors. And since they can’t win on numbers (engineers outnumber them) they will try to fight culturally, claiming that unions threaten to create an adversarial climate between engineers and management. Sadly, many young engineers are likely to fall for this line, since they tend to believe that they’re going to be management inside of 30 months. To that, I have two counterpoints. First, unions don’t necessarily create an adversarial climate; they create a negotiatory one. They give engineers a chance to fight back against bad behaviors, and also provide a way for them to negotiate terms that would be embarrassing for an individual to negotiate. For example, no engineer, while he’s negotiating a job offer, can talk about ripping out the binding mandatory arbitration clause (it signals, “I’m considering the possibility, however remote, that I might have to sue you”) or fight against over-broad IP assignments (“I plan on having side projects which won’t directly compete with you, but that may compete for my time, attention and affection”) or non-competes (“I haven’t ruled out the possibility of working for a competing firm”). Right now, the balance of power between employers and employees in Silicon Valley is so demonically horrible that simply insisting on one’s natural and legal rights makes a prospective employee, in HR terms, a “PITA” risk, and that will end the discussion right there. Instead, we need a collective organization that can strike these onerous employment terms for everyone.

Second, when a company’s management plays stack-ranking games against its employees, an adversarial climate between management and labor already exists. Bringing in a union won’t create such an environment; it will only make the one that exists more fair. You absolutely want a union when it becomes time to say, “Look, we know that you view our labor as a commodity– we get it, we’re not special snowflakes in your eyes, and we’re fine with that– so let’s talk about setting fair terms of exchange”.

Am I claiming that all of Silicon Valley should be unionized? Not quite. Perhaps an employer-independent and relatively lightweight union like Hollywood’s actors’ and writers’ guilds would be useful. With the stack-rank companies in particular, however, I think that it’s time to take the discussion even further. While I don’t support absolutely everything that people have come to associate with unions, the threat needs to be there. You want to stack-rank our engineers? Well, then we’re putting in a seniority system and making you unable to fire people without our say-so.

At Google, for example, engineers live in perennial fear of “Perf” and “the Perf Room”. (There actually is no “Perf Room”, so when a Google manager threatens to “take you into the Perf Room” or to “Perf you”, it’s strictly metaphorical. The place doesn’t actually exist, and while the terminology often gets a bit rapey– an employee assigned a sub-3.0 score is said to be “biting the pillow”– all that actually happens is that a number is inserted into a computerized form.) Perf scores, which are often hidden from the employee, follow her forever. They make internal mobility difficult, because even average scores make an engineer less desirable as a transfer candidate than a new hire– why take a 50th- or even 75th-percentile internal hire and risk angering the candidate’s current manager, when you can fill the spot with a politically unentangled external candidate? The whole process exists to deprive the employee of the right to state her own case for her capability, and to represent her performance history on her own terms. And it’s the sort of abusive behavior that will never end until the executives of the stack-ranked companies are opposed with collective action. It’s time to take them, and their shitty behaviors, into the Perf Room for good.

Anger’s paradoxical value, and the closing of the Middle Path in Silicon Valley

Anger

Anger is a strange emotion. I’ve made no effort to conceal that I have a lot of it, and toward targets so vile (such as those who have destroyed the culture of Silicon Valley and, by extension due to that region’s assigned status of leadership, the technology industry) that most would call it “justified”. Anger is, however, one of those emotions that humans prefer to ignore. It produces (in roughly increasing order of severity) foul language, outbursts, threats, retaliations and destroyed relationships, and frank physical violence. The fruits of anger are disliked, and not without reason: most of those byproducts are horrible. Most anger is, additionally, a passing and somewhat errant emotion; the target of the anger might not deserve violence, retaliation, or even insults. In fact, some anger is completely unjustified, so it’s best not to act on anger until we’ve had a chance to process and examine it. The bad kind of anger tends to be short-lived but, had humans acted on it whenever it emerged, we wouldn’t have made it this far as a species. Still, most of us agree that much anger, especially the long-lived kind that doesn’t go away, is justified in some moral sense. To be angry, three years later, at an incompetent driver is deemed silly. To be angry over a traumatic incident or a life-altering injustice is held as understandable.

However, is justified anger good? The answer, I would say, is paradoxical. For the individual, anger isn’t good. I’m not saying that the emotion should be ignored or “bottled in”. It should be acknowledged and allowed to pass. Holding on to it forever is, however, counterproductive. It’s stressful and unpleasant and sometimes harmful. As the saying often attributed to the Buddha goes, “holding on to anger is like grasping a hot coal with the intent of throwing it at someone else; you are the one who gets burned.” Anger, held too long, is a toxic and dreadful emotion that seems to be devoid of value– to the individual. This isn’t news. So what’s the issue? Why am I interested in talking about it? Because anger is extremely useful for the human collective.

Only anger, it often seems, can muster the force that is needed to overthrow evil. Let’s be honest: the problem has its act together. We aren’t going to overthrow the global corporate elite by beaming love waves at them. No one is going to liberate the technology industry from its Damaso overlords with a message of hope and joy alone. We can probably get them to vacate without forcibly removing them, but it’s not going to happen without a threatening storm headed their way. Any solution to any social problem will involve some people getting hurt, if only because the people who run the world now are willing to hurt other people, by the millions, in order to protect their positions.

Anger is, I’m afraid, the emotion that spreads most quickly throughout a group, and sometimes the only thing that can hold it together. Of course, this can be a force for good or for evil. Many of history’s most noted ragemongers were people who did harm to the world. I would, however, say that this fact supports the argument that, if good people shy away from the job of spreading indignation and resentment, then only evil people will be doing it. For me, that’s an upsetting realization.

Whether we’re talking about “yellow journalism” or bloggers or anyone else who fights for social change, spreading anger is a major part of what they do. It’s something that I do, often consciously. The reason I mention Evan Spiegel or Lucas Duplan when I discuss Silicon Valley’s cultural problems (for the uninitiated, they are two well-connected, rich, unlikeable and unqualified people who were made startup founders) is that they inspire resentment and hatred. Dry discussions of systemic problems don’t lead to social change; they lead to more dry debate, and that debate leads to more debate, but nothing ever gets done until someone “condescends” to talk to the public and get them pissed off. For that purpose, a Joffrey figure like Evan Spiegel is just much “catchier”. This is why founder-quality issues, embodied by Duplan and Spiegel, and “Google Buses” are a better vector of attack against Sand Hill Road than the deeper technical reasons (e.g. principal-agent problems that take kilowords to explain in detail) for that ecosystem’s moral failure. It’s hard to get people riled up about investor collusion, and much easier to point to this picture of Lucas Duplan.

This current incarnation of Silicon Valley needs to be pushed aside and discarded, because it’s hurting the world. The whole ecosystem– the shitty corporate cultures with the age discrimination and open-plan fetishism, the juvenile talk about “unicorns” because it’s a cute way of covering up the reality of an industry that only cares about growth for its own sake, the insane short-term greed, the utter lack of concern for ethics, the investor collusion, and the founder-quality issues– needs to be burned to the ground so we can build something new. And I have enough talent that, while I can’t change anything on my own, I can contribute. When I (unintentionally) revealed the existence of stack-ranking at Google to the public, I damaged that company’s reputation. The damage was probably not significant, relative to the daily swings in its stock price, but with enough people in the good fight, victory is possible.

Here’s what I don’t like. Clearly, anger is painful for the person experiencing it. As an individual, I would do better to let it pass. I can personally deal with the pain of it, but it leads me to question whether there is social value in disseminating it. And yet, without people like me spreading and multiplying this justified anger at the moral failure of Silicon Valley, no change will occur and evil will win. This is what makes anger paradoxical. As an individual, the prudent thing to do is to let it go. For society, moral justice demands that it spread and amplify. Even if we accept that collective anger can just as easily be a force for bad (and it can), we still have to confront the fact that if good people decline to spread and multiply anger against evil, then the sheer power of collective anger will be wielded only by evil. We need, as a countervailing force, for the good people to comprehend and direct the force of collective anger.

The Middle Path

Why do I detest Silicon Valley? I don’t live there, and I have better options than to take a pay cut in exchange for 0.03% of a post-A startup, so why does that cesspool matter to me at this point? In large part, it’s because the Bay Area wasn’t always a cesspool. It used to be run by lifelong engineers for engineers, and now it’s some shitty outpost of the mainstream business culture, and I find that devolution to be deplorable. The Valley used to be a haven for nerds (here, meaning people who value intellectual fulfillment more than maximizing their wealth and social status) and now it’s become a haven for MBA-culture rejects who go West to take advantage of the nerds. It’s a joke, it’s awful, and it’s very easy to get angry at it. But why? Why is it worth anger? Shouldn’t we divest ourselves, emotionally, and be content to let that cesspool implode?

I don’t care about Silicon Valley, meaning the Bay Area, but I do care about the future of the technology industry. Technology is just too important to the future of humanity for us to ignore it, or to surrender it to barbarians. The technology industry used to represent the Middle Path between the two undesirable options of (a) wholesale subordination to the existing elite and (b) violent revolt. It was founded by people who neither wanted to acquiesce to the Establishment nor to overthrow it with physical force. They just wanted to build cool things, to indulge their intellectual curiosities, and possibly to outperform an existing oligarchy and therefore refute its claims of meritocracy.

Unfortunately, Silicon Valley became a victim of its own success. It outperformed the Establishment and so the Establishment, rather than declining gracefully into lesser relevance, found a way to colonize it through the good-old-boy network of Bay Area venture capital. To be fair, the natives allowed themselves to be conquered. It wasn’t hard for the invaders to do, because software engineers have such a broken tribal identity and such a culture of foolish individualism that divide-and-conquer tactics worked easily. (For a modern example that illustrates how fucked we are as a tribe, consider “Agile”/Scrum, which has evolved into a system where programmers rat each other out to management for free.) Programmers are, not surprisingly, prone to a bit of cerebral narcissism, and the result of this is that they lash out with more anger at unskilled programmers and bad code than against the managerial forces (lack of interest in training, deadline culture) that created the bad programmers and awful legacy code in the first place. It’s remarkably easy for a businessman to turn a group of programmers against itself, so much so that any collective action (either a labor union, or professionalization) by programmers remains a pipe dream. The result is a culture of individualism and arrogance in which almost every programmer believes that most of his colleagues are mouth-breathing idiots (and, to be fair, most of them are severely undertrained). There’s a joke in Silicon Valley about “flat” software teams where every programmer considers himself to be the leader, but it’s not entirely a joke. In the typical venture-funded startup, the engineers each believe that they’ll have investor contact within 6 months and founder/CEO status inside of 3 years. (They wouldn’t throw down 90-hour weeks if it were otherwise.) By the time programmers are old enough to recognize how rarely that happens (and how even more rarely people actually get rich in this game, unless they were born into the contacts that put them on the VC side, or can have themselves inserted into high positions in portfolio companies, allowing diversification) they’re judged as being too old to program in the Valley. That is too convenient for those in power to be a coincidence.

Sand Hill Road needs to be taken down because it has blocked the Middle Path that used to exist in Silicon Valley, and that should exist, if not in that location, somewhere in the technology industry. The old Establishment would have its territory chipped away (harmlessly, most often, because large corporations don’t die unless they do it to themselves) by technology startups, and it was content to have this happen because, so often, the territory it lost was what it didn’t understand well enough to care about. The new Establishment, on Sand Hill Road, is harder to outperform because, if it sees you as a threat, it will fund your competitors, ruin your reputation, and render your company unable to function.

I don’t believe that Silicon Valley’s closing of the Middle Path will be permanent, and it’s best for all of us that it not be. I am obviously not in favor of subordination to the global elite. They are the enemy, and something will have to be done about, or at least around, them in order to reverse the corruption and organizational decay that they’ve inflicted on the world. On the other hand, I view violent revolution as an absolute last resort. Violence is preferable to subordination and defeat, but nonetheless it is usually the absolute worst possible way to achieve something. Disliking the extremes, I want the moderate approach: effective opposition to the enemies of progress, without the violence that so easily leads to chaos and the harm of innocents. So when the mainstream business elite enters a space (like technology) in which it does not belong, colonizes it, and thereby blocks the Middle Path, it’s a scary proposition. Of course I cannot predict the future, but I can perceive risks; and the closing of the Middle Path represents too much of a risk for us to allow it. If the Middle Path has closed in venture-funded technology in the Valley, it’s time to move on to something else.

Do I think that humanity is doomed, simply because a man-child oligarchy in one geographical area (“Silicon Valley”) has closed the Middle Path where it once existed? Of course not. Among those in the know, the VC-engorged monstrosity that now exists in the Valley has ceased to inspire, or even to lead. It seems, then, that it is time to move past it, and to figure out where to open a new Middle Path.

If getting people to do this– to recognize the importance of doing this– requires a bit of emotional appeal along a vector such as anger or resentment, I’ll be around and I know how to pull it off.

Technology is run by the wrong people

I have a confession to make: I have a strong tendency to “jump”, emotionally and intellectually, to the biggest problem that I see at a given time. I’ve tempered it with age, because it’s often counterproductive. In organizational or corporate life, solving the bigger problem, or jumping to the biggest problem that you have the ability to solve, often gets you fired. Most organizations demand that a person work on artificial small problems in a years-long evaluative period before he gets to solve the important problems, and I’ve personally never had the patience to play that game (and it is a political game; recognizing it as such has been detrimental, since it would have been harder to resent it had I not realized what it was) at all, much less to a win. The people who jump to the biggest problem are received as insubordinate and unreliable, not because they actually are unreliable, but because they reliably do something that those without vision tend both not to understand, and to dislike. There are too many negative things (whether there is truth or value in them, or not) that can be said, in the corporate theater, about a person who immediately jumps to the biggest problem– she only wants to work on the fun stuff, she’s over-focused and bad at multi-tasking, she’s pushy and obsessive, she wants the boss’s boss’s job and isn’t good at hiding it– and it’s only a matter of time before many of them are actually said.

Organizations need people like this, if they wish to survive, and they know this; but they also don’t believe that they need very many of them. Worse yet, corporate consistency mandates that the people formally trusted (i.e. those who negotiated for explicitly-declared trust in the form of job titles) be the ones who are allowed to do that sort of work. The rest, should they self-promote to a more important task than what they’ve been assigned, are considered to be breaking rank and will usually be fired. People dislike “fixers”, especially when their work is what’s being fixed. It’s probably no surprise, then, that modern organizations, over time, become full of problems that most people can see but no one has the courage to fix.

Let’s take this impulse– attack the bigger problem or, better yet, find an even bigger one– and discuss the technology industry. Let’s jump away from debates about tools and get to the big problems. What is the biggest problem with it? Tabs versus spaces? Python versus Ruby? East Coast versus West versus Midwest? Hardly. Don’t get me wrong: I enjoy debating the merits and drawbacks of various programming languages. I may not like the language Spoo as much as my favored tools, but I’d never suggest that the people promoting Spoo are anything but intelligent people with the best intentions. We may disagree, but in good faith. Except in security, discussion of bad-faith players and their activity is rare. It’s almost taboo to acknowledge that they exist. In fact, Hacker News now formally censors “negativity”, which includes the assertion or even the suggestion that there are many bad actors in the technology world, especially in Silicon Valley and even more especially at the top. But there are. There is a level of power, in Silicon Valley, at which malevolent players become more common than good people, and it’s people at that level of power who call the most important shots. If we ignore this, we’re missing the fucking point of everything.

There is room for this programming language and that one. That is a matter of taste and opinion, and I have a stance (static as much as possible) but there are people of equal or superior intellectual and moral quality who disagree with me. There is room for functional programming as well as imperative programming. Where there is no nuance (unless one is a syphilitic psychopath) is on this statement: technology, in general, is run by the wrong people. While this claim (“wrong people”) is technically subjective in the same way that color is technically subjective, we can treat it as a working fact, just as the subjectivity of color does not excuse a person running a red light under the argument that he perceived it as green. Worse, the technology industry is run by bad people, and by bad, I don’t mean that they are merely bad at their jobs; I mean that they are unethical, culturally malignant, and belong in jail.

Why is this? And what does it mean? Before answering that, it’s important to understand what kind of bad people have managed to push themselves into the top ranks of the technology industry.

Sadly, most of the people who comprise the (rising, and justified) anti-technology contingent don’t make a distinction between me and the actual Bad Guys. To them, the $140k/year engineers and the $400k/year VP/NTWTFKs (Non-Technical Who-The-Fuck-Knows) getting handed sinecures in other people’s companies by their friends on Sand Hill Road are the same crowd. They perceive classism and injustice, and they’re right, but they’re oblivious to the gap between the upper-working-class engineers who create technological value (but make few decisions) and the actually upper-class pedigree-mongers who capture said value (and make most of the decisions, often badly) and who are at risk of running society into the ground. (If you think this is an exaggeration, look at house prices in the Bay Area. If these fuckers can’t figure out how to solve that problem, then who in the hell can trust them to run anything bigger than techie cantrips?) Why do the anti-technology protestors fail to recognize their true enemies, and therefore lose an opportunity to forge an alliance with the true technologists whose interests have also been trampled by the software industry’s corporate elite? Because we, meaning the engineers and true technologists, have let them.

As I see it, the economy of the Bay Area (and, increasingly, the U.S.) has three “estates”. In the First Estate are the Sand Hill Road business people. They don’t give a damn about technology for its own sake, and they’re an offshoot of the mainstream business elite. After failing in private equity or proving themselves not to be smart enough to do statistical arbitrage, they’re sent West to manage nerds, and while they’re poor in comparison to the hedge-fund crowd, they’re paid immensely by normal-people (or even Second Estate) standards. As in the acronym “FILTH” (Failed In London, Try Hong Kong), they are colonial overseers who weren’t good enough for leadership positions in the colonizing culture (the mainstream business/MBA culture) so they were sent to California to wave their dicks in the air while blustering about “unicorns“. In the Second Estate are the technologists and engineers who actually write the code and build the products; their annual incomes tend to top out around $200,000 to $300,000– not bad at all, but not enough to buy a house in the Bay Area– and becoming a founder is (due to lack of “pedigree”, which is a code word for the massive class discrepancy between them and the VCs they need to pitch) pretty much out of the question. In the Third Estate are the people, of average means, who feel disenfranchised as they are priced out of the Bay Area. They (understandably) can’t quite empathize with Second-Estate complaints about the cost of housing and pathetic equity slices, because they actually live on “normal people” (non-programmer, no graduate degree) salaries. As class tensions have built in San Francisco, the First Estate has been exceptionally adept at diverting Third-Estate animosity toward the Second, hence the “Google Bus” controversies. This prevents the Second and Third Estates from realizing that their common enemy is the First Estate, and thereby getting together and doing something about their common problem.

This echoes a common problem in technology companies. If a tech-company CEO in France or Germany tried to institute engineer stack ranking, an effigy would be burned on his own front lawn, his vehicle would be vandalized if not destroyed, and the right thing would happen (i.e., he’d revert the decision) the next day. An admirable trait that the European proletariat has, and that the American one lacks, is an immunity to divide-and-conquer tactics. The actual enemies of the people of San Francisco are the billionaires who believe in stack ranking and the NIMBYs, not 26-year-old schlubs who spend 3 hours per day on a Google bus. Likewise, when software engineers bludgeon each other over Ruby versus Java, they’re missing the greater point. The enemy isn’t “other languages”. It’s the idiot executive who (not understanding technology himself, and taking bad advice from a young sociopath who is good at pretending to understand software) instituted a top-down one-language policy that was never needed in the first place.

Who are the right people to run technology, and why are the current people in charge wrong for the job? Answering the first question is relatively easy. What is technology? It’s the application of acquired knowledge to solve problems. What problems should we be solving? What are the really big problems? Fundamentally, I think that the greatest evil is scarcity. From the time of Gilgamesh to the mid-20th century, human life was dominated by famine, war, slavery, murder, rape and torture. Contrary to myths about “noble savages”, pre-industrial men faced about a 0.5%-per-year chance of death in violent conflict. Aberrations aside, most of the horrible traits that we attribute to “human nature” are actually attributable to human nature under scarcity. What do we know about human nature without scarcity? Honestly, very little. Even the lives of the rich, in 2015, are still dominated by the existence of scarcity (and the need to protect an existence in which it is absent). We don’t have a good idea of what “human nature” is when human life is no longer dominated either by scarcity or by the counter-measures (work, sociological ascent) taken to avoid it.
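To put that 0.5%-per-year figure in perspective, here is a back-of-the-envelope sketch (assuming, purely for illustration, that the annual risk is roughly independent across years and compounding it over a 40-year adult life):

$$ P(\text{violent death}) \;=\; 1 - (1 - 0.005)^{40} \;\approx\; 0.18 $$

That is, nearly a one-in-five chance of dying at another person’s hands, which is why “dominated by” violence is no exaggeration.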

The goal of a technologist is to make everyone rich. Obviously, that won’t happen overnight, and it has to be done in the right way. It’s better to do it with clean energy sources and lab-grown meat than with petroleum and animal death. The earth can’t afford to have people eating like Americans and able to fly thousands of miles per year until certain technological problems are solved (and I, honestly, believe that they can be solved, and aren’t terribly difficult). We have a lot of work to do, and most of us aren’t doing the right work, and it’s hard to blame the individual programmer, because there are so few jobs that enable a person to work on fundamental problems. Let’s, however, admit to a fundamental enemy: scarcity. Some might say that death is a fundamental enemy, especially in the Singularitarian crowd. I strongly disagree. Death is an unknown– I look forward to “the other side”, and if I am wrong and there is nothing on the other side, then I will not exist to be bothered by the fact– but I see no reason to despise it. Death will happen to me– even a technological singularity could only postpone it for a few billion years– and that is not a bad thing. Scarcity, on the other hand, is pretty fucking awful– far more deserving of “primal enemy” status than death. If scarcity in human life should continue indefinitely, I don’t want technological life extension. Eighty years of a mostly-charmed life in a mostly-shitty world, I can tolerate. Two hundred? Fuck that shit. If we’re not going to make major progress on scarcity in the next fifty years, I’ll be fucking glad to be making my natural exit.

Technologists (and, at this point, I’m speaking more about a mentality and ideology than a profession, because quite a large number of programmers are anti-intellectual fuckheads just as bad as the colonial officers who employ them) are humanity’s last resort in the battle against scarcity. Scarcity has been the norm, along with the moral corrosion that comes with it, for most of human history, and if we don’t kill it soon, we’ll destroy ourselves. We learned this in the first half of the 20th century. Actual scarcity was on the wane even then, because the Industrial Revolution worked; but old, tribalistic ideas– ideas from a time when scarcity was the rule– caused a series of horrendous wars and the deployment of one of the most destructive weapons ever conceived. We ought to strive to break out of such nonsense. There will always be inequalities of social status, but we ought to aim for a world in which being “poor” means being on a two-week waiting list to go to the Moon.

Who are the right people to run technology? Positive-sum players. People who want to make everyone prosperous, and to do so while reducing or eliminating environmental degradation. I hope that this is clear. There are many major moral issues in technology around privacy, safety and security, and our citizenship in the greater world. I don’t mean to make light of those. Those are important and complicated issues, and I won’t claim that I always have the right answer. Still, I think that those are ancillary to the main issue, which is that technology is not run by positive-sum players. Instead, it’s run by people who hoard social access, damage others’ careers even when there is little to gain, and play political games against each other and against the world.

To be clear, I don’t wish to identify as a capitalist or a socialist, or even as a liberal or conservative. The enemy is scarcity. We’ve seen that pure capitalism and pure socialism are undesirable and ineffective at eliminating it; but if it were otherwise, I’d welcome the solution that did so. It’s important to remember that scarcity itself is our adversary, and not some collection of ideas called an “ideology” and manufactured into an “other”. I don’t think that one necessarily needs to be a liberal or leftist in order to qualify as a technologist. This is about something different from the next election. This is about humanity and its long-term goals.

All of that said, there are people in society who prosper by creating scarcity. They keep powerful social organizations and groups closed, they deliberately concentrate power, and they excel at playing zero-sum games. And here’s the problem: while such people are possibly rarer than good-faith positive-sum players, they’re the ones who excel at organizational politics. They shift blame, take credit, and when they get into positions of power, they create artificial scarcity. Why? Because scarcity rarely galvanizes the have-nots against the haves; much more often, it creates chaos and distrust and divides the have-nots against each other, or (as in the case of San Francisco’s pointless conflict between the Second and Third Estates) pits the have-a-littles against the have-nothings.

Artificial scarcity is, in truth, all over the place in corporate life. Why do some people “get” good projects and creative freedom while others don’t? Why are many people (regardless of performance and the well-documented benefits of taking time off) limited to two or three weeks of vacation per year? Why is stack ranking, which has the effect of making decent standing in the organization a limited resource, considered morally acceptable? Why do people put emotional investment into silly status currencies like control over other people’s time? It’s easy to write these questions off as “complex” and decline to answer them, but I think that the answer’s simple. Right now, in 2015, the people who are most feared and therefore most powerful in organizational life are those who can create and manipulate the machinery of scarcity. Some of that scarcity is intrinsic. It is not an artifact of evil that salary pools and creative freedom must fall under some limit; it is the way things are. However, an alarming quantity of that scarcity is not. How often does missing a “deadline” have absolutely no real negative consequence on anything– other than annoyance to a man-struating executive who deserves full blame for inventing an unrealistic timeframe in his own mind? Very often. How many corporations would suffer any ill effect if their stack-ranking machinery were abolished? Zero, and many would find immediate cultural improvements. Artificial scarcity is all over the place because there is power to be won by creating it; and, in the corporate world, those who acquire the most power are those who learn how to navigate environments of artificial scarcity, often generating such scarcity themselves, since it solidifies their power once gained.

Who runs the technology industry? Venture capitalists. Even though many technology companies are not venture-funded, the VC-funded companies and their bosses (the VCs) set the culture, and they fund the companies that set salaries. Most of them, as I’ve discussed, are people who failed in the colonizing culture (the mainstream MBA/business world) and went West to boss nerds around. Having failed in the existing “Establishment” culture, they (somewhat unintentionally) create a new one that amplifies its worst traits, much in the way that people who are ejected from an established and cool-headed (in relative terms) criminal organization will often found a more violent one. So they took the relationship-driven anti-meritocracy for which the Harvard-MBA world is notorious, and went off and made a world (Sand Hill Road) that’s even more oligarchical, juvenile, and chauvinistic than the MBA culture it splintered off from. Worse than being zero-sum players, these are zero-sum players whose rejection by the MBA culture (not all of whose people are zero-sum players; there are some genuine good-faith positive-sum players in the business world) was often due to their lack of vision. And hence, we end up with stack ranking. Stack ranking wouldn’t exist except for the fact that many technology companies are run by “leftover” CEOs and VCs who couldn’t get leadership jobs anywhere else. And because of the long-standing climate of terrible leadership in this industry, we end up with Snapchat and Clinkle but scant funding for clean energy. We end up with a world in which most software engineers work on stupid products that don’t matter.

In 2015, we live in a time of broad-based and pervasive organizational decline. While Silicon Valley champions all that is “startup”, another way to perceive the accelerated birth-and-death cycle of organizations is that they’ve become shorter-lived and more disposable in general. Perhaps our society is reaching an organizational Hayflick limit. Perhaps the “macro-age” of our current mode of life is senescent and, therefore, the organizations that we are able to form undergo rapid “micro” aging. There is individual gain, for a few, to be had in this period of organizational decay. A world in which organizations (whether young startups or old corporate pillars) are dying at such a high rate is one where rapid ascent is more possible, especially for those who already possess inherited connections (because, while organizations themselves are much more volatile and short-lived, the people in charge don’t change very often) and can therefore position themselves as “serial entrepreneurs” or “visionary innovators” in Silicon Valley. What is being missed, far too often, is that this fetishized “startup bloom” is not so much an artifact of good-faith outperformance of the Establishment as an opportunistic reaction to a society’s increasing inability to form and maintain organizations that are worth caring about. Wall Street and Silicon Valley both saw that mainstream Corporate America was becoming inhospitable to people with serious talent. Wall Street decided to beat it on compensation; Silicon Valley amped up the delusional rhetoric about “changing the world”, the exploitation of young, male quixotry, and the willingness to use false promises (executive in 3 years! investor contact!) to scout talent. That’s where we are now. The soul of our industry is not a driving hatred of scarcity, but the impulse to exploit the quixotry of young talent. If we can’t change that, then we shouldn’t be trusted to “change the world”, because our changes will be mostly negative.

Technology must escape its colonial overseers and bring genuine technologists into leading roles. It cannot happen fast enough. In order to do so, it’s going to have to dump Sand Hill Road and the Silicon Valley economy in general. I don’t know what will replace it, but what’s in place right now is so clearly not working that nothing is lost by throwing it out wholesale.

Continuous negotiation

After reading this piece about asking for raises in software engineering, I feel compelled to share a little something about negotiation. I can’t claim to be great at it, personally. I know how it works but– I’ll be honest– negotiating can be really difficult for almost all of us, myself included. As humans, we find it hard to ask someone for a favor or consideration when the request might be received badly. We also have an aversion to direct challenge and explicit recognition of social status. It’s awkward as hell to ask, “So, what do you think of me?” Many negotiations are, fundamentally, uncomfortably close to that question. There is one thing that I’ve learned about successful workplace negotiators: they do it continuously, persistently, and creatively.

The comic-book depiction of salary negotiation is one in which the overworked, under-appreciated employee (with arms full of folders and papers representing “work”) marches into her pointy-haired boss’s office and asks for a raise: a measly 10 percent increase that she has, no doubt, earned. In a just world, she’d get it; but in the real world, she gets a line about there being “no money in the budget”. This depiction of salary negotiation gets it completely wrong. It sets it up as an episodic, all-or-nothing affair in which the request must either be immediately granted (that’s rare) or the negotiator must slink away, defeated.

Here’s why that scenario plays out so badly. Sure, she deserves a raise; but at that moment in time, the boss is presented with the unattractive proposition of paying more (or, worse yet, justifying higher payment to someone else) for the same work. He rejects it, and the employee slinks away defeated and bitter. If this scenario plays out as described, it’s often a case where she failed to recognize the continually occurring opportunities for micronegotiations.

First of all, if someone asks for a raise in good faith and is declined, that’s an opportunity to ask for something else: a better title, improved project allocation, a conference budget, and possibly the capacity to delegate undesirable work. Even if “there’s no money in the budget” is a flat-out lie, there’s nowhere to proceed on the money front– you can’t call your boss a liar and say, “I think there is” or “Have you checked?”– so you look for something else that might be non-monetary, like a better working space. Titles are a good place to start. People tend to think that they “don’t matter”, but they actually matter a great deal, as public statements about how much trust the organization has placed in a person’s competence. They’re also given away liberally when managers aren’t able to give salary increases to people who “obviously deserve” them. Don’t get me wrong: I’d take a 75% pay raise over a fancy title; but if a raise isn’t in the cards, then I’d prefer to ask for something else that I might actually get. When cash is tight, titles are cheap. As things improve, who gets first pick of the green-field new projects that emerge? The people with the strongest reputations, of which titles are an important and formal component. When cash is abundant, it usually flows to the people on or near those high-profile projects.

Many things that we do, as humans, are negotiations, often subtle. Take a status meeting (as in project status, of course). Standup is like Scrabble. Bad players focus on the 100-point words. Good players try not to open up the board. To give elaborate status updates is to focus on the 100-point words at the expense of strategic play. A terse, effective update is a much better play. If you open yourself up to a follow-on question about project status (e.g. why something took a certain length of time, or needed to be done in a certain way) then you’ve done it wrong. You put something on the board that should never have gone there. The right status (here meaning social status) stance to take when giving (project) status is: “I will tell you what I am doing, but I decide how much visibility others get into my work, because there are people who are audited and people who are implicitly trusted, and the decision has already been made that I’m in the second category… and I’m pretty sure we agree on this already, but if you disagree, we’re gonna fucking dance.” When you give a complete but terse status update, you’re saying, “I’m willing to keep you apprised of what I’m up to, because I’m proud of my work, but I refuse to justify my time, because I work too hard and I’m simply too valuable to be treated as one who has to account for his own working hours.”

Timeliness is another area of micronegotiation, and around meetings one sees a fair amount of status lateness (mixed with the “good faith” random lateness that happens to everyone). The person who shows up late to a status meeting is saying, “I have the privilege of spending less time giving status (a subordinate activity) than the rest of you”. The boss who makes the uncomfortable joke-that-isn’t about that person being habitually late is saying, “you’re asking for too much; try being the 4th-latest a couple of times”. For what it’s worth, I think that status lateness is an extremely ineffective form of micronegotiation– unless you can establish that the lateness is because you’re performing an important task. Some “power moves” enhance status capital by exploiting the human aversion to cognitive dissonance (he’s acting like an alpha, so he must be an alpha) but others spend it, and status lateness tends to be one that spends it, because lateness is just as often a sign of sloppiness as of high value. Any asshole can be late; habitual minor lateness is not the signature behavior of a true high-status person. In fact, actual high-power people, in general, are punctual and loyal and willing to do the grungiest of the grunge work for an important project or mission, but become magically unavailable for the unimportant stuff. If you’re looking to ape the signature of a high-power person (and I wouldn’t recommend doing it via status lateness; there are better ways), you shouldn’t do it by being 10 minutes late for every standup. That just looks sloppy. You do it by being early or on time, most of the time, and missing a few such meetings completely for an important reason. (“Sorry that I missed the meeting. I was busy with <something that actually matters>.”) Of course, you have to do this in a way that doesn’t offend, humiliate, or annoy the rest of the team, and it’s so hard to pull that off with status lateness that I’d suspect that anyone with the social skills to do it does not need to take negotiation advice from the likes of me.

Most negotiation theory is focused on large, episodic negotiations, as if those were the way that progress in business is made. To be sure, those episodic meetings matter quite a bit. There’s probably a good 10-40 percent of swing space (at upper levels, much more!) in the salary available to a person at a specific career juncture. However, what matters just as much is the preparation through micronegotiations. Someone with the track record of a 10-years-and-still-junior engineer isn’t in the running for $250,000/year jobs no matter how good he is at episodic salary negotiations. It’s hard to back up one’s demand for a raise if one is not perceived as a high performer, and that has as much to do with project allocation as with talent and raw exertion, and getting the best projects usually comes down to skilled micronegotiations (“hey, I’d like to help out”). In the workplace, when it comes to higher pay or status, the episodic negotiations usually come far too late– after a series of missed micronegotiation opportunities. One shouldn’t wait until one is underpaid, underappreciated, under-challenged, or overwhelmed with uninteresting work, because “the breaking point” is too late. The micronegotiations have to occur over time, and they must happen so fluently that most people aren’t even aware that the micronegotiations exist.

One upside of micronegotiation over episodic negotiation is that it’s rarely zero-sum. When you ask for a $20,000 raise directly (instead of something that doesn’t cost anything, like an improved title or more autonomy or a special project) you are marking a spot on a zero-sum spectrum, and that’s not a good strategy, because you want your negotiating partner to be, well, a partner rather than an adversary. Micronegotiations are usually not zero-sum, because they usually pertain to matters that have unequal value to the parties involved. Let’s say that you work in an open-plan office. For programmers, such offices are suboptimal, and it’s probably not wise to ask for a private one; but some seats are better than others. Noise can be tuned out; visibility from behind is humiliating, stress-inducing, and depicts a person as having low social status. If you say to your boss, “I think we agree that I have a lot of important stuff on my plate, and I want the next seat in row A that becomes available”, getting a wall at your back, you’re not marking a spot on a zero-sum spectrum, because the people who make the decision as to whether you get a row-A seat are generally not competing with you for that spot. So it’s no big deal for them to grant it to you. Instead, you’re finding a mutually beneficial solution where everyone wins: you get a better working space, and you’re no longer seen from behind (bringing a subtle improvement to the perception of your status, character, and competency, because the wall at your back depicts you as one who doesn’t fully belong in an open-plan office, but is “taking one for the team” by working in the pit) while your boss gets more output and is asked for a favor (cf. Ben Franklin) that will demand less from him than a pay raise under a budget freeze.

The problem with software engineers isn’t that they’re bad at episodic salary negotiations. No one is good at those. If you’ve let yourself become undervalued by such a substantial amount that it “comes to a head”, you’re in a position that takes a lot of social acumen to talk your way out of. It’s that they aren’t aware of the micronegotiations that are constantly happening around them. To be fair, many micronegotiations seem like the opposite: humility. When you hold the elevator for someone, you’re not self-effacingly representing your time as unimportant; instead, you’re showing that you understand the other person’s value and importance, which is a way of encouraging the other person to likewise value you. The best micronegotiators never seem to be out for themselves, but looking out for the group. It’s not “let’s get this shitty task off my plate and throw it into some dark corner of the company” but “let’s get together and discuss how to remove some recurring commitments from the team”.

What does good negotiation look like? Well, I’m at 1,700 words, and it would take another 17,000 to scratch the surface, and I’m far from being an expert on that topic. What it isn’t, most of the time, is formal and episodic. It’s continuous, and long-term-oriented, and often positive-sum. When you ask for something, whether it’s a pay raise or a better seat in the office, it’s OK to walk away without it. What you can’t leave on the table is your own status; you can leave as one who didn’t get X, but you can’t leave as a person who didn’t deserve X. If your boss can’t raise your pay, get a title bump and better projects, and thank him in advance for keeping you in mind when the budget’s more open. If a wall at your back or a private office isn’t in the cards, then get a day per week of WFH and make damn sure that it’s your most productive day. This way, even if you’re not getting exactly the X that you asked for, you’re allowing a public statement to stand that, once an X becomes available, you deserve it.

Underappreciated workers don’t need to read more about episodic negotiations and BATNA and “tactics”. They need to learn how to play the long game. Long-game negotiation advice doesn’t sell as well because, well, it takes years before results are achieved; but, I would surmise, it’s a lot more effective.

Cool vs. powerful

Early this morning, this article crossed my transom: Why I Won’t Run Another Startup. It’s short, and it’s excellent. Go read it.

It brought to mind an interesting social dynamic that, I think, is highly relevant to people trying to position themselves in an economy that is increasingly fluid, but still respects many of the old rules. In my mind, the key quote is here, and it agrees with my own personal experience in and around startups, having been on both sides of the purchasing discussion:

Every office-bound exec wants to love a startup. Like a pet. But no one wants to buy from a startup. Especially big companies. Big companies want to buy from big, stable businesses. They want to trust that you’ll still be around in a few years. And their people need to feel you’re a familiar name.

Startups are cool. Someone who is putting his reputation and years of emotional and financial investment at risk, for gold or glory, conforms to a charismatic archetype. That “cool” status might beget power– but usually not. People like scrappy underdogs, but they don’t trust them. Being “scrappy” or “lean” makes you cute, and it might inspire in others a mild desire to protect you, but you don’t have power until people want you to protect them.

Nightclubs

One of the more obvious examples of “cool versus powerful” is in an urban nightclub scene, which has its own intriguing sociology. Nightclub and party scenes are staunchly elitist and hierarchical but, at the same time, eager to flout the mainstream concept of social status. A 47-year-old corporate executive worth $75 million might be turned away at the door, while a 21-year-old male model gets in because he knows the promoter. Casinos have a similar dynamic: by design, pure randomness (except in poker and, to a degree, in blackjack) can single you out as either a gloating winner or a stupendous loser for the night. The gods of the dice are egalitarian with regard to the “real world”. People are attracted to both of these scenes because they have definitions of cool that are often contrary to those of mainstream, “square”, society.

On Saturday night at the club, old status norms are inverted or ignored. In a reversal of uncool corporate patriarchy, the young outrank the old, women outrank men, and having friends who are club promoters matters more than having friends who are hiring managers or investors. Such is “cool”. In fact, cool may be fickle, but it can make a great deal of money while it lasts. Most cool people will be poor, unable either to bottle their own lightning or to exploit others’ electricity in a useful way, but a few who open the right nightclub in the right spot will make millions. Overtly trying to make money is, by design, uncool (fitting, given that most cool people, while middle- to upper-middle-class in native socioeconomic status, have very little cash on hand due to youth and underemployment). In fact, most of the money made in the cool industry comes from uncool people who want in, e.g. investment bankers whose only hope of entry is to drop $750 for a $20 bottle of vodka (“bottle service”).

Cool rarely leads to meaningful social status, and it doesn’t last. I’m writing this at 6:30 on a Wednesday morning in Chicago; at this exact moment and place, knowing which club promoter in L.A. matters means nothing. (I’m also a 31-year-old married man. Besides, if I did care to try for cool– I wasn’t so successful when I was the right age for it– I’d tell the U.S. nightlife to fuck itself and head for Budapest’s ruin pubs; but that’s another story.) Cool rarely lasts after the age of 30, an age at which people are just starting to have actual power. And while one of the most powerful things (in terms of having a long-term effect on humanity) one can do is to contribute to posterity either as a parent or a teacher, both of those roles are decidedly uncool.

Open-plan offices

One of my least favorite office trends is that toward cramped, noisy spaces: the open-plan office. Somehow, the Wolf of Wall Street work environment became the coolest layout in the working world. It’s ridiculously ineffective: it causes people to be sicker and less productive, and while the open-plan layout is sold as being “collaborative”, it actually turns adversarial pretty quickly. It’s a recurring 9-hour economy class plane ride to nowhere, which is not exactly the best theater for human relationships or camaraderie. On an airplane, people just want their pills or booze to kick in so they can forget their physical discomfort for long enough to sleep, and they’re so cranky that even the flight attendant offering free beverages annoys them; in an office, they just want to put their headphones on and get something the fuck done.

Why is an open-plan office “cool”? Those who tend to view management in the worst possible light will say that open-plan is about surveillance, control, and ego-fulfillment for the bosses. Lackeys who trust management implicitly actually believe the nonsense about these spaces being “collaborative”. Neither is correct. The open-plan monster is actually about marketing. “Scrappy” startups have to sell themselves constantly to investors and clients. The actual getting done of work is quite subtle. Show me a quiet workplace where the median age is 45, people have offices with doors, half the staff are women, and there are mathematical scribblings on the whiteboards, and you’ve shown me a place where a lot’s getting done, but it doesn’t look productive from the “pattern matching” viewpoint of a SillyCon Valley venture capitalist. At 10:30 on a random Tuesday, all I’m going to see are people (old people! women with pictures of kids on their desks! people who leave at 5:30!) typing abstract symbols into computers. Are they writing formal verification software that will save lives… or are they playing some complicated text adventure game that happens to run in emacs and just look like Haskell code? If I’m an average VC, I won’t have a clue. Show me a typical open-plan startup office, and it immediately looks frantically busy, even if little’s getting done.

Being in an open-plan office makes you cool, but it lowers your social status. There’s no contradiction there, because coolness and power often conflict. It makes you cool because it shows that you’re young and adaptable to the startup’s ravenous appetite for attractiveness– to investors and clients. The company’s not established or trusted yet, so it needs to strike a powerful image, and if you work in a trading-floor environment (for 1/7th of what your trading counterpart is paid to endure that environment) then you’re doing your part to create that image. You’re pitching in to the startup’s overarching need to market itself; you’re a team player. (If you want to get actual work done, do it before 10:00 or after 5:00.) By accepting the otherwise degrading work situation of being visible from behind, you’re part of what makes that “scrappy underdog” an emotional favorite: the cool factor.

All of that said, people with status and power avoid visibility into many aspects of their work. Always. This shows even in physical position. Even in an “egalitarian” open-plan office, the higher-status people will, over time as seats shuffle, be less visible from behind than the peons. A newly-hired VP might face undesirable lines of sight in his first six months, but after a couple years, he’ll be in the row with a wall at his back.

One thing that I have learned is that it’s best if no one knows how hard you’re working. I average about 50 hours per week but, occasionally, I slack and put in a 3-hour day. Other times, I throw down and work 14-hour days (much of that at home). I prefer that no one know which is happening at the time. I certainly don’t want to be perceived as the hardest-working person in the office (low status) or the least hard-working (low commitment). Being “the Office X” for any X is undesirable; it’s OK to be liberal (or conservative, or Christian, or atheist, or feminist) and known for it, but you don’t want to be the Office Liberal, or the Office Conservative, or the Office Christian, or the Office Atheist, or the Office Feminist. Likewise, you never want to be the Office Slacker or the Office Workhorse. So on the rare day that I do need to slack, I up-moderate the appearance of working hard and do a couple of tasks from my secret backlog of things that look hard but only take a couple minutes; and when I am working harder than anyone else, I down-moderate that appearance so that whatever I achieve seems more effortless than it actually was, because visible sacrifice or extreme effort might make one a “team player”, but it’s a low-status move.

That said, even if my work effort were exactly socially optimal (75th percentile in a typical startup, or 50 hours per week), I would still want uncertainty about how much I’m working. Let’s say that 10 hours per day is the socially optimal work effort and I’m working exactly that. Still, if anyone else knows that I’m working exactly that much, then I utterly lose, status-wise, compared to the “wizard” who works the exact same amount but has completely avoided visibility into his work, and might be working 3 hours per day and might be working 17. Being “pinpointed”, even if you’re at the exact right spot, makes you a loser. That’s why I hate “Agile” regimes that are designed to pinpoint people. Ask around about the work effort of a high-status person (like a CEO) and, because she’s not pinpointed, people will see what they want to see. Those who value effort will perceive an all-in hard worker, while those who admire talented slackers will see her as a supremely efficient “10x” sort of person.

This is what young people generally don’t get– and that older people usually understand through experience, making them less of a “culture fit” for the more cultish companies– about “Agile” and open-plan offices and violent transparency. Allowing extreme visibility into your work, as the “Agile” fad that is destroying software engineering demands, makes you cool. It makes you well-liked and affable. However, it disempowers you, even if your work commitment is exactly the socially optimal amount. It makes you a pretty little boy (or girl); not a man or woman. It makes you “a culture fit” but never a culture maker.

When you let people know how hard (or how little) you work, you’re giving away critical information and getting nothing in return. How little or how much you work can always be used against you: if you visibly work hard, people might see your efforts as unsustainable; they might distrust you on the suspicion that you have ulterior motives, like a rogue trader who never takes vacation; they might start tossing you undesirable grunt work, assuming you’ll do it with minimal complaint; or they might think that you’re incompetent and have to work long hours to make up for your lack of natural ability. If you’re smart, you keep that information close to your chest. Just as your managers and colleagues should know nothing about your sex life– whether you’re having a lot of sex, or none, or an average amount– they should know nothing about how many hours you’re actually working, whether you’re the biggest slacker or the hardest worker or right in the middle.

The most powerful statements that a person makes are what she gives, expecting nothing in return. It is not always a low-status move to give something and ask for nothing back. Giving people no-strings gifts that help them and don’t hurt you is not just ethically good; it also improves your status by showing that you have good judgment. Giving people gifts that don’t help them, but that hurt you, either marks you as a supplicant or shows that you have terrible judgment. No one gains anything meaningful when you provide Agile-style micro-visibility into your work– executives don’t make better decisions, the team doesn’t gel any better– but you put yourself at unnecessary political risk. You’re hurting yourself “for the team”, but the team doesn’t actually gain anything other than information it didn’t ask for and can’t use (unless someone intends to use it politically, and possibly against you). By doing this, you mark yourself as the over-eager sort of person who unintentionally generates political messes.

The open-plan office is cool but lowers one’s status. That said, cubicles are probably worse: low status and uncool. Still, I’d rather have a private office: uncool and middle-aged, but high in status. Private space means that your work actually matters.

“I don’t care what other people think about me”

One of my favorite examples of the divergence between what is cool and what is powerful is the statement, “I don’t care what other people think about me”. It’s usually a dishonest statement. Why would anyone who means it say it? It’s also a cool statement. Cool people don’t care (or, more accurately, don’t seem to care) what is thought about them. However, it’s disempowering. Let’s transform it into something equivalent: “I don’t care about my reputation”. That’s not so much a “cool” statement as a reckless one. Reputation has a phenomenal impact on a person’s ability to be effective, and “I don’t care if I’m effective” is a loser’s statement. And yet, reputation is, quite exactly, what others think about a person. So why is one equivalent statement cool, and the other reckless?

Usually, people who say, “I don’t care what you think about me” are saying one of two things. The first is a fuck-you, either to the person or to the masses. Being cool is somewhat democratic; it’s about whether you are popular, seen as attractive, or otherwise beloved by the masses. Appealing to power is not democratic; most people’s votes actually don’t count for much. (Of course, if you brazenly flip off the masses, you might offend many people who do matter, so it’s not advisable in most circumstances.) The 24-year-olds in the open-plan office who play foosball from noon till 9:00 can decide if you’re cool, but they have no say in what you’re paid, how your work is evaluated, or whether you’re promoted or fired. It’s better to have them like you than to be disliked by them, but they’re not the ones making decisions that matter. So, a person who says, “I don’t care what you think about me” is often saying, “your vote doesn’t matter”. That’s a bit of a stupid statement, because even other prole-kickers don’t like the brazen prole-kickers.

The second meaning of “I don’t care what you think about me” is “I don’t care if you like me”. That’s fundamentally different. Personally, I do care about what people think of me. Reputation is a far more powerful factor than individual capability in one’s ability to be effective in anything involving other humans. A reputation for competence is crucial. However, I don’t really care that much about being liked. I don’t want to be hated, but there’s really no difference between being mildly disliked by someone who’d never back me in a tight spot and being mildly liked by a person who’d never back me in a tight spot. It’s all the same, along that stretch: don’t make this person an enemy; don’t trust this person as a friend.

Machiavelli was half-right and half-wrong with “It is better to be feared than loved.” It is not very valuable to be vacuously “loved”, as “scrappy startups” often are. His argument was that beloved princes are often betrayed– and we see that, all the time, in Silicon Valley– whereas feared princes are less likely to be maltreated. This may apply to Renaissance politics, that period being just as barbaric as (if not more so than) the medieval era before it; but I don’t think that it applies to modern corporate politics. Being loved isn’t valuable, but being feared is detrimental as well. You don’t get what you want through fear unless what you want is to be avoided and friendless.

It is better to be considered competent than to be feared or loved. Competent at what? That, itself, is an interesting question. Take my notes, above, on why it is undesirable to provide visibility into how hard you work. If you’re a known slacker who, coasting on natural ability or acquired expertise, gets the job done and does it well, you’ve proven your competence at the job, but you’ve shown social incompetence, by definition, because people know that you’re working less hard than the rest of the team. Even if no one resents you for it, the fact that people have this information over you lowers your status. Likewise, if you’re known as a reliable hard worker, you’ve shown competence at self-control and focus; but, yet again, the fact that people know that you work longer hours than everyone else shows a bit of social incompetence. The optimal image, in terms of where you are on the piker-versus-workhorse continuum, is to be high enough in status that others’ image of you is exactly what you want it to be. I would say, then, that one wants to be seen as competent at life. It is not enough to be competent only at the job; that keeps you from getting fired, but it won’t get you promoted.

Of course, the idea that there’s such a thing as “competent at life” is ridiculous. I’m highly competent at most things that I do, but if I somehow got into professional football, I’d be one of the worst incompetents ever seen. “Competent at life” is an image, not a hard reality. There probably is no such thing, because for anyone there is a context that would humiliate that person (for me, professional football). That said, there are people who have the self-awareness and social acumen to make themselves most visible in contexts where they are most competent (and to have their moments of incompetence, such as when learning a new skill, in private), and there are others who don’t. It’s best to be in the former group and thereby create the image (since there is no such reality) of universal competence.

It is better to be thought competent than to be loved or to be feared. If you are beloved but you are not respected and you are not trusted to be competent, you can be tossed aside in favor of someone who is prettier or more charismatic or younger or cooler or “scrappier” and more of an underdog; and, over time, the probability of that happening approaches one. People will feel a mild desire to protect you, but no one will come to you for protection. This is of minimal use, because the amount of emotional energy that powerful people have to expend in the protection of others is usually low; the mentor/protege dynamic of cinema is extremely rare in the real world; most people with actual power were self-mentoring. However, if you are feared, that doesn’t necessarily mean that you’ll be respected or seen as capable. Plenty of people are feared because they’re catastrophically incompetent. You’re much more likely to be avoided, isolated, and miserable than to get your way through fear. Furthermore, it’s often necessary that blatant adversaries (i.e. people who might damage your reputation or career to bolster their own, or to settle a score) be intimidated, but you never want them to be frightened. An intimidated adversary declines to fight you, which is what you want; a frightened or humiliated one might do any number of things, most of which are bad for all parties.

Cool can disempower

It is not always undesirable to be cool or popular. Depending on one’s aims, it might be useful. Very young people are almost never powerful, and will have more fun if they’re seen as cool than if not. When you’re 17, teachers and parents and admissions officers (all uncool) have the power, so there’s a side game that’s sexier and more accessible. When you’re 23, being “cool” can get you a following and venture funding, and turn you from yet another app developer into a “founder” overnight. There is, however, a point (probably in the late 20s) at which cool becomes a childish thing to put away. If you work well in a Scrum environment, that might make you “cool” in light of current IT fads, but it ultimately shows that you’ve excelled at subordination, which does not lend you an aura of power. (“Agile: How to Be a 10X Junior Developer.”)

I am, of course, not saying that being likeable or cool is, ever, a bad thing. All else considered, it’s better to have those qualities than not. They just aren’t very useful. They aren’t the same thing as status or power, and sometimes one must be chosen over the other. Open-plan culture and the startup world fetishize coolness and distract people from the uncool but important games that they’ll have to play (and get good at) in order to become adults. Ultimately, having a reputation for professional competence and being able to afford nice vacations is just more important than being considered “cool” by people who won’t remember one’s name in 10 years. At some point, you realize that it’s more important to have a career that’ll enable you to send your kids to top schools than to win the approval of people who are just a year and a half out of those schools. The “sexiness” of the startup cult seems to derive from those who haven’t yet learned this.

Software’s management problem

Yesterday, I posted a list of the failings of Scrum and the “Agile” fad, and the reviews have been mixed. To be honest, I find the topic of Agile rather boring. I recoil when I encounter it, because it saddles engineers with a bunch of nonsense that has nothing to do with computer science or software engineering, but the more central topic is an industry that has become really bad at management. “User stories” are a symptom, but the root problem is much deeper.

It’s easy to complain about incompetent managers or “management culture” and make fun of foolish executives when their egos cause them to flush millions of dollars in value down the toilet, but that’s fundamentally an immature person’s game. It’s much more fruitful to look into the soul of a craft or an industry, such as computer programming, beset with open-plan offices and user stories and ask, “How the fuck did this shit come into being?” It didn’t happen in a day, and it wasn’t by accident.

So why is software management typically so bad? What about our industry causes it to be poorly managed? And what can we learn from it, in order to do better?

“Everyone hates” middle management, but it’s important

Middle managers take a lot of flak from above and below, and the stock image of a middle manager isn’t a pleasant one. Whether it’s the horrendous Bill Lumbergh of Office Space, the bumbling Michael Scott of the U.S. version of The Office, or his nastier U.K. counterpart, David Brent, the image of middle management is a deeply negative one: a petty tyrant without vision, or an incompetent lackey with the ruthlessness, but not the charisma, of an executive. This, I think, exists because of a perverse need for the low to identify with the high (royalism) through a shared contempt for the “bourgeois” middle. Often, middle managers get the brunt of the negativity, and even the blame for terrible decisions made by executives. Some dickhead higher-up decides to impose stack-ranking, but it’s middle managers who get stuck having to fire people, and who end up being the most hated people in the whole row. It’s much easier to get the low to hate the one-rank-up less-low than to overcome their desire to identify with the high.

In truth, the executive/manager distinction is something that upper-tier professional managers (i.e. “executives”) invented for their own benefit, as a way of differentiating themselves from their lower-tier counterparts. Ultimately, the job title of manager isn’t very sexy. Traditionally, a manager is someone who makes decisions pertaining to an asset that someone else owns: a financial manager allocates a wealthy person’s funds, an actor’s manager is a subordinate who manages his reputation, and a corporate manager oversees the deployment of the company’s labor and capital. Because managers (or, to use an icier term, “handlers”) make decisions over someone else’s assets, they’re often distrusted; the bad ones do a lot of harm to the owners of those assets. A few are unethical, using their superior knowledge of what is being managed to further their own aims rather than the interests of the owners of the resources. Other managers are abandoned when politics turns against them. At any rate, the manager of a resource is officially subordinate to the person owning that resource, and those who choose to be insubordinate are (often rightly) viewed as unethical or even as crooks.

Upper-tier professional managers began to identify themselves as executives in order to get away from the image of a subordinate, claiming a special knowledge of how to run businesses and inventing demand for it. When the manager/executive distinction formed, and to a degree even now, it wasn’t intended to be one of hierarchical rank or pay grade, but of social status. Within a group, social status gives a person the right to define how he or she is evaluated. (In fact, one of my issues with Scrum is that, while it attempts to equalize, it does so by imposing the low-status treatment– frequent requests for estimates, mandatory visibility into one’s work progress, low allowance for self-directed work, emphasis on measurability over quality– on the entire team.) A manager is responsible for putting a defined set of resources (including, and most often in the corporate setting, people) to defined tasks. It’s measurable, and a measurable job is almost always of less status than an intangible one. The job of an executive is… well, unless you’re within that high-status group yourself, you wouldn’t understand it.

Managers have hiring and firing authority but don’t get to decide how they, themselves, are evaluated. Executives, in general, do have that freedom, because their jobs aren’t as rigidly defined. While an executive will often have people (a mix of managers, assistants, and possibly other executives) reporting to him, his job isn’t to impose certain rules or processes over those people. Rather, those people are provided to assist him, not as a “company-owned resource” that he must formally manage, but toward whatever assignment he has devised for himself. Executives can fail just as they may succeed, but they’re afforded the luxury of succeeding or failing on terms that they have set. That’s the perk (and, some would argue, the definition) of being an executive.

With all of this said, being a middle manager (i.e. a manager who does not have the social status of an executive) is a decidedly unsexy job. It’s a description by exclusion: it means that a person has the responsibility of organizing other people (and is therefore exposed to any risk in their performance, in addition to her own fluctuation) but not the social status or true power that would allow her to define her own job and process of evaluation. The fact that middle managers take bumps from below and above shouldn’t be surprising. Executives are only accountable for the upkeep of their own social status in order to remain in the in-crowd, and workers are only accountable for their own performance, but managers are accountable for the performance of many people including themselves. An executive can toss blame for a fuck-up (his or someone else’s) to someone else in the organization, but a manager is stuck with responsibility for her own fuck-ups and those that occur below her (in addition to any, from above, that are thrown onto her). In many organizations, middle managers are forced into being the hardest workers, having to clean up messes made by the minimum-effort players below them and the self-interested, aggressive, and often narcissistic power players above them.

The software industry has, over the decades, de-emphasized middle management. Recognizing it as a job that few people want, the industry has factored it out into roles like the “product manager”, who may or may not have reports; the “software architect” (an important role but a dangerous title); and, in some cases, ill-advised pseudo-managerial positions like “scrotum master”. To a large degree, this change has been troubling, because the disappearance of middle-management capability within software engineering has made a mess of the industry. Jobs that were traditionally the province of middle management, such as creating an inclusive culture rather than a “brogrammer” culture based on AMOGing, go undone. Typically, no one is given the authority to do them, and doing that kind of cultural or managerial work on one’s own initiative (i.e. without formal authority) is extremely dangerous, so people avoid it.

Against “flat hierarchy”

It may surprise people that, while I champion open allocation, I’m against so-called “flat hierarchy”. The two concepts, I think, are orthogonal, even if often linked. Open allocation is the idea that programmers (and, perhaps, creative workers in general) ought to be rewarded and encouraged to find more profitable uses of their time and energy. While “engineers get to work on whatever they want” isn’t an effective management strategy, I support removing authoritarian obstacles (e.g. headcount limitations, rigid job descriptions, time-in-grade mandates like Google’s “18-month rule”) that prevent them from taking initiative and enhancing their value to the company by working on something more important than stated executive priorities. It’s not that I think engineers should be able to work on whatever they want; only that they should have the right to allocate their time to existing corporate needs without being required to appeal to a specific person first. Open allocation is about equality of opportunity (i.e. you won’t be told that you can’t work on X because of bullshit headcount limitations that are wielded against the politically less empowered) rather than anarchy. What I don’t think can work is “flat hierarchy” or “no middle managers”. It goes against human nature. While persistent hierarchies of people can be (and often are) toxic, we think in hierarchical terms. We group like things with like, we form taxonomies, and we understand complex systems by decomposing them into simpler ones… and that’s hierarchy. Once there is a certain amount of complexity in anything, humans will demand or impose a hierarchical model over it. This creates a need for people with the power and the social skills to ensure that the conceptual hierarchy is sound, that any hierarchical relationships among people are congruent with it, and that those relationships do not outlive their need without two-party consent to their continuance.

Sociologically, this means that most companies with “flat hierarchy” end up with an emergent hierarchy in spite of themselves. Plenty of self-fashioned benevolent executives wish to see a flat hierarchy below them, because it saves them from the onerous task of choosing (or asking the group to choose) formally titled middle managers, who might prove themselves untrustworthy or dangerous once given power. To her credit, the “benevolent executive” might listen equally to the “flat” array of people below her when there are four or six or possibly even fifteen. At fifty people, though, it’s almost certain that some people have a shorter, hotter line to her and her hiring, firing, allocating, and promoting authority. This means, in effect, that they’re now bosses. This is problematic, because the de facto middle managers who emerge, having no formal title or stability in position, have to compete with the rest of the group to hold status. A formally entitled manager, at least on paper, isn’t supposed to compete with his subordinates, because he’s evaluated on group achievement rather than personal glory. An informal manager, held in that position by a perception (not always reality!) of superior individual performance, is required to use that informal power to maintain said superiority, to the detriment of the less powerful.
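
One way to make the scaling problem concrete (this is my own back-of-the-envelope illustration, not a claim from the sociology literature): the number of pairwise relationships in a “flat” group grows quadratically with head count, so listening to everyone equally becomes arithmetically implausible well before fifty people. A minimal sketch, using the head counts from the paragraph above:

```python
# Toy arithmetic: pairwise relationships in a "flat" group of n people.
# The head counts are the ones mentioned in the paragraph above.

def pairwise_channels(n: int) -> int:
    """Number of distinct person-to-person relationships among n people."""
    return n * (n - 1) // 2

for n in [4, 6, 15, 50]:
    print(f"{n:>3} people -> {pairwise_channels(n):>4} pairwise relationships")
```

At 15 people there are 105 relationships to track; at 50 there are 1,225. No executive attends to all of them, so a shorter, hotter line for a favored few is close to a mathematical inevitability.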

Furthermore, as I’ve already addressed, there are jobs that only middle managers will do, because either no one wants to do them, or because it is dangerous to do them without formal authority. Conflict resolution is a big one, but there are subtler cultural jobs that emerge. Who tells the young “rock star” with a horrid personality that he needs to stop calling co-workers “pussies”, because even if the young men in his group venerate him and don’t mind him, the women who overhear it find it offensive? (The “benevolent despot” CEO is likely not around when this guy acts up.) Who decides which technical disputes (e.g. Haskell vs. Java) are important and which (tabs vs. spaces) are a waste of time? Most importantly, who mentors incoming talent and informs people of the existing political landscape (as one always exists) in order to prevent people from needlessly damaging their careers and reputations? Software engineers don’t like middle management because they don’t see what middle managers do when they do their jobs well, which is to remove politics. They’d rather pretend that they work in “politics-free” (or “meritocratic”) environments. This means that they are oblivious, because even when organizations run well, politics exists.

As a final side note on this topic, the term “political” (as a negative) is one I find irritating. When someone loses his driver’s license and is fined because he was driving drunk, that’s political, even if it’s what should happen. It’s an exercise of state power for group benefit, at individual expense. It’s politics working well. Complaints about office “politics”, or about past decisions being “political”, are an indirect way of saying that the decisions or environment are unfair and corrupt. I would prefer that people use those words to describe bad behavior. Don’t say that the decision was “political”; say that it was wrong and unfair.

This common conflation– of all forms of “getting political” with ill-intended political behaviors– causes harm to those who are political toward beneficial ends. One of the most time-honored ways for the elite to exercise control over the middle class is to encourage in them a general aversion to “being political”. Thus you hear statements like, “I support equality for women, but I’d never become feminist or get political about it” from well-intended, if somewhat oblivious, middle-class professionals. The corporate upper class can’t say what it means– “We don’t want unions, and we find even professional guilds irritating”– so it says, “Don’t get political”, which sounds enough like the middle-middle-class “Don’t discuss politics or religion at the dinner table” to get a pass. This aversion to “being political” has become so ingrained in software engineering culture that we’ve lost sight of our need for people who are skilled at navigating and mitigating the politics that emerges naturally in groups of people. As a result, we’re becoming one of the most sexist, ageist, and classist industries in the white-collar world. Moreover, we’re still as “managed” as we would be in a more traditional environment. We still face business-driven engineering (Agile being an attempt to “patch” business-driven engineering, which is intractably broken), low social status, and widespread incompetence in management. The fact that there are well-studied systemic reasons for technology executives to be, on average, low-quality people doesn’t help either.

Between 1997 and 2015, we’ve seen a lack of desire to fix these problems, because technology has been such a high-margin industry that it’s been able to cover up egregious mismanagement. You don’t see open-plan offices and Scrum in high-profit companies because those practices work; you see them because, if you learned how to make great phones 10 years ago or a leading search engine 18 years ago, you can have terrible management practices and still be so profitable that open-plan offices won’t outright kill you… for a while. I prefer to see the proliferation of mismanagement in software as a positive sign. If our industry is this profitable despite having emasculated the concept of middle management, and further in spite of having drooling, non-technical morons (“failed in private equity, try bossing nerds around”) leading its upper ranks, then what might it be able to do if it were properly run? I’ll answer this: a lot more.

Protect, direct, and eject

So what makes a good manager? To make it clear, the notions of being an executive and of being a manager are orthogonal. There are managers, there are executives, and there are managers of executives. The principles that I’m getting into here apply both inside and outside of the executive in-crowd.

Hence we can state the core job of any manager: protect, direct, and eject. The best subordinates will need to spread their wings in order to remain engaged in the work, but they need to be protected from the political minefields that overperformers often unknowingly enter, and they need to be insulated from executives– in particular, given guidance about which higher-ups are friendly and which ones are toxic malefactors on power trips. With the strongest subordinates, it’s almost a guarantee that they’ll want to take on bigger challenges, and the manager’s job is to protect them politically while they do so. The middling subordinates, who tend to show less initiative and confidence, need to be directed: assigned useful work in order to earn their keep. Finally, if there are negative-impact members of the team, then, as a last resort and only if they cannot be improved, they must be removed from the team (“eject”). There aren’t static percentages that apply to these three jobs, and people can move from one category into another. Ideally, a manager would seek to reform low performers before ejecting them, and “direct” the middling performers toward work that improves their skills and engages them more, thus bringing them into the “protect” group. In the ideal world, the manager’s job would be 100% “protect”, because I don’t believe that people are intrinsically disengaged and mediocre (i.e. in need of direction). In the real world and on an average team, it’s probably 35% protect, 63% direct, and 2% eject.

Which of these three jobs do managers prefer? Where is their bias? Is it toward directing or protecting? I may be going out on a limb here, but I think that almost no one enjoys the “eject” job. It’s horrible to have to fire someone. It deserves to be done only as a last resort, and while some bemoan that the decision to fire someone is often put off, I prefer that to firing (especially if one cannot afford a generous severance) being taken lightly. Where I think there is confusion about the manager’s role is between the other two jobs: direct versus protect. People who excel at one of these jobs tend to be bad at the other. Mediocre managers tend to manage up and toward the short term, which favors the “direct” job: get the executives what they want, quickly and with no hiccups. Good managers tend to favor the long term and recognize the value of rapport with their best subordinates, so they’re willing to take on the “protect” job: expending effort and political capital to protect the good.

It’s probably not surprising that, over time, the most talented managers end up with the most talented subordinates, and vice versa. In the “protect, direct, eject” framework, this makes sense. The best managers generally want to be protectors and mentors; they get their pick of people reporting to them, and they get people who don’t need to be told what to do so much as protected from the unintended consequences of over-performance. Mediocre managers tend to be “direct”-heavy, and end up with people who need to be directed. Finally, the worst managers tend to be “ejectors” (either because they lack the political clout to keep their employees’ jobs, or because they toss blame for their own mistakes, or even because they enjoy firing people) and would be predicted to end up with the worst subordinates, where their job seems to be implementing terminations (a job that no one wants to do). This seems like an efficient arrangement, though. Shouldn’t talented subordinates be mapped to protectors, and mediocre ones to directors? So what goes wrong?

The first issue is the lack of self-correction. Every system that must evaluate people makes errors, especially if it does so early on (as in the case of most companies, which assign managers on the first day). Moreover, people change over time. I don’t believe that there are people who are inflexibly of the “needs to be directed” or “needs to be ejected” type; people are mostly context, and impressively capable at improving themselves given the right opportunity and encouragement. Managers who are oriented toward directing (i.e. they see their job as telling people what to do) are likely to end up in conflict with strong, independent subordinates. That doesn’t end well. It also goes poorly when capable people are placed under ejectors, as can happen early in their careers. This is what’s behind the Welch Effect: the people most likely to be fired under stack ranking are junior members of underperforming teams (i.e. teams run by ejectors) and it is pathological because, being junior, they typically had the least to do with the team’s underperformance; they’re essentially fired at random.

Reading my assessment, it probably seems obvious that most people are going to consider themselves (as reports into a manager) as being in the “protect” category. Few will admit that they belong in the “direct” category as if it were a native trait of theirs, and I agree that it’s not a native trait. More often, it comes down to relative priorities. Some people want to take on bigger challenges and need the political support (and protection) that will enable them to do so. Others are happy being told what to do, so long as they can go home at 5:00 and their job duties don’t change too frequently or in an adverse way. Not everyone values autonomy, and that’s OK. There’s nothing wrong with either approach to work, and those who prioritize comfort over achievement (which is completely reasonable, especially in an environment where achievement won’t be rewarded commensurately with the compromise of comfort) are often valuable people to have, so long as they’re properly directed. It’s not that such people are inflexibly in a “mediocre worker” category that requires them to be directed more than protected. It’s that their needs and capabilities at a certain time put them there, although a good manager will try to direct them, if they wish to go that way, up to a higher level of competence; i.e. into the “protect” group.

There is, however, a numerical mismatch between the subordinates who are better off protected than directed, and the inclinations of middle managers as a category: there are more talented subordinates (the “protect” category) than managers who view themselves primarily as protectors, and fewer mediocre subordinates (the “direct” category) than managers inclined to direct. Because managers are often rewarded for managing up, and because most corporate executives are in positions with no accountability, those who are picked for the management track are often those inclined to direct rather than those who would protect. This, I think, is why middle management gets such a bad name: it’s associated with those who value control over achievement. As I recounted in the last essay, there’s a selection process that favors negative traits. Middle management is often defective because the traits that make a person good at managing up are a readiness to control others and to furtively oppose the interests of one’s subordinates. This results in a category of people who are unduly inclined to distrust (and “direct”) while offering little or none of the protection or opportunity that would enable them to manage high performers. In fact, to many of them, the idea that they should protect their subordinates is a foreign concept: as they see it, their subordinates exist to serve them. Of course, it’s not just the incompetence of many in the role that leads to middle management’s negative reputation. As I discussed, front-line managers are often “fall guys” for executive malfeasance and incompetence, as their lack of executive-level social status makes them easy to scapegoat. People’s desire to identify with power (which middle managers often don’t have, while the executives do) leaves them more than ready to dislike their immediate bosses over failures that are actually the fault of higher-ranking people.

Scrum and software management

The software industry has been trying to disintermediate middle management for decades, to mostly negative results. I’ll readily agree that middle management is often a weak link in organizations, and that the quality of people tapped for it is sometimes low (but not as low as that of the people who fail out of private equity and into investor- or executive-level positions in tech). Even still, middle management is often necessary. The job is important. Someone has to protect new talent from the sharp knives of executives and other managers, direct the middling performers so the company functions, and eject the rare person whose behavior is so toxic as to threaten the functioning of the organization. These jobs can’t be delegated to a “self-managing” (or, just as often, emergently managed) group. Protecting is a job that no one in a “flat” organization has the authority to take on, except for the executives (you know, the people who keep the hierarchy flat and sign the paychecks) who are often too far removed from the bottom to do it. Directing, on the other hand, becomes a job that many people try to do; without clear leaders of the group, you get many who think they’re the leaders and will try to tell others what to do. The long-term result of this is incoherence and tension, until the pushiest people gain credibility (usually by taking credit for others’ work), win favor from above, and become de facto managers. Finally, ejecting is a job that either no one does (because it’s undesirable, except to psychopaths) or that attracts the worst kinds of people, and is then done in a toxic way.

Where do Scrum and Agile fit into all of this? Naively, they appear to be a mechanism to remove middle managers from the equation, or to push “people managers” off to the side. I’ll certainly agree that there’s a noxious, deep conflict of interest between people management and project (or product) management, because what is best for those being “people-managed” might be bad for the project (e.g. a talented subordinate transferring to a team more in line with his interests is something that a middle manager, held accountable for delivery on that team’s project rather than for excellent people-managing, might be averse to letting happen). Many middle managers abuse their power, as a single point of failure (SPOF) in a report’s career at the company, in order to get project-related goals done through intimidation (because, typically, they’re evaluated on visible deliverables rather than effective people-managing), and I think that has led a couple generations of software engineers to conclude that most middle managers are worthless parasites who only manage up. The problem, however, has more to do with how those managers are evaluated (i.e. their need to “manage up”), which forces them to favor directing over protecting.

For all of middle management’s flaws, when you replace it with “Agile” or Scrum and the illusion of “flat hierarchy”, you rarely get an improvement. Instead, you get emergent middle management and unexpected, unstable power centers. Agile and Scrum ignore, outright, the goal of protecting subordinates (or, sorry, “Scrum team members”). In fact, the often-stated point of the “user stories” and violent transparency and superficial accountability is to make it impossible for middle managers to protect their reports. Agile is, foremost, about directing and ejecting, but it replaces the single higher-up tyrant with a hundred mildly tyrannical neighbors. The formal police state vanishes in favor of a fuck-your-buddy system where, instead of having one career-SPOF in a middle manager, you have tens of career SPOFs. The actual effect of this is to delegate the job that ruined middle management– that of “managing up” into executives– to the whole team.

The big problem with this change is that many executives are narcissistic assholes: probably 30 to 50 percent do more harm than good. It comes with the job. As I mentioned earlier, managers have a job (to manage) while executives are effectively unaccountable, because an executive’s real job is to maintain the social status that places them within the corporation’s nobility. So, companies get a mix of good and bad executives and are rarely very swift in removing the bad ones. A good manager with institutional knowledge will know which executives are friendly and which ones are toxic and best avoided. She’ll steer her reports toward interactions with the good executives, and thereby improve her ability and the ability of her team to get things done, while shooing them away from the bad executives who’ll meddle or do worse. Take away that middle manager and replace her with over-eager, politically clueless young developers on a “Scrum team”, and you get chaos. You get no one looking out for that team, because no one can look out for that team. From above, the team is exposed, not protected.

It’s superficially appealing to throw middle managers overboard. It’s a tough job and a thankless one, and there are a huge number of people out there doing it badly. The whole “startup” craze (in which young people have been led to overvalue jobs at companies whose middling tier is comprised of “founders” managing up into VCs, rather than traditional middle managers) is based on this appeal: why work at Goldman Sachs and report to “some faceless middle manager” when you can join a startup with a “flat hierarchy” and report directly to the CTO (…and, in truth, have your career dictated by a 27-year-old product manager whom the CTO has known for 8 years and whom he trusts more than he will ever trust you)? But I think the reflexive rejection of middle management needs to be tossed aside; we should ask why it exists, why it is important, and why it deserves so much more respect than it is given when it is done well. We haven’t figured out a way to replace it, and the odds are that we won’t do so in this generation. What we do know is that these asinine “methodologies” that trick a team into middle-managing itself (poorly) have got to go.

Conclusion

Middle management is, perhaps surprisingly, both the solution and the problem. It exists. It always will exist. Executives are disinclined to “protect the good” within the organization and, in any case, too removed from day-to-day problems to evaluate individual people. Even at a modest organizational size, the necessary jobs of management– protect, direct, and (as a last resort) eject– cannot be handled by a single person or an executive in-crowd. Pretending that management is an outmoded job is something we do at our peril. Yes, we must acknowledge that it has mostly been done poorly in software for the past 20 years (and possibly for much longer). However, rather than declaring the whole concept obsolete, we have to acknowledge that it is a necessary function– and figure out how to get good at it.

Why “Agile” and especially Scrum are terrible

Follow-up post: here

It’s probably not a secret that I dislike the “Agile” fad that has infested programming. One of the worst varieties of it, Scrum, is a nightmare that I’ve seen actually kill companies. By “kill” I don’t mean “the culture wasn’t as good afterward”; I mean a drop in the stock’s value of more than 85 percent. This shit is toxic and it needs to die yesterday. For those unfamiliar, let’s first define our terms. Then I’ll get into why this stuff is terrible and often detrimental to actual agility. Then I’ll discuss a single, temporary use case under which “Agile” development actually is a good idea, and from there explain why it is so harmful as a permanent arrangement.

So what is Agile?

The “Agile” fad grew up in web consulting, where it had a certain amount of value: when dealing with finicky clients who don’t know what they want, one typically has to choose one of two options. The first is to manage the client: get expectations set, charge appropriately for rework, and maintain a relationship of equality rather than submission. The second is to accept client misbehavior (as, say, many graphic designers must) and orient one’s work flow around client-side dysfunction. Programmers tend not to be good at the first option– managing the client– because it demands too much in the way of social acumen, and the second is appealing to a decision-maker who’s recently been promoted to management and won’t have to do any of the actual work.

There’s a large spectrum of work under the name of “consulting”. There are great consulting firms, and there are body shops that take on the lowest kind of work. Companies tend to give two types of work to consultancies: the highest-end stuff that they might not have the right people for, and the low-end dreck work that would be a morale-killer if allocated to people they’d actually like to retain for a year or few. Scrum is for the body shops: the ones that expect programmers to suffer when client relationships are mismanaged, and that will take on a lot of low-end, career-incoherent work that no one wants to do.

So what are Scrum and “Agile”? I could get into the different kinds of meetings (“retrospective” and “backlog grooming” and “planning”) or the theory, but the fundamental unifying trait is violent transparency, often one-sided. Programmers are, in many cases, expected to provide humiliating visibility into their time and work, meaning that they must play a side game of appearing productive in addition to their actual job duties. Instead of working on actual, long-term projects that a person could get excited about, they’re relegated to working on atomized, feature-level “user stories” and often disallowed from working on improvements that can’t be related to short-term, immediate business needs (often delivered from on high). Agile eliminates the concept of ownership and treats programmers as interchangeable, commoditized components.

In addition to being infantilizing and repellent, Scrum induces needless anxiety about microfluctuations in one’s own productivity. The violent transparency means that, in theory, each person’s hour-by-hour fluctuations are globally visible– and for no good reason, because there’s absolutely no evidence that any of this snake oil actually makes things get done quicker or better in the long run. For people with anxiety or mood disorders, who generally perform well when measured on average long-term productivity, but who tend to be most sensitive to invasions of privacy, this is outright discriminatory.

Specific flaws of “Agile” and Scrum

1. Business-driven engineering.

“Agile” is often sold in comparison to an equally horrible straw-man approach to software design called “Waterfall”. What Waterfall and Agile share (and a common source of their dysfunction) is that they’re both business-driven development. In Waterfall, projects are defined first by business executives, design is done by middle managers and architects, and then implementation, operations, and testing are carried out by multiple tiers of grunts, with each of these functions happening in a stage that must be completed before the next may begin. Waterfall is notoriously dysfunctional, and no Agile opponent would argue to the contrary. Under Waterfall, engineers are relegated to working on designs and building systems after the important decisions have all been made and cannot be unmade, and no one talented is motivated to take on that kind of project.

Waterfall replicates the social model of a dysfunctional organization with a defined hierarchy. The most interesting work is done first and declared complete, and the grungy details are passed on to the level below. It’s called “Waterfall” because communication goes only one way. If the design is bad, it must be implemented anyway. (The original designers have probably moved to another project.) Agile, then, replicates the social model of a dysfunctional organization without a well-defined hierarchy. It has engineers still quite clearly below everyone else: the “product owners” and “scrum masters” outrank “team members”, who are the lowest of the low. Its effect is to disentitle the more senior, capable engineers by requiring them to adhere to a reporting process (work only on your assigned tickets, spend 5-10 hours per week in status meetings) designed for juniors. Like a failed communist state that equalizes by spreading poverty, Scrum in its purest form puts all of engineering at the same low level: not a clearly spelled-out one, but clearly below all the business people who are given full authority to decide what gets worked on.

Agile increases the feedback frequency while giving engineers no real power. That’s a losing bargain, because it means that they’re more likely to be jerked around or punished when things take longer than they “seem” they should take. These decisions are invariably made by business people who will call shots based on emotion rather than deep insight into the technical challenges or the nature of the development.

Silicon Valley has gotten a lot wrong, especially in the past five years, but one of the things that it got right is the concept of the engineer-driven company. It’s not always the best for engineers to drive the entire company, but when engineers run engineering and set priorities, everyone wins: engineers are happier with the work they’re assigned (or, better yet, self-assigning) and the business is getting a much higher quality of engineering.

2. Terminal juniority

“Agile” is a culture of terminal juniority, lending support to the (extremely misguided) conception of programming as a “young man’s game”, even though most of the best engineers are not young and quite a few are not men. Agile has no exit strategy. There’s no “we won’t have to do this once we achieve [X]” clause. It’s designed to be there forever: the “user stories” and business-driven engineering and endless status meetings will never go away. Architecture and R&D and product development aren’t part of the programmer’s job, because those things don’t fit into atomized “user stories” or two-week sprints. So, the sorts of projects that programmers want to take on, once they master the basics of the field, are often ignored, because it’s either impossible to atomize them or far more difficult to do so than just to do the work.

There’s no role for an actual senior engineer on a Scrum team, and that’s a problem, because many companies that adopt Scrum impose it on the whole organization. Aside from a move into management, there is the option of becoming a “Scrum master” responsible for imposing this stuff on the young’uns: a bullshit pseudo-management role without power. The only way to get off a Scrum team and away from living under toxic micromanagement is to burrow further into the beast and impose the toxic micromanagement on other people. What “Agile” and Scrum say to me is that older, senior programmers are viewed as so inessential that they can be ignored, as if programming is a childish thing to be put away before age 35. I don’t agree with that mentality. In fact, I think it’s harmful; I’m in my early 30s and I feel like I’m just starting to be good at programming. Chasing out our elders, just because they’re seasoned enough to know that this “Agile”/Scrum garbage has nothing to do with computer science and that it has no value, is a horrible idea.

3. It’s stupidly, dangerously short-term. 

Agile is designed for and by consulting firms that are marginal. That is, it’s for firms that don’t have the credibility that would enable them to negotiate with clients as equals, and that are facing tight deadlines while each client project is an existential risk. It’s for “scrappy” underdogs. Now, here’s the problem: Scrum is often deployed in large companies and funded startups, but people join those (leaving financial upside on the table, for the employer to collect) because they don’t want to be underdogs. No one wants to play from behind unless there’s considerable personal upside in doing so. “Agile” in a corporate job means pain and risk without reward.

When each client project represents existential or severe reputational risk, Agile might be the way to go, because a focus on short-term iterations is useful when the company is under threat and there might not be a long term. Aggressive project management makes sense in an emergency. It doesn’t make sense as a permanent arrangement; at least, not for high-talent programmers who have less stressful and more enjoyable options.

Under Agile, technical debt piles up and is not addressed because the business people calling the shots will not see a problem until it’s far too late or, at least, too expensive to fix it. Moreover, individual engineers are rewarded or punished solely based on the completion, or not, of the current two-week “sprint”, meaning that no one looks out five “sprints” ahead. Agile is just one mindless, near-sighted “sprint” after another: no progress, no improvement, just ticket after ticket.

4. It has no regard for career coherency.

Atomized user stories aren’t good for engineers’ careers. By age 30, you’re expected to be able to show that you can work at the whole-project level, and that you’re at least ready to go beyond such a level into infrastructure, architecture, research, or leadership. While Agile/Scrum experience makes it somewhat easier to get junior positions, it eradicates even the possibility of work that’s acceptable for a mid-career or senior engineer.

In an emergency, whether it’s a consultancy striving to appease an important client or a corporate “war room”, career coherency can wait. Few people will refuse to do a couple weeks of unpleasant or career-incoherent work if it’s genuinely important to the company where they work. If nothing else, the importance of that work confers a career benefit. When there’s not an emergency, however, programmers expect their career growth to be taken seriously, and will leave if it isn’t. Call grunt work that no one enjoys, and that has no intrinsic career value to anyone, “fish frying”: there’s enough career value (internal and external to the organization) in emergency or high-profile fish frying that people don’t mind doing it. You can say, “I was in the War Room and had 20 minutes per day with the CEO”, and that excuses the fish frying. It means you were valued and important. Saying, “I was on a Scrum team” says, “Kick me”. Frying fish because you were assigned “user stories” shows that you were seen as a loser.

5. Its purpose is to identify low performers, but it has an unacceptably high false-positive rate.

Scrum is sold as a process for “removing impediments”, which is a nice way of saying “spotting slackers”. The problem with it is that it creates more underperformers than it roots out. It’s a surveillance state that requires individual engineers to provide fine-grained visibility into their work and rate of productivity. This is defended using the “nothing to hide” argument, but the fact is that, even for pillar-of-the-community high performers, a surveillance state is an anxiety state. The fact of being observed changes the way people work– and, in creative fields, for the worse.

The first topic that comes to mind here is status sensitivity. Programmers love to make-believe that they’ve transcended a few million years of primate evolution related to social status, but the fact is: social status matters, and you’re not being “political” by acknowledging that fact. Older people, women, racial minorities, and people with disabilities tend to be status-sensitive because it’s a matter of survival for them. Constant surveillance of one’s work signals a lack of trust and low social status, and the most status-sensitive people (even if they’re the best workers) are the first ones to decline.

Scrum and “Agile” are designed, on the other hand, for the most status-insensitive people: young, privileged males who haven’t been tested, challenged, or burned yet at work. It’s for people who think that HR and management are a waste of time and that people should just “suck it up” when demeaned or insulted.

Often, it’s the best employees who fall the hardest when Agile/Scrum is introduced, because R&D is effectively eliminated, and the obsession with short-term “iterations” or sprints means that there’s no room to try something that might actually fail.

The truth about underperformers is that you don’t need “Agile” to find out who they are. People know who they are. The reason some teams get loaded down with disengaged, incompetent, or toxic people is that no one does anything about them. That’s a people-level management problem, not a workflow-level process problem.

6. The Whisky Goggles Effect

There seems to be some evidence that Agile and Scrum can nudge the marginally incompetent into being marginally employable. I call this the Whisky Goggles Effect: it turns the 3s and 4s into 5s, but it makes you so sloppy that the 7s and 9s want nothing to do with you. Unable to get their creative juices flowing under aggressive micromanagement, the best programmers leave.

From the point of view of a manager unaware of how software works, this might seem like an acceptable trade: a few “prima donna” 7+ engineers leave under the Brave New Scrum, while the 3s and 4s become just-acceptable 5s. The problem is that the difference between a “7” programmer and a “5” programmer is substantially larger than the difference between a “5” and a “3”. If you lose your best people and your leaders (who may not be in leadership roles on the org chart), then the slight upgrade of the incompetents for whom Scrum is designed does no good.

Scrum and Agile play into what I call the Status Profit Bias. Essentially, many people in business judge their success or failure not in objective terms, but based on the status differential achieved. Let’s say that the market value of a “3” level programmer is $50,000 per year and, for a “5” programmer, it’s $80,000. (In reality, programmer salaries are all over the map: I know 3’s making over $200,000 and I know 7’s under $70,000, but let’s ignore that.) Convincing a “5” programmer to take a “3”-level salary (in exchange for startup equity!) is marked, psychologically, not as a mere $30,000 in profit but as a 2-point profit.

Agile/Scrum and the age discrimination culture in general are about getting the most impressive status profits, rather than actual economic profits. The people who are least informed about what social status they “should” have are the young. You’ll find a 22-year-old 6 who thinks that he’s a 3 and who will submit to Scrum, but the 50-year-old 9 is likely to know that she’s a 9 and might begrudgingly take 8.5-level conditions but is not about to drop to a 6. Seeking status profits is, however, extremely short-sighted. There may be a whole industry in bringing in 5-level engineers and treating (and paying) them like 4’s, but under current market conditions, it’s far more profitable to hire an 8 and treat him like an 8.

7. It’s dishonestly sold.

To cover this point, I have to acknowledge that the uber-Agile process known as “Scrum” works under a specific set of circumstances; the dishonest salesmanship is in the implication that this stuff works everywhere, and as a permanent arrangement.

What Scrum is good for

Before the Agile fad, “Scrum” was a term sometimes used for what corporations might also call a “Code Red” or a “War Room emergency”. This is when a cross-cutting team must be built quickly to deal with an unexpected and, often, rapidly-evolving problem. It has no clear manager, but a leader (like a “Scrum Master”) must be elected and given authority and it’s usually best for that person not to be an official “people manager” (since he needs to be as impartial as possible). Since the crisis is short-term, individuals’ career goals can be put on hold. It’s considered a “sprint” because people are expected to work as fast as they can to solve the problem, and because they’ll be allowed to go back to their regular work once it’s over.

There are two scenarios that should come to mind. The first is a corporate “war room”, and if specific individuals (excluding high executives) are spending more than about 6 weeks per year in war-room mode, then something is wrong with the company, because emergencies ought to be rare. The second is that of a consultancy struggling to establish itself, or one that is bad at managing clients and establishing an independent reputation, and must therefore operate in a state of permanent emergency.

Two issues

Scrum and Agile represent acknowledgement of the idea that emergency powers must sometimes be given to “take-charge” leaders who’ll do whatever they consider necessary to get a job done, leaving debate to happen later. A time-honored problem with emergency powers is that they often don’t go away: in many circumstances, those empowered by an emergency see fit to prolong it. This is, most certainly, a problem with management: dysfunction and emergency require more managerial effort than a well-run company in peacetime does.

It is also more impressive for an aspiring demagogue (a “scrum master”?) to be a visible “dragonslayer” than to avoid attracting dragons to the village in the first place. The problem with Scrum’s aggressive insistence on business-driven engineering is that it makes a virtue (“customer collaboration”) out of attracting, and slaying, dragons (known as “requirements”) when it might be more prudent not to lure them out of their caves in the first place.

“Agile” and Scrum glorify emergency. That’s the first problem with them. They’re a reinvention of what the video game industry calls “crunch time”. It’s not sustainable. An aggressive focus on individual performance, a lack of concern for people’s career objectives in work allocation, and a mandate that people work only on the stated top priority, all have value in a short-term emergency but become toxic in the long run. People will tolerate those changes if there’s a clear point ahead when they’ll get their autonomy back.

The second issue is that these practices are sold dishonestly. There’s a whole cottage industry that has grown up around teaching companies to be “Agile” in their software development. The problem is that most of the core ideas aren’t new. The terminology is fresh, but the concepts are mostly outdated, failed “scientific management” (which was far from scientific, and didn’t work).

If the downsides of “Agile” and Scrum were acknowledged along with the upsides, then I wouldn’t have such a strong problem with the concept. If a company has a team of only junior developers and needs to deliver features fast, it should consider using these techniques for a short phase, with a plan to remove them as its people grow and timeframes become more forgiving. However, if Scrum were sold for what it is– a set of emergency measures that can’t be used to permanently improve productivity– then there would be far fewer buyers for it, and the “Agile” consulting industry wouldn’t exist. Only through a dishonest representation of these techniques (glorified “crunch time”, packaged as a permanent fix) are they made salable.

Looking forward

It’s time for most of “Agile” and especially Scrum to die. These aren’t just bad ideas. They’re more dangerous than that, because there’s a generation of software engineers who are absorbing them without knowing any better. There are far too many young programmers being doomed to mediocrity by the idea that business-driven engineering and “user stories” are how things have always been done. This ought to be prevented; the future integrity of our industry may rely on it. “Agile” is a bucket of nonsense that has nothing to do with programming and certainly nothing to do with computer science, and it ought to be tossed back into the muck from which it came.

The tyranny of the friendless

I’ve written a lot about the myriad causes of organizational decay, including a long series on the topic. In most of my work, I’ve written about decay as an inevitable, entropic outcome driven by a number of forces, many unnamed and abstract, and therefore treated it as an inexorable ravage of time.

However, I’ve recently come to the realization that organizational decay is typically dominated by a single factor that is easy to understand, being so core to human sociology. While it’s associated with large companies, it can set in when they’re small. It’s a consequence of in-group exclusivity. Almost all organizations function as oligarchies, some with formal in-crowds (government officials or titled managers) and some without. If this in-crowd develops a conscious desire to exclude others, it will select and promote people who are likely to retain and even guard its boundaries. Only a certain type of person is likely to do this: friendless people. Those who dislike, and are disliked by, the out-crowd are unlikely to let anyone else in. They’re non-sticky: they come with a promise of “You get just me”, and that makes them very attractive candidates for admission into the elite.

Non-stickiness is seen as desirable from above– no one wants to invite the guy who’ll invite his whole entourage– but, in the business world, it’s negatively correlated with pretty much any human attribute that could be considered a virtue. People who are good at their jobs are more likely to be well-liked and engaged and form convivial bonds. People who are socially adept tend to have friends at high levels and low. People who care a lot about social justice are likely to champion the poor and unpopular. A virtuous person is more likely to be connected laterally and from “below”. That shouldn’t count against a person, but for an exclusive club that wants to stay exclusive, it does. What if he brings his friends in, and changes the nature of the group? What if his conscience compels him to spill in-group secrets? For this reason, the non-sticky and unattached are better candidates for admission.

The value that executive suites place on non-stickiness is one of many possible explanations for managerial mediocrity as it creeps into an organization. Before addressing why I think my theory is right, I need to analyze three of the others, all styled as “The $NAME Principle”.

The “Peter Principle” is the claim that people are promoted up to their first level of incompetence, and stay there. It’s an attractive notion, insofar as most people have seen it in action. There are terminal middle managers who don’t seem like they’ll ever gain another step, but who play politics just well enough not to get fired. (It sucks to be beneath one. He’ll sacrifice you to protect his position.) That said, I find the Peter Principle, in general, to be mostly false because of its implicit belief in corporate meritocracy. What is most incorrect about it is the belief that upper-level jobs are harder or more demanding than those in the middle. In fact, there’s an effort thermocline in almost every organization. Above the effort thermocline, which is usually the de facto delineation between mere management roles and executive positions, jobs get easier and less accountable with increasing rank. If the one-tick-late-but-like-clockwork Peter Principle were the sole limiting factor on advancement, you’d expect that those who pass the thermocline would all become CEOs, and that’s clearly not the case. While merit and hard work are required less and less with increasing rank, political resistance intensifies, simply because there are so few of the top jobs that there’s bound to be competition. Additionally, even below the effort thermocline, there are people employed below their maximum level of competence because of political resistance. The Peter Principle is too vested in the idea of corporate meritocracy to be accurate.

Scott Adams has proposed an alternative theory of low-merit promotion: the Dilbert Principle. According to it, managers are often incompetent line workers who were promoted “out of harm’s way”. I won’t deny that this happens in some organizations, although it usually isn’t applied within critical divisions of the company. When incompetents are knowingly promoted, it’s usually a dead-end pseudo-promotion that comes with a small pay raise and a title bump, but lateral movement into unimportant work. That said, its purpose isn’t just to limit damage, but to make the person more likely to leave. If someone’s not bad enough to fire but not especially good, gilding his CV with a fancy title might invite him to (euphemism?) succeed elsewhere… or, perhaps, not-succeed elsewhere but be someone else’s problem. All of that said, this kind of move is pretty rare. Incompetent people who are politically successful are not known to be incompetent, because the politics of performance outweighs actual performance ten-to-one in making reputations. Those with a reputation for incompetence are the ones who failed politically, and they don’t get exit promotions. They just get fired.

The general idea that people are made managers to limit their damage potential is false because the decision to issue such promotions is one that would, by necessity, be made by other managers. As a tribe, managers have far too much pride to ever think the thought, “he’s incompetent, we must make him one of us”. Dilbert-style promotions occasionally occur and incompetents definitely get promoted, but the intentional promotion of incompetents into important roles is extremely rare.

Finally, there’s the Gervais Principle, developed by Venkatesh Rao, which asserts that organizations respond to both performance and talent, but sometimes in surprising ways. Low-talent high-performers (“eager beavers” or “Clueless”) get middle-management roles where they carry the banner for their superiors, and high-talent low-performers (“Sociopaths”) either get groomed for upper management or get fired. High-talent high-performers aren’t really addressed by the theory, and there’s a sound reason why. In this case, the talent that matters most is strategy: not necessarily working hard, but knowing what is worth working on. High-talent people will, therefore, work very hard when given tasks appropriate to their career goals and desired trajectory in the company, but their default mode will be to slack on the unimportant make-work. So a high-talent person who is not being tapped for leadership will almost certainly be a low performer: at least, on the assigned make-work that is given to those not on a career fast track.

The Gervais/MacLeod model gives the most complete assessment of organizational functioning, but it’s not without its faults. Intended as satire, the MacLeod cartoon gave unflattering names to each tier (“Losers” at the bottom, “Clueless” in middle-management, and “Sociopaths” at the top). It also seems to be a static assertion, while the dynamic behaviors are far more interesting. How do “Sociopaths” get to the top, since they obviously don’t start there? When “Clueless” become clued-in, where do they go? What do each of these people really want? For how long do “Losers” tolerate losing? (Are they even losing?) Oh, and– most importantly for those of us who are studying to become more like the MacLeod Sociopaths (who aren’t actually sociopathic per se, but risk-tolerant, motivated, and insubordinate)– what determines which ones are groomed for leadership and which ones are fired?

If there’s an issue with the Gervais Principle, it’s that it asserts too much intelligence and intent within an organization. No executive ever says, “that kid looks like a Sociopath; let’s train him to be one of us.” The Gervais model describes the stable state of an organization in advanced decline, but doesn’t (in my opinion) give full insight into why things happen in the way that they do.

So I’m going to offer a fourth model of creeping managerial mediocrity. Unlike the Peter Principle, it doesn’t believe in corporate meritocracy. Unlike the Dilbert Principle, it doesn’t assert that managers are stupid and unimportant (because we know both to be untrue) or consider their jobs to be such. Unlike the Gervais Principle, it doesn’t believe that organizations knowingly select for cluelessness or sociopathy (although that is sometimes the case).

  • The Lumbergh Principle: an exclusive sub-organization, such as an executive suite, that wishes to remain exclusive will select for non-stickiness, which is negatively correlated with most desirable personal traits. Over time, this will degrade the quality of people in the leadership ranks, and destroy the organization.

If it’s not clear, I named this one after Bill Lumbergh from Office Space. He’s uninspiring, devoid of charisma, and seems to hold the role of middle manager for an obvious reason: there is no chance that he would ever favor his subordinates’ interests over those of upper management. He’s friendless, and non-sticky by default. He wouldn’t, say, tell an underling that he’s undervalued and should ask for a 20% raise, or give advance notice of a project cancellation or layoff so a favored subordinate can get away in time. He’ll keep his boss’s secrets because he just doesn’t give a shit about the people who are harmed by his doing so. No one likes him and he likes no one, and that’s why upper management trusts him.

Being non-sticky and being incompetent aren’t always the same thing, but they tend to correlate often enough to represent a common case (if not the most common case) of an incompetent’s promotion. Many people who are non-sticky are that way because they’re disliked and alienating to other people, and while there are many reasons why that might be, incompetence is a common one. Good software engineers are respected by their peers and tend to make friends at the bottom. Bad software engineers who play politics and manage up will be unencumbered by friends at the bottom.

To be fair, the desire to keep the management ranks exclusive is not the only reason why non-stickiness is valued. A more socially acceptable reason for it is that non-sticky people are more likely to be “fair” in an on-paper sense of the word. They don’t give a damn about their subordinates, their colleagues, and possible future subordinates, but they don’t-give-a-damn equally. Because of this, they support the organization’s sense of itself as a meritocracy. Non-sticky people are also, in addition to being “fair” in a toxic way that ultimately serves the needs of upper management only, quite consistent. As corporations would rather be consistent than correct– firing the wrong people (i.e. firing competent people) is unfortunate, but firing inconsistently opens the firm to a lawsuit– they are attractive for this reason as well. You can always trust the non-sticky person, even if he’s disliked by his colleagues for a good reason, to favor the executives’ interests above the workers’. The fact that most non-sticky people absolutely suck as human beings is held to be irrelevant.

Exceptions

As I get older and more experienced, I’m increasingly aware that there’s a lot of diversity in how organizations run themselves. We’re not condemned to play out the roles of “Loser”, “Clueless”, or “Sociopath”. So it’s worth acknowledging that there are a lot of cases in which the Lumbergh Principle doesn’t apply. Some organizations try to pick competent leaders, and it’s not inevitable that an organization develops such contempt for its own workers as to define the middle-management job in such a stark way. Also, the negativity that is often directed at middle management fails to account for the fact that upper management almost always has to pass through that tier in some way or another. Middle management gets its stigma because of the terminal middle managers with no leadership skills: the ones promoted into those roles because their superiors could trust them, defective though they were. However, there are many other reasons why people pass through middle-management roles, or take them on because they believe that the organization needs them to do so.

The Lumbergh Principle only takes hold in a certain kind of organization. That’s the good news. The bad news is that most organizations are that type. It has to do with internal scarcity. At some point, organizations decide that there’s a finite amount of “goodness”, whether we’re talking about autonomy or trust or credibility, and this leaves people to compete for artificially limited benefits. Employee stack ranking is a perfect example of this: for one person to be a high performer, another must be marked low. When a scarcity mentality sets in, R&D is slashed and business units are expected to compete for internal clients in order to justify themselves, which means that these “intrapreneurial” efforts face the worst of both worlds between being a startup and being in a large organization. It invariably gets ugly, and a zero-sum mentality takes hold. At this point, the goal of the executive suite becomes maintaining position rather than growing the company, and invitations into the inner circle (and the concentric circles that comprise the various tiers of management) are given only to replace outgoing personnel, with a high degree of preference for those who can be trusted not to let conscience get in the way of the executives’ interests.

One might expect that startups would be a way out. Is that so? The answer is: sometimes. It is generally better, under this perspective, for an organization to be growing than stagnant. It’s probably better, in many cases, to be small. At five people, it’s far more typical to see the “live or die as a group” mentality than the internal adversity that characterizes large organizations. All of that said, there are quite a number of startups that already operate under a scarcity mentality, even from inception. The VCs want it that way, so they demand extreme growth and risk-seeking on (relative to the ambitions they require) a shoestring budget and call it “scrappy” or “lean”. The executives, in turn, develop a culture of stinginess wherein small expenses are debated endlessly. Then the middle managers bring in that “Agile”/Scrum bukkake in which programmers have to justify weeks and days of their own fucking working time in the context of sprints and iterations and glass toys. One doesn’t need a big, established company to develop the toxic scarcity mentality that leads to the Lumbergh Effect. It can start at a company’s inception– something I’ve seen on multiple occasions. In that case, the Lumbergh Effect exists because the founders and executives have a general distrust for their own people. That said, the tendency of organizations (whether democratic or monarchic on paper) toward oligarchy means that they need to trust someone. Monarchs need lieutenants, and lords need vassals. The people who present themselves as natural candidates for promotion are the non-sticky ones who’ll toss aside any allegiances to the bottom. However, those people are usually non-sticky because they’re disliked, and they’re usually disliked because they’re unlikeable and incompetent. It’s through that dynamic– not intent– that most companies end up being middle-managed (and, after a few years, upper-managed) by incompetents.

Advanced Lumberghism

What makes the Lumbergh Principle so deadly is its finality. The Peter Principle, were it true, would admit an easy solution: just fire the people who’ve plateaued. (Some companies do exactly that, but it creates a toxic culture of stack-ranking and de facto age discrimination.) The Dilbert Principle has a similar solution: if you are going to “promote” someone into a dead end, as a step in managing that person out, make sure to follow through. As for the Gervais Principle, it describes an organization that is already in an advanced state of dysfunction (though it is so useful precisely because most organizations are in such states) and, while it captures the static dynamics (i.e. the microstate transitions and behaviors within a certain high-entropy, degenerate macrostate), it does not necessarily tell us why decay is the norm for human organizations. I think that the Lumbergh Effect, however, does give us a cohesive sense of it. It won’t do to say simply that “the elite” is the problem, because while elites are generally disliked, they’re not always harmful. The Lumbergh Effect sets in when the elite’s desire to protect its boundaries results in the elevation of a middling class of non-virtuous people, and as such people become the next elite (through attrition in the existing one) the organization falls to pieces. We now know, at least in a large proportion of cases, the impulses and mechanics that bring an organization to ruin.

Within organizations, there’s always an elite. Governments have high officials and corporations have executives. We’d like for that elite to be selected based on merit, but even people of merit dislike personal risk and will try to protect their positions. Over time, elites form their own substructures, and one of those is an outer “shell”. The lowest-ranking people inside that elite, and the highest-ranking people outside of it who are auditioning to get in, take on guard duty and form the barrier. Politically speaking, the people who live at that shell (not the cozy people inside or the disengaged outsiders who know they have no chance of entering) will be the hardest-working (again, an effort thermocline) at defining and guarding the group’s boundaries. Elites, therefore, don’t recruit for their “visionary” inner ranks or their middling directorate, because you have to serve at the shell before you have a chance of getting further in. Rather, they recruit guards: non-sticky people who’ll keep the group’s barriers (and its hold over the resources, information, and social access that it controls) impregnable to outsiders. The best guards, of course, are those who are loyal upward because they have no affection in the lateral or downward directions. And, as discussed, such people tend to be that way because no one likes them where they are. That this leads organizations to the systematic promotion of the worst kinds of people should surprise no one.

Can tech fix its broken culture?

I’m going to spoil the ending. The answer is: yes, I think so. Before I tackle that matter, though, I want to address the blog post that led me to write on this topic. It’s Tim Chevalier’s “Farewell to All That” essay about his departure from technology. He seems to have lost faith in the industry, and is taking a break from it. It’s worth reading in its entirety. Please do so, before continuing with my (more optimistic) analysis.

I’m going to analyze specific passages from Chevalier’s essay. It’s useful to describe exactly what sort of “broken culture” we’re dealing with, in order to replace a vague “I don’t like this” with a list of concrete grievances, identifiable sources and, possibly, implementable solutions.

First, he writes:

I have no love left for my job or career, although I do have it for many of my friends and colleagues in software. And that’s because I don’t see how my work helps people I care about or even people on whom I don’t wish any specific harm. Moreover, what I have to put up with in order to do my work is in danger of keeping me in a state of emotional and moral stagnation forever.

This is a common malaise in technology. By the time we’re 30, we’ve spent the better part of three decades building up potential and have refined what is supposed to be the most important skill of the 21st century. We’d like to work on clean energy or the cure for cancer or, at least, creating products that change and save lives (like smart phones, which surely have). Instead, most of us work on… helping businessmen unemploy people. Or targeting ads. Or building crappy, thoughtless games for bored office workers. That’s really what most of us do. It’s not inspiring.

Technologists are fundamentally progressive people. We build things because we want the world to be better tomorrow than it is today. We write software to solve problems forever. Yet most of what our employers actually make us do is not congruent with the progressive inclination that got us interested in technology in the first place. Non-technologists cannot adequately manage technologists because technologists value progress, while non-technologists tend to value subordination and stability.

Open source is the common emotional escape hatch for unfulfilled programmers, but a double-edged sword. In theory, open-source software advances the state of the world. In practice, it’s less clear cut. Are we making programmers (and, therefore, the world) more productive, or are we driving down the price of software and consigning a generation to work on shitty, custom, glue-code projects? This is something that I worry about, and I don’t have the answer. I would almost certainly say that open-source software is very much good for the world, were it not for the fact that programmers do need to make money, and giving our best stuff away for free just might be hurting the price for our labor. I’m not sure. As far as I can tell, it’s impossible to measure that counterfactual scenario.

If there’s a general observation that I’d make about software programmers, and technologists in general, it’s that we’re irresponsibly adding value. We create so much value that it’s ridiculous, and so much that, by rights, we ought to be calling the shots. Yet we find value-capture to be undignified and let the investors and businessmen handle that bit of the work. So they end up with the authority and walk away with the lion’s share; we’re happy if we make a semi-good living. The problem is that value (or money) becomes power, and the bulk of the value we generate accrues not to people who share our progressive values, but to next-quarter thinkers who end up making the world uglier. We ought to fix this. By preferring ignorance of how the value we generate is distributed and employed, we’re complicit in widespread unemployment, mounting economic and political inequality, and the general moral problem of the wrong people winning.

I don’t spend much time solving abstract puzzles, at least not in comparison to the amount of time I spend doing unpaid emotional labor.

Personally, I care more about solving real-world problems and making people’s lives better than I do about “abstract puzzles”. It’s fun to learn about category theory, but what makes Haskell exciting is that its core ideas actually work at making quickly developed code robust beyond what is possible (within the same timeframe; JPL-style C is a different beast) in other languages. I don’t find much use in abstract puzzles for their own sake. That said, the complaint about “unpaid emotional labor” resonates with me, though I might use the term “uncompensated emotional load”. If you work in an open-plan office, you’re easily losing 10-15 hours of your supposedly free time just recovering from the pointless stress inflicted by a bad work environment. I wouldn’t call it emotional “labor”, though. Labor implies conscious awareness. Recovering from emotional load is draining, but it’s not a conscious activity.

But the tech industry is wired with structural incentives to stay broken. Broken people work 80-hour weeks because we think we’ll get approval and validation for our technical abilities that way. Broken people burn out trying to prove ourselves as hackers because we don’t believe anyone will ever love us for who we are rather than our merit.

He has some strong points here: the venture-funded tech industry is designed to give a halfway-house environment for emotionally stunted (I wouldn’t use the word “broken”, because immaturity is very much fixable) fresh college grads. That said, he’s losing me on any expectation of “love” at the workplace. I don’t want to be “loved” by my colleagues. I want to be respected. And respect has to be earned (ideally, based on merit). If he wants unconditional love, he’s not going to find that in any job under the sun; he should get a dog, or a cat. That particular one isn’t the tech industry’s fault.

Broken people believe pretty lies like “meritocracy” and “show me the code” because it’s easier than confronting difficult truths; it’s as easy as it is because the tech industry is structured around denial.

Meritocracy is a useless word and I think that it’s time for it to die, because even the most corrupt work cultures are going to present themselves as meritocracies. The claim of meritocracy is disgustingly self-serving for the people at the top.

“Show me the code” (or data) can be irksome, because there are insights for which coming up with data is next to impossible, but that any experienced person would share. That said, data- (or code-)driven decision making is better than running on hunches, or based on whoever has the most political clout. What I can’t stand is when I have to provide proof but someone else doesn’t. Or when someone decides that every opinion other than his is “being political” while his is self-evident truth. Or when someone in authority demands more data or code before making a ruling, then goes on to punish you for getting less done on your assigned work (because he really doesn’t want you to prove him wrong). Now those are some shitty behaviors.

I generally agree that not all disputes can be resolved with code or data, because some cases require a human touch and experience; that said, there are many decisions that should be handled in exactly that way: quantitatively. What irks me is not a principled insistence on data-driven decisions, but when people with power acquire the right to make everyone else provide data (which may be impossible to come by) while remaining unaccountable, themselves, to do the same. And many of the macho jerks who overplay the “show me the code” card (because they’ve acquired undeserved political power), when code or data are too costly to acquire, are doing just that.

A culture that considers “too sensitive” an insult is a culture that eats its young. Similarly, it’s popular in tech to decry “drama” when no one is ever sure what the consensus is on this word’s meaning, but as far as I can tell it means other people expressing feelings that you would prefer they stay silent about.

I dislike this behavior pattern. I wouldn’t use the word “drama” so much as “political”. Politically powerful bad actors are remarkably good at creating a consensus that their political behaviors are apolitical and “meritocratic”, whereas people who disagree with or oppose them are “playing politics” and “stirring up drama”. False objectivity is more dangerous than admitted subjectivity. The first suits liars; the second suits people who have the courage to admit that they are fallible and human.

Personally, I tend to disclose my biases. I can be very political. While I don’t value emotional drama for its own sake, I dislike those who discount emotion. Emotions are important. We all have them, and they carry information. It’s up to us to decide what to do with that information, and how far we should listen to emotions, because they’re not always wise in what they tell us to do. There is, however, nothing wrong with having strong emotions. It’s when people are impulsive, arrogant, and narcissistic enough to let their emotions trample on other people that there is a problem.

Consequently, attempting to shut one’s opponent down by accusing him of being “emotional” is a tactic I’d call dirty, and it should be banned. We’re humans. We have emotions. We also have the ability (most of the time) to put them in their place.

“Suck it up and deal” is an assertion of dominance that disregards the emotional labor needed to tolerate oppression. It’s also a reflection of the culture of narcissism in tech that values grandstanding and credit-taking over listening and empathizing.

This is very true. “Suck it up and deal” is also dishonest in the same way that false objectivity and meritocracy are. The person saying it is implicitly suggesting that she suffered similar travails in the past. At the same time, it’s a brush-off that indicates that the other person is of too low status for it to be worthwhile to assess why the person is complaining. It says, “I’ve had worse” followed by “well, I don’t actually know that, because you’re too low on the food chain for me to actually care what you’re going through.” It may still be abrasive to say “I don’t care”, but at least it’s honest.

Oddly enough, most people who have truly suffered fight hard to prevent others from having similar experiences. I’ve dealt with a lot of shit coming up in the tech world, and the last thing I would do is inflict it on someone else, because I know just how discouraging this game can be.

if you had a good early life, you wouldn’t be in tech in the first place.

I don’t buy this one. Some people are passionate about software quality, or about human issues that can be solved by technology. Not everyone who’s in this game is broken.

There certainly are a lot of damaged people working in private-sector tech, and the culture of the VC-funded world attracts broken people. What’s being said here is probably 80 or 90 percent true, but there are a lot of people in technology (especially outside of the VC-funded private sector tech that’s getting all the attention right now) who don’t seem more ill-adjusted than anyone else.

I do think that the Damaso Effect requires mention. On the business side of tech (which we report into) there are a lot of people who really don’t want to be there. Venture capital is a sub-sector of private equity and considered disreputable within that crowd: it’s a sideshow to them. Their mentality is that winners work on billion-dollar private equity deals in New York and losers go to California and boss nerds around. And for a Harvard MBA to end up as a tech executive (not even an investor!) is downright embarrassing. So that Columbia MBA who’s a VP of HR at an 80-person ed-tech startup is not exactly going to be attending reunions. This explains the malaise that programmers often face as they get older: we rise through the ranks and see that, if not immediately, we eventually report up into a set of people who really don’t want to be here. They view being in tech as a mark of failure, like being relegated to a colonial outpost. They were supposed to be MDs at Goldman Sachs, not pitching business plans to clueless VCs and trying to run a one-product company on a shoestring (relative to the level of risk and ambition that it takes to keep investors interested) budget.

That said, there are plenty of programmers who do want to be here. They’re usually older and quite capable and they don’t want to be investors or executives, though they often could get invited to those ranks if they wished. They just love solving hard problems. I’ve met such people; many, in fact. This is a fundamental reason why the technology industry ought to be run by technologists and not businessmen. The management failed into it and would jump back into MBA-Land Proper if the option were extended, and they’re here because they’re the second or third tier that got stuck in tech; but the programmers in tech actually, in many cases, like being here and value what technology can do.

Failure to listen, failure to document, and failure to mentor. Toxic individualism — the attitude that a person is solely responsible for their own success, and if they find your code hard to understand, it’s their fault — is tightly woven through the fabric of tech.

This is spot-on, and it’s a terrible fact. It holds the industry back. We have a strong belief in progress when it comes to improving tools, adding features to a code base, and acquiring more data. Yet the human behaviors that enable progress, we tend to undervalue.

But in tech, the failures are self-reinforcing because failure often has no material consequences (especially in venture-capital-funded startups) and because the status quo is so profitable — for the people already on the inside — that the desire to maintain it exceeds the desire to work better together.

This is an interesting observation, and quite true. The upside goes mostly to the well-connected. Most of the Sand Hill Road game is about taking strategies (e.g. insider trading, market manipulation) that would be illegal on public markets and applying them to microcap private equities over which there are fewer rules. The downside is borne by the programmers, who suffer extreme costs of living and a culture of age discrimination on a promise of riches that will usually never come. As of now, the Valley has been booming for so long that many people have forgotten that crashes and actual career-rupturing failures even exist. In the future… who knows?

As for venture capital, it delivers private prosperity, but its returns to passive investors (e.g. the ones whose money is being invested, as opposed to the VCs collecting management fees) are dreadful. This industry is not succeeding, except according to the needs of the well-connected few. What’s happening is not “so profitable” at all. It’s not actually very successful. It’s just well-marketed, and “sexy”, to people under 30 who haven’t figured out what they want to do with their lives.

I remember being pleasantly amazed at hearing that kind of communication from anybody in a corporate conference room, although it was a bit less nice when the CTO literally replied with, “I don’t care about hurt feelings. This is a startup.”

That one landed. I have seen so many startup executives and founders justify bad behavior with “this is a startup” or “we’re running lean”. It’s disgusting. It’s the False Poverty Effect: people who consider themselves poor based on peer comparison will tend to believe themselves entitled to behave badly or harm others, because they feel it’s necessary in order to catch up, or that their behavior doesn’t matter because they’re powerless compared to where they should be. It usually comes with a bit of self-righteousness, as well: “I’m suffering (by only taking a $250k salary) for my belief in this company.” The false-poverty behavior is common in startup executives, because (as I already discussed) they’d much rather be elsewhere– executives in much larger companies, or in private equity.

I am neither proud of nor sorry for any of these lapses, because ultimately it’s capitalism’s responsibility to make me produce for it, and within the scope of my career, capitalism failed. I don’t pity the ownership of any of my former employers for not having been able to squeeze more value out of me, because that’s on them.

I have nothing to say other than that I loved this. Ultimately, corporate capitalism fails to be properly capitalistic because of its command-economy emphasis on subordination. When people are treated as subordinates, they slack and fade. This hurts the capitalist more than anyone else.

Answering the question

I provided commentary on Tim Chevalier’s post because not only is he taking on the tech industry, but he’s giving proof to his objection by leaving it. Tech has a broken culture, but it’s not enough to issue vague complaints as many do. It’s not just about sexism and classism and Agile and Java Shop politics in isolation. It’s about all of that shit, taken together. It’s about the fact that we have a shitty hand-me-down culture from those who failed out of the business mainstream (“MBA Culture”) and ended up acquiring its worst traits (e.g. sexism, ageism, anti-intellectualism). It’s about the fact that we have this incredible skill in being able to program, and yet 99 percent of our work is reduced to a total fucking joke because the wrong people are in charge. If we care about the future at all, we have to fight this.

Fixing one problem in isolation, I’ll note, will do no good. This is why I can’t stand that “lean in” nonsense that is sold to unimaginative women who want some corporate executive to solve their problems. You cannot defeat the systemic problems that disproportionately harm women, and maintain the status quo at the same time. You can’t take an unfair, abusive system designed to concentrate power and “fix” it so that it is more fair in one specific way, but otherwise operates under the same rules. You can’t have a world where it is career suicide to take a year off of work for any reason except to have a baby. If you maintain that cosmetic obsession with recency, you will hurt women who wish to have children. You have to pick: either accept the sexism and ageism and anti-intellectualism and the crushing mediocrity of what is produced… or overthrow the status quo and change a bunch of things at the same time. I know which one I would vote for.

Technology is special in two ways, and both of these are good news, at least insofar as they bear on what is possible if we get our act together. The first is that it’s flamingly obvious that the wrong people are calling the shots. Look at many of the established tech giants. In spite of having some of the best software engineers in the world, many of these places use stack ranking. Why? They have an attitude that software engineering is “smart people work” and that everything else– product management, people management, HR– is “stupid people work” and this becomes a self-fulfilling prophecy. You get some of the best C++ engineers in the world, but you get stupid shit like stack ranking and “OKRs” and “the 18-month rule” from your management.

It would be a worse situation to have important shots called by idiots and not have sufficient talent within our ranks to replace them. But we do have it. We can push them aside, and take back our industry, if we learn how to work together rather than against each other.

The second thing to observe about technology is that it’s so powerful as to admit a high degree of mismanagement. If we were a low-margin business, Scrum would kill rather than merely retard companies. Put simply, successful applications of technology generate more wealth than anyone knows what to do with. This could be disbursed to employees, but that’s rare: for most people in startups, their equity slices are a sad joke. Some of it will be remitted to investors and to management. A great deal of that surplus, however, is spent on management slack: tolerating mismanagement at levels that would be untenable in an industry with a lower margin. For example, stack-ranking fell out of favor after it caused the calamitous meltdown of Enron, and “Agile”/”Scrum” is a resurrection of Taylorist pseudo-science that was debunked decades ago. Management approaches that don’t work, as their proponents desperately scramble for a place to park them, end up in tech. This leaves our industry, as a whole, running below quarter speed and still profitable. Just fucking imagine how much there would be to go around, if the right people were calling the shots.

In light of the untapped economic potential that would accrue to the world if the tech industry were better run, and had a better culture, it seems obvious that technology can fix the culture. That said, it won’t be easy. We’ve been under colonial rule (by the business mainstream) for a long time. Fixing this game, and eradicating the bad behaviors that we’ve inherited from our colonizing culture (which is more sexist, anti-progressive, anti-intellectual, classist and ageist than any of our natural tendencies) will not happen overnight. We’ve let ourselves be defined, from above, as arrogant and socially inept and narcissistic, and therefore incapable of running our own affairs. That, however, doesn’t reflect what we really are, nor what we can be.

The Sturgeon Filter: the cognitive mismatch between technologists and executives

There’s a rather negative saying, originally applied to science fiction, known as Sturgeon’s Law: “ninety percent of everything is crap”. Quantified so generally, I don’t think that it’s true or valuable. There are plenty of places where reliability can be achieved and things “just work”. If ninety percent of computers malfunctioned, the manufacturer would be out of business, so I don’t intend to apply the statement to everything. Still, there’s enough truth in the saying that people keep using it, even applying it far beyond what Theodore Sturgeon actually intended. How far is it true? And what does it mean for us in our working lives?

Let’s agree to take “ninety percent” to be a colloquial representation of “most, and it’s not close”; typically, between 75 and 99 percent. What about “is crap”? Is it fair to say that most creative works are crap? I wouldn’t even know where to begin on that one. Certainly, I only deign to publish about a quarter of the blog posts that I write, and I think that that’s a typical ratio for a writer, because I know far too well how often an appealing idea fails when taken into the real world. I think that most of the blog posts that I actually release are good, but a fair chunk of my writing is crap that, so long as I’m good at self-criticism, will never see daylight.

I can quote a Sturgeon-like principle with more confidence, in such a way that preserves its essence but is hard to debate: the vast majority (90 percent? more?) of mutations are of negative value and, if implemented, will be harmful. This concept of “mutation” covers new creative work as well as maintenance and refinement. To refine something is to mutate it, while new creation is still a mutation of the world in which it lives. And I think that my observation is true: a few mutations are great, but most are harmful or, at least, add complexity and disorder (entropy). In any novel or essay, changing a word at random will probably degrade its quality. Most “house variants” of popular games are not as playable as the original game, or are not justified by the increased complexity load. To mutate is, usually, to inflict damage. Two things save us and allow progress. One is that the beneficial mutations often pay for the failures, allowing macroscopic (if uneven) progress. The second is that we can often audit mutations and reverse a good number of those that turn out to be bad. Version control, for programmers, enables us to roll back mutations that are proven to be undesirable.
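To make the auditing point concrete, here’s a minimal sketch in Haskell (using the QuickCheck library; the function and property names are mine, invented purely for illustration). A simple property lets the correct version of a function through and flags a one-token mutation:

```haskell
-- A minimal sketch of auditing mutations, assuming GHC and QuickCheck.
-- All names here are illustrative, not from any real codebase.
import Test.QuickCheck

-- The original, correct function.
double :: Int -> Int
double x = x * 2

-- A one-token "mutation": * became +. It looks harmless and changes behavior.
doubleMutant :: Int -> Int
doubleMutant x = x + 2

-- The audit: doubling and then halving should return the input unchanged.
prop_original, prop_mutant :: Int -> Bool
prop_original x = double x `div` 2 == x
prop_mutant   x = doubleMutant x `div` 2 == x

main :: IO ()
main = do
  quickCheck prop_original -- passes on 100 random inputs
  quickCheck prop_mutant   -- fails almost immediately; the mutation is caught
```

The point isn’t the toy function; it’s that the audit is mechanical. A mutation that would otherwise slip into the world gets flagged, and version control makes reversing it cheap.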

The Sturgeon Mismatch

Programmers experience the negative effects of random mutations all the time. We call them “bugs”, and they range from mild embarrassments to catastrophic failures, but very rarely is a discrepancy between what the programmer expects of the program, and what it actually does, desirable. Of course, intended mutations have a better success rate than truly “random” ones would, but even in those, there is a level of ambition at which the likelihood of degradation is high. I know very little about the Linux kernel and if I tried to hack it, my first commits would probably be rejected, and that’s a good thing. It’s only the ability to self-audit that allows the individual programmer, on average, to improve the world while mutating it. It can also help to have unit tests and, if available for the language, a compiler and a strong type system; those are a way to automate at least some of this self-censoring.
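As a tiny illustration of that automated self-censoring, here’s a sketch (again in Haskell, with invented names) of a strong type system rejecting the classic “stupid” mistake of swapping two same-shaped arguments:

```haskell
-- A sketch of the compiler automating the self-censoring described above.
-- The newtype wrappers give two same-shaped integers incompatible types.
newtype UserId  = UserId Int
newtype OrderId = OrderId Int

describeOrder :: UserId -> OrderId -> String
describeOrder (UserId u) (OrderId o) =
  "user " ++ show u ++ " placed order " ++ show o

main :: IO ()
main = do
  let uid = UserId 42
      oid = OrderId 7
  -- Swapping the arguments is exactly the kind of tiny, entropy-injecting
  -- mutation a tired human makes. Uncommented, the next line won't compile:
  -- putStrLn (describeOrder oid uid)
  putStrLn (describeOrder uid oid) -- compiles, and does what was meant
```

The bad mutation never reaches a running program; the filter lives in the toolchain rather than in anyone’s willpower.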

I’m a reasonably experienced programmer at this point, and I’m a good one, and I still generate phenomenally stupid bugs. Who doesn’t? Almost all bugs are stupid– tiny, random injections of entropy emerging from human error– which is why the claim (for example) that “static typing only catches ‘stupid’ bugs” is infuriating. What makes me a good programmer is that I know what tools and processes to use in order to catch them, and this allows me to take on ambitious projects with a high degree of confidence in the code I’ll be able to write. I still generate bugs and, occasionally, I’ll even come up with a bad idea. I’m also very good at catching myself and fixing mistakes quickly. I’m going to call this selective self-censoring that prevents 90 percent of one’s output from being crap the Sturgeon Filter.

With a strong Sturgeon Filter, you can export the good mutations and bury the bad ones. This is how reliability (either in an artistic masterpiece, or in a correct, efficient program) can be achieved by unreliable creatures such as humans. I’d further argue that to be a competent programmer requires a strong Sturgeon Filter. The good news is that this filter is built up fairly quickly by tools that give objective feedback: compilers and computers that follow instructions literally, and malfunction at the slightest mistake. As programmers, we’re used to having our subordinates (compilers) tell us, “Fix your shit or I’m not doing anything.”

It’s no secret that most programmers dislike management, and have a generally negative view of the executives and “product managers” running most of the companies that employ them. This is because programmers pride themselves on having almost impermeable Sturgeon Filters, while lifelong managers have nonexistent Sturgeon Filters. They simply don’t get the direct, immediate feedback that would train them to recognize and reject their own bad ideas. That’s not because they’re stupider than we are. I don’t actually think that they are. I think that their jobs never build up the sense of fallibility that programmers know well.

Our subordinates, when given nonsensical instructions, give blunt, tactless feedback– and half the time they’re just pointing out spelling errors that any human would ignore! Managers’ subordinates, however, are constantly telling them that they’re awesome, and will often silently clean up their mistakes. Carry this difference in experience out over 20 years or more, and you get different cultures and different attitudes. You get 45-year-old programmers who, while extraordinarily skillful, are often deeply convinced of their own fallibility; and you get 45-year-old executives who’ve never really failed or suffered at work, because even when they were bad at their jobs, they had armies of people ready to manage their images and ensure that, even in the worst-case scenario where they lost their jobs, they’d “fail up” into a senior position at another company.

Both sides now

Programmers and managers both mutate things; it’s the job. Programmers extend and alter the functionality of machines, while managers change the way people work. In both cases, the effects of a random mutation, or even an unwise intended one, are negative. Mutation for its own sake is undesirable.

For example, scheduling a meeting without a purpose is going to waste time and hurt morale. Hiring bad people and firing good ones will have massive repercussions. To manage at random (i.e. without a Sturgeon Filter) is almost as bad as to program at random. Only a small percentage of the changes that managers propose to the way people work are actually beneficial. Most status pings and meetings serve no purpose except to allay the manager’s creeping sense that he isn’t “doing enough”, most processes that exist for executive benefit or “visibility” are harmful, and a good 90 to 99 percent of the time, the people doing the work have better ideas about how they should do it than the executives shouting orders. Managers, in most companies, interrupt and meddle on a daily basis, and it’s usually to the detriment of the work being produced. Jason Fried covers this in his talk, “Why work doesn’t happen at work”. As he says, “the real problems are … the M&Ms: the managers and the meetings”. Managers are often the last people to recognize the virtue of laziness: that constantly working (i.e. telling people what to do) is a sign of distress, while having little to do generally means that they’re doing their jobs well.

In the past, there was a Sturgeon Filter imposed by time and benign noncompliance. Managers gave bad orders just as often as they do now, but there was a garbage-collection mechanism in place. People followed the important orders, which were usually already congruent with common sense and basic safety, but when they were given useless orders or pointless rules to follow, they’d make a show of following the new rules for a month or two, then discard them when they failed to show any benefit. Many managers, I would imagine, preferred this, because it allowed them to have the failed change silently rejected without having any attention drawn to their mistake. In fact, a common mode of sub-strike resistance used by organized labor is “the rule-follow”, a variety of slowdown in which rules are followed to the letter, resulting in low productivity. Discarding senseless rules (while following the important, safety-critical ones) is a necessary behavior for everyone who works in an organization; a person who interprets all orders literally is likely to perform at an unacceptably low level.

In the past, the passage of time lent plausible deniability to a person choosing to toss out silly policies that would quite possibly be forgotten or regretted by the person who issued them. An employee could defensibly say that he followed the rule for three months, realized that it wasn’t helping anything and that no one seemed to care, and eventually just forgot about it or, better yet, interpreted a newer order as superseding the old one. This also imposed a check on managers, who’d embarrass themselves by enforcing a stupid rule. Since no one has precise recall of a months-old conversation of low general importance, the mists of time imposed a Sturgeon Filter on errant management. Stupid rules faded and important ones (like, “Don’t take a nap in the baler”) remained.

One negative side effect of technology is that it has removed that Sturgeon Filter from existence. Too much is put in writing, and persists forever, and the plausible deniability of a worker who (in the interest of getting more done, not of slacking) disobeys it has been reduced substantially. In the past, an intrepid worker could protest a status meeting by “forgetting” to attend it on occasion, or by claiming he’d heard “a murmur” that it was discontinued, or even (if he really wanted to make a point) by taking colleagues out for lunch at a spot not known for speedy service, thus letting an impersonal force make half the team late for it. While few workers actually did such things on a regular basis (to make it obvious would get a person just as fired then as it would today), the fact that they might do so imposed a certain back-pressure on runaway management that doesn’t exist anymore. In 2015, there’s no excuse for missing a meeting when “it’s on your fucking Outlook calendar!”

Technology and persistence have evolved, but management hasn’t. Programmers have looked at their job of “messing with” (or, to use the word above, mutating) computers and software systems and spent 70 years coming up with new ways to compensate for the unreliability that comes from our being human. Consequently, we can build systems that are extremely robust in spite of having been fueled by an unreliable input (human effort). We’ve changed the computers, the types of code that we can write, and the tools we use to do it. Management, on the other hand, is still the same game that it always has been. Many scenes from Mad Men could be set in a 2010s tech company, and the scripts would still fit. The only major change would be in the costumes.

To see the effects of runaway management, combined with the persistence allowed by technology, look no further than the Augean pile of shit that has been given the name of “Agile” or “Scrum”. These are neo-Taylorist ideas that most of industry has rejected, repackaged using macho athletic terminology (“Scrum” is a rugby term). Somehow, these discarded, awful ideas find homes in software engineering. This is a recurring theme. Welch-style stack ranking turned out to be a disaster, as finally proven by its thorough destruction of Enron, but it lives on in the technology industry: Microsoft used it until recently, while Google and Amazon still do. Why is this? What has made technology such an elephant graveyard for disproven management theories and bad ideas in general?

A squandered surplus

The answer is, first, a bit of good news: technology is very powerful. It’s so powerful that it generates a massive surplus, and the work is often engaging enough that the people doing it fail to capture most of the value they produce, because they’re more interested in doing the work than getting the largest possible share of the reward. Because so much value is generated, they’re able to have an upper-middle-class income– and upper-working-class social status– in spite of their shockingly low value-capture ratio.

There used to be an honorable, progressive reason why programmers and scientists had “only” upper-middle-class incomes: the surplus was being reinvested into further research. Unfortunately, that’s no longer the case: short-term thinking, a culture of aggressive self-interest, and mindless cost-cutting have been the norm since the Reagan Era. At this point, the surplus accrues to a tiny set of well-connected people, mostly in the Bay Area: venture capitalists and serial tech executives paying themselves massive salaries that come out of other people’s hard work. However, a great deal of this surplus is spent not on executive-level (and investor-level) pay but on another, related sink: executive slack. Simply put, the industry tolerates a great deal of mismanagement simply because it can do so and still be profitable. That’s where “Agile” and Scrum come from. Technology companies don’t succeed because of that heaping pile of horseshit, but in spite of it. It takes about five years for Scrum to kill a tech company, whereas in a low-margin business it would kill the thing almost instantly.

Where this all goes

Programmers and executives are fundamentally different in how they see the world, and the difference in Sturgeon Filters is key to understanding why. People who are never told that they are wrong will begin to believe that they’re never wrong. People who are constantly told that they’re wrong (because they made objective errors in a difficult formal language) and forced to keep working until they get it right, on the other hand, gain an appreciation for their own fallibility. This results in a cultural clash between two sets of people who could not be more different.

To be a programmer in business is painful because of this mismatch: your subordinates live in a world of formal logic and deterministic computation, and your superiors live in the human world, which is one of psychological manipulation, emotion, and social-proof arbitrage. I’ve often noted that programming interviews are tough not because of the technical questions, but because there is often a mix of technical questions and “fit” questions in them, and while neither category is terribly hard on its own, the combination can be deadly. Technical questions are often about getting the right answer: the objective truth. By contrast, “fit” questions like “What would you do if given a deadline that you found unreasonable?” demand a plausible and socially attractive lie. (“I would be a team player.”) Getting the right answer is an important skill, and telling a socially convenient lie is also an important skill… but having to context-switch between them at a moment’s notice is, for anyone, going to be difficult.

In the long term, however, this cultural divergence seems almost destined to subordinate software engineers, inappropriately, to business people. A good software engineer is aware of all the ways in which he might be wrong, whereas being an executive is all about being so thoroughly convinced that one is right that others cannot even conceive of disagreement– the “reality distortion field”. The former job requires building an airtight Sturgeon Filter so that crap almost never gets through; the latter mandates tearing down one’s Sturgeon Filter and proclaiming loudly that one’s own crap doesn’t stink.