Continuous negotiation

After reading this piece about asking for raises in software engineering, I feel compelled to share a little something about negotiation. I can’t claim to be great at it, personally. I know how it works but– I’ll be honest– negotiating can be really difficult for almost all of us, myself included. As humans, we find it hard to ask someone for a favor or consideration when the request might be received badly. We also have an aversion to direct challenge and explicit recognition of social status. It’s awkward as hell to ask, “So, what do you think of me?” Many negotiations are, fundamentally, uncomfortably close to that question. There is one thing that I’ve learned about successful workplace negotiators: they do it continuously, persistently, and creatively.

The comic-book depiction of salary negotiation is one in which the underpaid, under-appreciated employee (with arms full of folders and papers representing “work”) goes into her office and asks her pointy-haired boss for a raise: a measly 10 percent increase that she has, no doubt, earned. In a just world, she’d get it; but in the real world, she gets a line about there being “no money in the budget”. This depiction of salary negotiation gets it completely wrong. It sets negotiation up as an episodic, all-or-nothing affair in which the request must either be immediately granted (that’s rare) or the negotiator shall slink away, defeated.

Here’s why that scenario plays out so badly. Sure, she deserves a raise; but at that moment in time, the boss is presented with the unattractive proposition of paying more (or, worse yet, justifying higher payment to someone else) for the same work; he rejects it, and the employee slinks away, defeated and bitter. If this scenario plays out as described, it’s often a case where she failed to recognize the continually occurring opportunities for micronegotiations.

First of all, if someone asks for a raise in good faith and is declined, that’s an opportunity to ask for something else: a better title, improved project allocation, a conference budget, and possibly the capacity to delegate undesirable work. Even if “there’s no money in the budget” is a flat-out lie, there’s nowhere to proceed on the money front– you can’t call your boss a liar and say, “I think there is” or “Have you checked?”– so you look for something else that might be non-monetary, like a better working space. Titles are a good place to start. People tend to think that they “don’t matter”, but they actually matter a great deal, as public statements about how much trust the organization has placed in a person’s competence. They’re also given away liberally when managers aren’t able to give salary increases to people who “obviously deserve” them. Don’t get me wrong: I’d take a 75% pay raise over a fancy title; but if a raise is out of the question, then I’d prefer to ask for something else that I might actually get. When cash is tight, titles are cheap. As things improve, who gets first pick of the green-field, new projects that emerge? The people with the strongest reputations, of which titles are an important and formal component. When cash is abundant, it usually flows to the people on or near those high-profile projects.

Many things that we do, as humans, are negotiations, often subtle. Take a status meeting (as in project status, of course). Standup is like Scrabble. Bad players focus on the 100-point words. Good players try not to open up the board. Giving elaborate status updates is to focus on the 100-point words at the expense of strategic play. A terse, effective update is a much better play. If you open yourself up to a follow-on question about project status (e.g. why something took a certain length of time, or needed to be done in a certain way) then you’ve done it wrong. You put something on the board that should have never gone there. The right status (here meaning social status) stance to take when giving (project) status is: “I will tell you what I am doing, but I decide how much visibility others get into my work, because there are people who are audited and people who are implicitly trusted and the decision has already been made that I’m in the second category… and I’m pretty sure we agree on this already but if you disagree, we’re gonna fucking dance.” When you give a complete but terse status update, you’re saying, “I’m willing to keep you apprised of what I’m up to, because I’m proud of my work, but I refuse to justify time because I work too hard and I’m simply too valuable to be treated as one who has to justify his own working time.”

Timeliness is another area of micronegotiation, and around meetings one sees a fair amount of status lateness (mixed with “good faith” random lateness that happens to everyone). The person who shows up late to a status meeting is saying, “I have the privilege of spending less time giving status (a subordinate activity) than the rest of you”. The boss who makes the uncomfortable joke-that-isn’t about that person being habitually late is saying, “you’re asking for too much; try being the 4th-latest a couple of times”. For what it’s worth, I think that status lateness is an extremely ineffective form of micronegotiation– unless you can establish that the lateness is because you’re performing an important task. Some “power moves” enhance status capital by exploiting the human aversion to cognitive dissonance (he’s acting like an alpha, so he must be an alpha) but others spend it, and status lateness tends to be one that spends it, because lateness is just as often a sign of sloppiness as of high value. Any asshole can be late, and the signature behavior of a true high-status person is not habitual minor lateness. In fact, actual high-power people, in general, are punctual and loyal and willing to do the grungiest of the grunge work for an important project or mission, but become magically unavailable for the unimportant stuff. If you’re looking to ape the signature of a high-power person (and I wouldn’t recommend doing it via status lateness, because there are better ways) you shouldn’t do it by being 10 minutes late for every standup. That just looks sloppy. You do it by being early or on time, most of the time, and missing a few such meetings completely for an important reason. (“Sorry that I missed the meeting. I was busy with <something that actually matters>.”) Of course, you have to do this in a way that doesn’t offend, humiliate, or annoy the rest of the team, and it’s so hard to pull that off with status lateness that I’d suspect that anyone with the social skills to do it does not need to take negotiation advice from the likes of me.

Most negotiation theory is focused on large, episodic negotiations as if those were the way that progress in business is made. To be sure, those episodic meetings matter quite a bit. There’s probably a good 10-40 percent of swing space (at upper levels, much more!) in terms of the salary available to a person at a specific career juncture. However, what matters just as much is the preparation through micronegotiations. Someone with the track record of a 10-years-and-still-junior engineer isn’t in the running for $250,000/year jobs no matter how good he is at episodic salary negotiations. It’s hard to back up one’s demand for a raise if one is not perceived as a high performer, and that has as much to do with project allocation as with talent and raw exertion, and getting the best projects usually comes down to skilled micronegotiations (“hey, I’d like to help out”). In the workplace, when it comes to higher pay or status, the episodic negotiations usually come far too late– after a series of missed micronegotiation opportunities. One shouldn’t wait until one is underpaid, underappreciated, under-challenged, or overwhelmed with uninteresting work, because “the breaking point” is too late. The micronegotiations have to occur over time, and they must happen so fluently that most people aren’t even aware that the micronegotiations exist.

One upside of micronegotiation over episodic negotiation is that it’s rarely zero-sum. When you ask for a $20,000 raise directly (instead of something that doesn’t cost anything, like an improved title or more autonomy or a special project) you are marking a spot on a zero-sum spectrum, and that’s not a good strategy because you want your negotiating partner to be, well, a partner rather than an adversary. Micronegotiations are usually not zero-sum, because they usually pertain to matters that have unequal value to the parties involved. Let’s say that you work in an open-plan office. For programmers, they’re suboptimal and it’s probably not wise to ask for a private office; but some seats are better than others. Noise can be tuned out; visibility from behind is humiliating, stress-inducing, and depicts a person as having low social status. If you say to your boss, “I think we agree that I have a lot of important stuff on my plate, and I want the next seat in row A that becomes available”– getting a wall at your back– you’re not marking a spot on a zero-sum spectrum, because the people who make the decision as to whether you get a Row-A seat are generally not competing with you for that spot. So it’s no big deal for them to grant it to you. Instead, you’re finding a mutually beneficial solution where everyone wins: you get a better working space, and you’re no longer seen from behind (bringing a subtle improvement to the perception of your status, character, and competency, because the wall at your back depicts you as one who doesn’t fully belong in an open-plan office, but is “taking one for the team” by working in the pit), while your boss gets more output and is being asked for a favor (cf. Ben Franklin) that will demand less from him than a pay raise under a budget freeze.

The problem with software engineers isn’t that they’re bad at episodic salary negotiations. No one is good at those. If you’ve let yourself become undervalued by such a substantial amount that it “comes to a head”, you’re in a position that takes a lot of social acumen to talk your way out of. The real problem is that they aren’t aware of the micronegotiations that are constantly happening around them. To be fair, many micronegotiations seem like the opposite: humility. When you hold the elevator for someone, you’re not self-effacingly representing your time as unimportant; instead, you’re showing that you understand the other person’s value and importance, which is a way of encouraging the other person to likewise value you. The best micronegotiators never seem to be out for themselves, but looking out for the group. It’s not “let’s get this shitty task off my plate and throw it into some dark corner of the company” but, “let’s get together and discuss how to remove some recurring commitments from the team”.

What does good negotiation look like? Well, I’m at 1,700 words, and it would take another 17,000 to scratch the surface, and I’m far from being an expert on that topic. What it isn’t, most of the time, is formal and episodic. It’s continuous, and long-term-oriented, and often positive-sum. When you ask for something, whether it’s a pay raise or a better seat in the office, it’s OK to walk away without it. What you can’t leave on the table is your own status; you can leave as one who didn’t get X, but you can’t leave as a person who didn’t deserve X. If your boss can’t raise your pay, get a title bump and better projects, and thank him in advance for keeping you in mind when the budget’s more open. If a wall at your back or a private office isn’t in the cards, then get a day per week of WFH and make damn sure that it’s your most productive day. This way, even if you’re not getting exactly the X that you asked for, you’re allowing a public statement to stand that, once an X becomes available, you deserve it.

Underappreciated workers don’t need to read more about episodic negotiations and BATNA and “tactics”. They need to learn how to play the long game. Long-game negotiation advice doesn’t sell as well because, well, it takes years before results are achieved; but, I would surmise, it’s a lot more effective.

Cool vs. powerful

Early this morning, this article crossed my transom: Why I Won’t Run Another Startup. It’s short, and it’s excellent. Go read it.

It brought to mind an interesting social dynamic that, I think, is highly relevant to people trying to position themselves in an economy that is increasingly fluid, but still respects many of the old rules. In my mind, the key quote is this one, and it agrees with my own personal experience in and around startups, where I’ve been on both sides of the purchasing discussion:

Every office-bound exec wants to love a startup. Like a pet. But no one wants to buy from a startup. Especially big companies. Big companies want to buy from big, stable businesses. They want to trust that you’ll still be around in a few years. And their people need to feel you’re a familiar name.

Startups are cool. Someone who is putting his reputation and years of emotional and financial investment at risk, for gold or glory, conforms to a charismatic archetype. That “cool” status might beget power– but usually not. People like scrappy underdogs, but they don’t trust them. Being “scrappy” or “lean” makes you cute, and it might inspire in others a mild desire to protect you, but you don’t have power until people want you to protect them.


One of the more obvious examples of “cool versus powerful” is in an urban nightclub scene, which has its own intriguing sociology. Nightclub and party scenes are staunchly elitist and hierarchical but, at the same time, eager to flout the mainstream concept of social status. A 47-year-old corporate executive worth $75 million might be turned away at the door, while a 21-year-old male model gets in because he knows the promoter. Casinos have a similar dynamic: by design, pure randomness (except in poker and, to a degree, in blackjack) can make you either a gloating winner or a stupendous loser for the night. The gods of the dice are egalitarian with regard to the “real world”. People are attracted to both of these scenes because they have definitions of cool that are often contrary to those of mainstream, “square”, society.

On Saturday night at the club, old status norms are inverted or ignored. In a reversal of uncool corporate patriarchy, the young outrank the old, women outrank men, and having friends who are club promoters matters more than having friends who are hiring managers or investors. Such is “cool”. Cool may be fickle, but it can make a great deal of money while it lasts. Most cool people will be poor, unable either to bottle their own lightning or to exploit others’ electricity in a useful way, but a few who open the right nightclub in the right spot will make millions. Overtly trying to make money (given that most cool people, though middle- to upper-middle-class in native socioeconomic status, have very little cash on hand due to youth and underemployment) is deliberately uncool. In fact, most of the money made in the cool industry is from uncool people who want in, e.g. investment bankers whose only hope of entry is to drop $750 for a $20 bottle of vodka (“bottle service”).

Cool rarely leads to meaningful social status, and it doesn’t last. I’m writing this at 6:30 on a Wednesday morning in Chicago; at this exact moment and place, knowing the right club promoter in L.A. means nothing. (I’m also a 31-year-old married man. Besides, if I did care to try for cool– I wasn’t so successful when I was the right age for it– I’d tell the U.S. nightlife to fuck itself and head for Budapest’s ruin pubs; but that’s another story.) Cool rarely lasts after the age of 30, an age at which people are just starting to have actual power. And while one of the most powerful things (in terms of having a long-term effect on humanity) one can do is to contribute to posterity either as a parent or a teacher, both roles are decidedly uncool.

Open-plan offices

One of my least favorite office trends is that toward cramped, noisy spaces: the open-plan office. Somehow, the Wolf of Wall Street work environment became the coolest layout in the working world. It’s ridiculously ineffective: it causes people to be sicker and less productive, and while the open-plan layout is sold as being “collaborative”, it actually turns adversarial pretty quickly. It’s a recurring 9-hour economy class plane ride to nowhere, which is not exactly the best theater for human relationships or camaraderie. On an airplane, people just want their pills or booze to kick in so they can forget their physical discomfort for long enough to sleep, and they’re so cranky that even the flight attendant offering free beverages annoys them; in an office, they just want to put their headphones on and get something the fuck done.

Why is an open-plan office “cool”? Those who tend to view management in the worst possible light will say that open-plan is about surveillance, control, and ego-fulfillment for the bosses. Lackeys who trust management implicitly actually believe the nonsense about these spaces being “collaborative”. Neither is correct. The open-plan monster is actually about marketing. “Scrappy” startups have to sell themselves constantly to investors and clients. The actual getting done of work is quite subtle. Show me a quiet workplace where the median age is 45, people have offices with doors, half the staff are women, and there are mathematical scribblings on the whiteboards, and you’ve shown me a place where a lot’s getting done, but it doesn’t look productive from the “pattern matching” viewpoint of a SillyCon Valley venture capitalist. At 10:30 on a random Tuesday, all I’m going to see are people (old people! women with pictures of kids on their desks! people who leave at 5:30!) typing abstract symbols into computers. Are they writing formal verification software that will save lives… or are they playing some complicated text adventure game that happens to run in emacs and just look like Haskell code? If I’m an average VC, I won’t have a clue. Show me a typical open-plan startup office, and it immediately looks frantically busy, even if little’s getting done.

Being in an open-plan office makes you cool but it lowers your social status. There’s no contradiction there, because coolness and power often contradict. It makes you cool because it shows that you’re young and adaptable to the startup’s ravenous appetite for attractiveness– to investors and clients. The company’s not established or trusted yet, so it needs to strike a powerful image, and if you work in a trading-floor environment (for 1/7th of what your trading counterpart is paid in order to compensate for that environment) then you’re doing your part to create that image. You’re pitching in to the startup’s overarching need to market itself; you’re a team-player. (If you want to get actual work done, do it before 10:00 or after 5:00.) By accepting the otherwise degrading work situation of being visible from behind, you’re part of what makes that “scrappy underdog” an emotional favorite: the cool factor.

All of that said, people with status and power avoid visibility into many aspects of their work. Always. This shows up even in physical position. Even in an “egalitarian” open-plan office, the higher-status people will, over time as seats shuffle, be less visible from behind than the peons. A newly-hired VP might face undesirable lines of sight in his first six months, but after a couple of years, he’ll be in the row with a wall at his back.

One thing that I have learned is that it’s best if no one knows how hard you’re working. I average about 50 hours per week but, occasionally, I slack and put in a 3-hour day. Other times, I throw down and work 14-hour days (much of that at home). I prefer that no one know which is happening at the time. I certainly don’t want to be perceived as the hardest-working person in the office (low status) or the least hard-working (low commitment). Being “the Office X” for any X is undesirable; it’s OK to be liberal (or conservative, or Christian, or atheist, or feminist) and known for it, but you don’t want to be the Office Liberal, or the Office Conservative, or the Office Christian, or the Office Atheist, or the Office Feminist. Likewise, you never want to be the Office Slacker or the Office Workhorse. So on the rare day that I do need to slack, I up-moderate the appearance of working hard and do a couple of tasks from my secret backlog of things that look hard but only take a couple of minutes; and when I am working harder than anyone else, I down-moderate that appearance so that whatever I achieve seems more effortless than it actually was, because visible sacrifice or extreme effort might make one a “team player” but it’s a low-status move.

That said, even if my work effort were exactly socially optimal (75th percentile in a typical startup, or 50 hours per week) I would still want uncertainty about how much I’m working. Let’s say that 10 hours per day is the socially optimal work effort and I’m working exactly that. Still, if anyone else knows that I’m working exactly that much, then I utterly lose, status-wise, compared to the “wizard” who works the exact same amount but has completely avoided visibility into his work and might be working 3 hours per day and might be working 17. Being “pinpointed”, even if you’re at the exact right spot, makes you a loser. That’s why I hate “Agile” regimes that are designed to pinpoint people.
Ask around about the work effort of a high-status person (like a CEO) and, because such a person is not pinpointed, people will see what they want to see. Those who value effort will perceive an all-in hard worker, while those who admire talented slackers will see a supremely efficient “10x” sort of person.

This is what young people generally don’t get– and that older people usually understand through experience, making them less of a “culture fit” for the more cultish companies– about “Agile” and open-plan offices and violent transparency. Allowing extreme visibility into your work, as the “Agile” fad that is destroying software engineering demands, makes you cool. It makes you well-liked and affable. However, it disempowers you, even if your work commitment is exactly the socially optimal amount. It makes you a pretty little boy (or girl); not a man or woman. It makes you “a culture fit” but never a culture maker.

When you let people know how hard (or how little) you work, you’re giving away critical information and getting nothing in return. How little or how much you work can always be used against you; if you visibly work hard, people might see your efforts as unsustainable; they might distrust you on the suspicion that you have ulterior motives, like a rogue trader who never takes vacation; they might start tossing you undesirable grunt work assuming you’ll do it with minimal complaint; or they might think that you’re incompetent and have to work long hours to make up for your lack of natural ability. If you’re smart, you keep that information close to your chest. Just as your managers and colleagues should know nothing about your sex life– whether you’re having a lot of sex, or none, or an average amount– they should not know how many hours you’re actually working: whether you’re the biggest slacker, the hardest worker, or right in the middle.

The most powerful statements that a person makes are what she gives, expecting nothing in return. It is not always a low-status move to give something and ask for nothing back. Giving people no-strings gifts that help them and don’t hurt you is not just ethically good; it also improves your status by showing that you have good judgment. Giving people gifts that don’t help them, but that hurt you, either marks you as a supplicant or shows that you have terrible judgment. No one gains anything meaningful when you provide Agile-style micro-visibility into your work– executives don’t make better decisions, the team doesn’t gel any better– but you put yourself at unnecessary political risk. You’re hurting yourself “for the team” but the team doesn’t actually gain anything other than information it didn’t ask for and can’t use (unless someone intends to use it politically, and possibly against you). By doing this, you mark yourself as the over-eager sort of person who unintentionally generates political messes.

The open-plan office is cool but lowers one’s status. That said, cubicles are probably worse: low status and uncool. Still, I’d rather have a private office: uncool and middle-aged, but high in status. Private space means that your work actually matters.

“I don’t care what other people think about me”

One of my favorite examples of the divergence between what is cool and what is powerful is the statement, “I don’t care what other people think about me”. It’s usually a dishonest statement. Why would anyone who means it say it? It’s also a cool statement. Cool people don’t care (or, more accurately, don’t seem to care) what is thought about them. However, it’s disempowering. Let’s transform it into something equivalent: “I don’t care about my reputation”. That’s not so much a “cool” statement as a reckless one. Reputation has a phenomenal impact on a person’s ability to be effective, and “I don’t care if I’m effective” is a loser’s statement. And yet, reputation is precisely what others think about a person. So why is one equivalent statement cool, and the other reckless?

Usually, people who say, “I don’t care what you think about me” are saying one of two things. The first is a fuck-you, either to the person or to the masses. Being cool is somewhat democratic; it’s about whether you are popular, seen as attractive, or otherwise beloved by the masses. Appealing to power is not democratic; most people’s votes actually don’t count for much. (Of course, if you brazenly flip off the masses, you might offend many people who do matter, so it’s not advisable in most circumstances.) The 24-year-olds in the open-plan office who play foosball from noon till 9:00 can decide if you’re cool, but they have no say in what you’re paid, how your work is evaluated, or whether you’re promoted or fired. It’s better to have them like you than to be disliked by them, but they’re not the ones making decisions that matter. So, a person who says, “I don’t care what you think about me” is often saying, “your vote doesn’t matter”. That’s a bit of a stupid statement, because even other prole-kickers don’t like the brazen prole-kickers.

The second meaning of “I don’t care what you think about me” is “I don’t care if you like me”. That’s fundamentally different. Personally, I do care about what people think of me. Reputation is far more powerful a factor in one’s ability to be effective in anything involving other humans than is individual capability. A reputation for competence is crucial. However, I don’t really care that much about being liked. I don’t want to be hated, but there’s really no difference between being mildly disliked by someone who’d never back me in a tight spot and being mildly liked by a person who’d never back me in a tight spot. It’s all the same, along that stretch: don’t make this person an enemy, don’t trust this person as a friend.

Machiavelli was half-right and half-wrong with “It is better to be feared than loved.” It is not very valuable to be vacuously “loved”, as “scrappy startups” often are. His argument was that beloved princes are often betrayed– and we see that, all the time, in Silicon Valley– whereas feared princes are less likely to be maltreated. This may apply to Renaissance politics, that period being just as barbaric (if not more so) as the medieval era before it; but I don’t think that it applies to modern corporate politics. Being loved isn’t valuable, but being feared is detrimental as well. You don’t get what you want through fear unless what you want is to be avoided and friendless.

It is better to be considered competent than to be feared or loved. Competent at what? That, itself, is an interesting question. Take my notes, above, on why it is undesirable to provide visibility into how hard you work. If you’re a known slacker who, coasting on natural ability or acquired expertise, gets the job done and does it well, you’ve proven your competence at the job, but you’ve shown social incompetence, by definition, because people know that you’re working less hard than the rest of the team. Even if no one resents you for it, the fact that people have this information on you lowers your status. Likewise, if you’re known as a reliable hard worker, you’ve shown competence at self-control and focus; but, yet again, the fact that people know that you work longer hours than everyone else shows a bit of social incompetence. The optimal image, in terms of where you are on the piker-versus-workhorse continuum, is to be high enough in status that others’ image of you is exactly what you want it to be. I would say, then, that one wants to be seen as being competent at life. It is not enough to be competent only at the job; that keeps you from getting fired, but it won’t get you promoted.

Of course, the idea that there’s such a thing as “competent at life” is ridiculous. I’m highly competent at most things that I do, but if I somehow got into professional football, I’d be one of the worst incompetents ever seen. “Competent at life” is an image, not a hard reality. There probably isn’t such a thing, because for anyone there is a context that would humiliate that person (for me, professional football). That said, there are people who have the self-awareness and social acumen to make themselves most visible in contexts where they are the most competent (and have moments of incompetence, such as when learning a new skill, in private) and there are others who don’t. It’s best to be in the former group and therefore create the image (since there is no such reality) of universal competence.

It is better to be thought competent than to be loved or to be feared. If you are beloved but you are not respected and you are not trusted to be competent, you can be tossed aside in favor of someone who is prettier or more charismatic or younger or cooler or “scrappier” and more of an underdog; and, over time, the probability of that happening approaches one. People will feel a mild desire to protect you, but no one will come to you for protection. This is of minimal use, because the amount of emotional energy that powerful people have to expend in the protection of others is usually low; the mentor/protege dynamic of cinema is extremely rare in the real world; most people with actual power were self-mentoring. However, if you are feared, that doesn’t necessarily mean that you’ll be respected or seen as capable. Plenty of people are feared because they’re catastrophically incompetent. You’re much more likely to be avoided, isolated, and miserable, than to get your way through fear. Furthermore, it’s often necessary that blatant adversaries (i.e. someone who might damage your reputation or career to bolster his, or to settle a score) be intimidated, but you never want them to be frightened. An intimidated adversary declines to fight you, which is what you want; a frightened or humiliated one might do any number of things, most of which are bad for all parties.

Cool can disempower

It is not always undesirable to be cool or popular. Depending on one’s aims, it might be useful. Very young people are almost never powerful, and will have more fun if they’re seen as cool than if not. When you’re 17, teachers and parents and admissions officers (all uncool) have the power, so there’s a side game that’s sexier and more accessible. When you’re 23, being “cool” can get you a following and venture funding and turn you from yet another app developer to a “founder” overnight. There is, however, a point (probably in the late 20s) at which cool becomes a childish thing to put away. If you work well in a Scrum environment, that might make you “cool” in light of current IT fads, but it ultimately shows that you’ve excelled at subordination, which does not lend you an aura of power. (“Agile: How to be a 10X Junior Developer.”)

I am, of course, not saying that being likeable or cool is ever a bad thing. All else considered, it’s better to have them than not. They just aren’t very useful. They aren’t the same thing as status or power, and sometimes one must be chosen over the other. Open-plan culture and the startup world fetishize coolness and distract people from the uncool but important games that they’ll have to play (and get good at) in order to become adults. Ultimately, having a reputation for professional competence and being able to afford nice vacations is just more important than being considered “cool” by people who won’t remember one’s name in 10 years. At some point, you realize that it’s more important to have a career that’ll enable you to send your kids to top schools than to win the approval of people who are just a year and a half out of those schools. The “sexiness” of the startup cult seems to derive from those who haven’t yet learned this.

Software’s management problem

Yesterday, I posted a list of the failings of Scrum and the “Agile” fad, and the reviews have been mixed. To be honest, I find the topic of Agile rather boring. I recoil when I encounter it, because it saddles engineers with a bunch of nonsense that has nothing to do with computer science or software engineering, but the more central topic is the fact of an industry that has become really bad at management. “User stories” are a symptom, but the root problem is much deeper.

It’s easy to complain about incompetent managers or “management culture” and make fun of foolish executives when their egos cause them to flush millions of dollars in value down the toilet, but that’s fundamentally an immature person’s game. It’s much more fruitful to look into the soul of a craft or an industry, such as computer programming, beset with open-plan offices and user stories and ask, “How the fuck did this shit come into being?” It didn’t happen in a day, and it wasn’t by accident.

So why is software management typically so bad? What about our industry causes it to be poorly managed? And what can we learn from it, in order to do better?

“Everyone hates” middle management, but it’s important

Middle managers take a lot of flak from above and below, and the stock image of a middle manager isn’t a pleasant one. Whether it’s the horrendous Bill Lumbergh of Office Space, the bumbling Michael Scott in the U.S. version of The Office, or his nastier U.K. counterpart, David Brent; the image of middle management is a deeply negative one: a petty tyrant without vision, or an incompetent lackey with the ruthlessness, but not the charisma, of an executive. This, I think, exists because of a perverse need for the low to identify with the high (royalism) through a shared contempt for the “bourgeois” middle. Often, middle managers get the brunt of the negativity and even blame, for example, for terrible decisions made by executives. Some dickhead higher-up decides to impose stack-ranking, but it’s middle managers who get stuck having to fire people, and who end up being the most hated people in the whole affair. It’s much easier to get the low to hate the one-rank-up less-low than to overcome their desire to identify with the high.

In truth, the executive/manager distinction is something that upper-tier professional managers (i.e. “executives”) invented for their own benefit, as a way of differentiating themselves from their lower-tier counterparts. Ultimately, the job title of manager isn’t very sexy. Traditionally, a manager is someone who makes decisions pertaining to an asset that someone else owns: a financial manager allocates a wealthy person’s funds, an actor’s manager is a subordinate who manages his reputation, and a corporate manager oversees the deployment of a company’s labor and capital. As managers (or, to use a more icy term, “handlers”) make decisions over someone else’s assets, they’re often distrusted, because the bad ones do a lot of harm to the owners of those assets. A few are unethical, using their superior knowledge of what is being managed to further their own aims rather than the interests of the owners of the resources. Other managers are abandoned when politics turns against them. At any rate, the manager of a resource is officially subordinate to the person owning that resource, and those who choose to be insubordinate are (often rightly) viewed as unethical or even as crooks.

Upper-tier professional managers began to identify themselves as executives in order to get away from the image of a subordinate, claiming a special knowledge of how to run businesses and inventing demand for it. When the manager/executive distinction formed, and to a degree even now, it wasn’t intended to be one of hierarchical rank or pay grade, but of social status. Within a group, social status gives a person the right to define how he or she is evaluated. (In fact, one of my issues with Scrum is that, while it attempts to equalize, it does so by imposing the low-status treatment– frequent requests for estimates, mandatory visibility into one’s work progress, low allowance for self-directed work, emphasis on measurability over quality– on the entire team.) A manager is responsible for putting a defined set of resources (including, and most often in the corporate setting, people) to defined tasks. It’s measurable, and a measurable job is almost always of less status than an intangible one. The job of an executive is… well, unless you’re within that high-status group yourself, you wouldn’t understand it.

Managers have hiring and firing authority but don’t get to decide how they, themselves, are evaluated. Executives, in general, do have that freedom, because their jobs aren’t as rigidly defined. While an executive will often have people (a mix of managers, assistants, and possibly other executives) reporting to him, his job isn’t to impose certain rules or processes over those people. Rather, those people are provided to assist him, not as a “company-owned resource” that he must formally manage, but toward whatever assignment he has devised for himself. Executives can fail just as they may succeed, but they’re afforded the luxury of succeeding or failing on terms that they have set. That’s the perk (and, some would argue, the definition) of being an executive.

With all of this said, being a middle manager (i.e. a manager who does not have the social status of an executive) is a decidedly unsexy job. It’s a description by exclusion: it means that a person has the responsibility of organizing other people (and is therefore exposed to any risk in their performance, in addition to her own fluctuation) but not the social status or true power that would allow her to define her own job and process of evaluation. The fact that middle managers take bumps from below and above shouldn’t be surprising. Executives are only accountable for the upkeep of their own social status in order to remain in the in-crowd, and workers are only accountable for their own performance, but managers are accountable for the performance of many people including themselves. An executive can toss blame for a fuck-up (his or someone else’s) to someone else in the organization, but a manager is stuck with responsibility for her own fuck-ups and those that occur below her (in addition to any, from above, that are thrown onto her). In many organizations, middle managers are forced into being the hardest workers, having to clean up messes made by the minimum-effort players below them and the self-interested, aggressive, and often narcissistic power players above them.

The software industry has, over the decades, de-emphasized middle management. Recognizing it as a job that few people want, the industry has factored it out into roles like the “product manager”, who may or may not have reports, the “software architect” (an important role but a dangerous title) and, in some cases, ill-advised pseudo-managerial positions like “scrotum master”. To a large degree, this change has been troubling, because the disappearance of middle management capability within software engineering has made a mess of the industry. Jobs, such as creating an inclusive culture rather than a “brogrammer” culture based on AMOGing, that were traditionally the province of middle management, go undone. Typically, there is no one given the authority to do them, and doing that kind of cultural or managerial work on one’s own initiative (i.e. without formal authority) is extremely dangerous, so people avoid it.

Against “flat hierarchy”

It may surprise people that, while I champion open allocation, I’m against so-called “flat hierarchy”. The two concepts, I think, are orthogonal even if often linked. Open allocation is the idea that programmers (and, perhaps, creative workers in general) ought to be rewarded and encouraged to find more profitable uses of their time and energy. While “engineers get to work on whatever they want” isn’t an effective management strategy, I support removing authoritarian obstacles (e.g. headcount limitations, rigid job descriptions, time-in-grade mandates like Google’s “18-month rule”) that prevent them from taking initiative and enhancing their value to the company by working on something more important than stated executive priorities. It’s not that I think engineers should be able to work on whatever they want; only that they should have the right to allocate their time to existing corporate needs without being required to appeal to a specific person first. Open allocation is about equality of opportunity (i.e. you won’t be told that you can’t work on X because of bullshit headcount limitations that are wielded against the politically less empowered) rather than anarchy. What I don’t think can work is “flat hierarchy” or “no middle managers”. It goes against human nature. While persistent hierarchies of people can be (and often are) toxic, we think in hierarchical terms. We group like things with like, we form taxonomies, and we understand complex systems by composing them into simpler ones… and that’s hierarchy. Once there is a certain amount of complexity in anything, humans will demand or impose a hierarchical model over it. This creates a need for people with the power and the social skills to ensure that the conceptual hierarchy is sound, and that any hierarchical relationships among people are congruent with it, and do not outlive their need without two-party consent to their continuance.

Sociologically, this means that most companies with “flat hierarchy” end up with an emergent hierarchy in spite of themselves. Plenty of self-fashioned benevolent executives wish to see a flat hierarchy below them, because it saves them from the onerous task of choosing (or asking the group to choose) formally titled middle managers, who might prove themselves untrustworthy or dangerous once given power. To her credit, the “benevolent executive” might listen equally to the “flat” array of people below her when there are four or six or possibly even fifteen. At fifty people, though, it’s almost certain that some people have a shorter, hotter line to her and her hiring, firing, allocating, and promoting authority. This means, in effect, that they’re now bosses. This is problematic, because the de facto middle managers who emerge, having no formal title or stability in position, have to compete with the rest of the group to hold status. A formally entitled manager, at least on paper, isn’t supposed to compete with his subordinates, because he’s evaluated on group achievement rather than personal glory. An informal manager, held in that position by a perception (not always reality!) of superior individual performance, is required to use that informal power to maintain said superiority, to the detriment of the less powerful.

Furthermore, as I’ve already addressed, there are jobs that only middle managers will do, because either no one wants to do them, or because it is dangerous to do them without formal authority. Conflict resolution is a big one, but there are subtler cultural jobs that emerge. Who tells the young “rock star” with a horrid personality that he needs to stop calling co-workers “pussies”, because even if the young men in his group venerate him and don’t mind him, the women who overhear it find it offensive? (The “benevolent despot” CEO is likely not around when this guy acts up.) Who decides which technical disputes (e.g. Haskell vs. Java) are important and which (tabs vs. spaces) are a waste of time? Most importantly, who mentors incoming talent and informs people of the existing political landscape (as one always exists) in order to prevent people from needlessly damaging their careers and reputations? Software engineers don’t like middle management because they don’t see what middle managers do when they do their jobs well, which is to remove politics. They’d rather pretend that they work in “politics-free” (or “meritocratic”) environments. This means that they are oblivious, because even when organizations run well, politics exists.

As a final side note on this topic, the term “political” (as a negative) is one I find irritating. When someone loses his driver’s license and is fined because he was driving drunk, that’s political, even if it’s what should happen. It’s an exercise of state power for group benefit, at individual expense. It’s politics working well. Complaints about office “politics” or past decisions being “political” are a backhanded way of saying that the decisions or environment are unfair and corrupt. I would prefer that people use those words to describe bad behavior. Don’t say that the decision was “political”; say that it was wrong and unfair.

This common conflation– of all forms of “getting political” with ill-intended political behaviors– causes harm to those who are political toward beneficial ends. One of the most time-honored ways for the elite to exercise control over the middle class is to encourage in them a general aversion to “being political”. Thus you hear statements like, “I support equality for women, but I’d never become a feminist or get political about it” from well-intended, if somewhat oblivious, middle-class professionals. The corporate upper class can’t say what it means– “We don’t want unions, and even professional guilds we find irritating”– so they say, “Don’t get political”, which sounds enough like the middle-middle-class “Don’t discuss politics or religion at the dinner table” to get a pass. This aversion to “being political” has become so ingrained in software engineering culture that we’ve lost sight of our need for people who are skilled at navigating and mitigating the politics that emerges naturally in groups of people. As a result, we’re becoming one of the most sexist, ageist, and classist industries in the white-collar world. Moreover, we’re still as “managed” as we would be in a more traditional environment. We still face business-driven engineering (as Agile is an attempt to “patch” business-driven engineering, despite the latter being intractably broken) and low social status and widespread incompetence in management. The fact that there are well-studied systemic reasons for technology executives to be, on average, low-quality people doesn’t help either.

Between 1997 and 2015, we’ve seen a lack of desire to fix these problems, because technology has been such a high-margin industry that it’s been able to cover up egregious mismanagement. You don’t see open-plan offices and Scrum in high-profit companies because those practices work; you see them because, if you learned how to make great phones 10 years ago or a leading search engine 18 years ago, you can have terrible management practices and still be so profitable that open-plan offices won’t outright kill you… for a while. I prefer to see the proliferation of mismanagement in software as a positive sign. If our industry is this profitable despite having emasculated the concept of middle management, and further in spite of having drooling, non-technical morons (“failed in private equity, try bossing nerds around”) leading its upper ranks, then what might it be able to do if it were properly run? I’ll answer this: a lot more.

Protect, direct, and eject

So what makes a good manager? To make it clear, the notion of being an executive and of being a manager are orthogonal ones. There are managers, there are executives, and there are managers of executives. The principles that I’m getting into here apply both inside and outside of the executive in-crowd.

Hence we can distill the core job of any manager: protect, direct, and eject. The best subordinates will need to spread their wings in order to remain engaged in the work, but they need to be protected from the political minefields that overperformers often unknowingly enter, and they need to be insulated from executives– in particular, given guidance about which higher-ups are friendly and which ones are toxic malefactors on power trips. With the strongest subordinates, it’s almost a guarantee that they’ll want to take on bigger challenges, and the manager’s job is to protect them politically while they do so. The middling subordinates, who tend to show less initiative and confidence, need to be directed: assigned useful work in order to earn their keep. Finally, if there are negative-impact members of the team, and as a last resort if they cannot be improved, they must be removed from the team (“eject”). There aren’t static percentages that apply to these three jobs, and people can move from one category into another. Ideally, a manager would seek to reform low performers before ejecting them, and “direct” the middling performers toward work that improves their skills and engages them more, thus bringing them into the “protect” group. In the ideal world, the manager’s job is 100% “protect”, because I don’t believe that people are intrinsically disengaged and mediocre (i.e. need to be directed). In the real world and on an average team, it’s probably 35% protect, 63% direct, and 2% eject.

Which of these three jobs do managers prefer? Where is their bias? Is it toward directing or protecting? I may be going out on a limb here, but I think that almost no one enjoys the “eject” job. It’s horrible to have to fire someone. It deserves to be done as a last resort, and while some bemoan that the decision to fire someone is often procrastinated, I prefer for it to be that way than for firing (especially if one cannot afford a generous severance) to be taken lightly. Where I think there is confusion about the manager’s role is between the other two jobs: direct versus protect. People who excel at one of these jobs tend to be bad at the other. Mediocre managers tend to manage up and to the short term, which favors the “direct” job: get the executives what they want, quickly and with no hiccups. Good managers tend to favor the long term and recognize the value of rapport with their best subordinates, so they’re willing to take on the “protect” job: expending effort and political capital to protect the good.

It’s probably not surprising that, over time, the most talented managers end up with the most talented subordinates, and vice versa. In the “protect, direct, eject” framework, this makes sense. The best managers generally want to be protectors and mentors, and they get their pick of people reporting to them, and they get people who don’t need to be told what to do so much as protected from unintended consequences of over-performance. Mediocre managers tend to be “direct”-heavy, and end up with people who need to be directed. Finally, the worst managers tend to be “ejectors” (either because they lack the political clout to keep their employees’ jobs, or because they toss blame for their own mistakes, or even because they enjoy firing people) and would be predicted to end up with the worst subordinates, where their job seems to be implementing their terminations (a job that no one wants to do). This seems like an efficient arrangement, though. Shouldn’t talented subordinates be mapped to protectors and mediocre ones be mapped to directors? So what goes wrong?

The first issue is the lack of self-correction. Every system that must evaluate people makes errors, especially if it does so early on (as in the case of most companies, which assign managers on the first day). Moreover, people change over time. I don’t believe that there are people who are inflexibly of the “needs to be directed” or “needs to be ejected” type; people are mostly context, and impressively capable at improving themselves given the right opportunity and encouragement. Managers who are oriented toward directing (i.e. they see their job as telling people what to do) are likely to end up in conflict with strong, independent subordinates. That doesn’t end well. It also goes poorly when capable people are placed under ejectors, as can happen early in their careers. This is what’s behind the Welch Effect: the people most likely to be fired under stack ranking are junior members of underperforming teams (i.e. teams run by ejectors) and it is pathological because, being junior, they typically had the least to do with the team’s underperformance; they’re essentially fired at random.

Reading my assessment, it probably seems obvious that most people are going to consider themselves (as reports into a manager) as being in the “protect” category. Few will admit that they belong in the “direct” category as if it were a native trait of theirs, and I agree that it’s not a native trait. More often, it comes down to relative priorities. Some people want to take on bigger challenges and need the political support (and protection) that will enable them to do so. Others are happy being told what to do, so long as they can go home at 5:00 and their job duties don’t change too frequently or in an adverse way. Not everyone values autonomy, and that’s OK. There’s nothing wrong with either approach to work, and those who prioritize comfort over achievement (which is completely reasonable, especially in an environment where achievement won’t be rewarded commensurate with the compromise of comfort) are often valuable people to have, so long as they’re properly directed. It’s not that such people are inflexibly in a “mediocre worker” category that requires them to be directed more than protected. It’s that their needs and capabilities at a certain time put them there, although a good manager will try to direct them, if they wish to go toward it, up to a higher level of competence; i.e. the “protect” group.

There is, however, a numerical mismatch between the subordinates who are better off protected than directed, and the inclinations of middle managers as a category: there are more talented subordinates (the “protect” category) than managers who view themselves primarily as protectors, and fewer mediocre subordinates (the “direct” category) than managers inclined to direct. Because managers are often rewarded for managing up, and because most corporate executives are in positions with no accountability, those who are picked for the management track are more often those inclined to direct than those who would protect. This, I think, is why middle management gets such a bad name: it’s associated with those who value control over achievement. As I recounted in the last essay, there’s a selection process that favors negative traits. Middle management is often defective because the traits that make a person good at managing up are a readiness to control others and to furtively oppose the interests of one’s subordinates. This results in a category of people who are unduly inclined to distrust (and “direct”) while offering little or none of the protection or opportunity that would enable them to manage high performers. In fact, to many of them, the idea that they should protect their subordinates is a foreign concept: as they see it, their subordinates exist to serve them. Of course, it’s not just the incompetence of many in the role that leads to middle management’s negative reputation. As I discussed, front-line managers are often “fall guys” for executive malfeasance and incompetence, as their lack of executive-level social status makes them easy to scapegoat. People’s desire to identify with power (which middle managers often don’t have, while the executives do) leaves them more than ready to dislike their immediate bosses over failures that are actually the fault of higher-ranking people.

Scrum and software management

The software industry has been trying to disintermediate middle management for decades, to mostly negative results. I’ll readily agree that middle management is often a weak link in organizations, and that the quality of people tapped for it is sometimes low (but not as low as that of the people who fail out of private equity and into investor- or executive-level positions in tech). Even still, middle management is often necessary. The job is important. Someone has to protect new talent from the sharp knives of executives and other managers, direct the middling performers so the company functions, and eject the rare person whose behavior is so toxic as to threaten the functioning of the organization. These jobs can’t be delegated to a “self-managing” (or, just as often, emergently managed) group. Protecting is a job that no one in a “flat” organization has the authority to take on, except for the executives (you know, the people who keep the hierarchy flat and sign the paychecks) who are often too far removed from the bottom to do it. Directing, on the other hand, becomes a job that many people try to do; without clear leaders of the group, you get many who think they’re the leaders and will try to tell others what to do. The long-term result of this is incoherence and tension, until the pushiest people gain credibility (usually by taking credit for others’ work) and win favor from above, and become de facto managers. Finally, ejecting is a job that either no one does (because it’s undesirable, except for psychopaths) or that attracts the worst kinds of people, and is then done in a toxic way.

Where do Scrum and Agile fit into all of this? Naively, they appear like a mechanism to remove middle managers from the equation, or push “people managers” off to the side. I’ll certainly agree that there’s a noxious, deep conflict of interest between people management and project (or product) management, because what is best for those being “people-managed” might be bad for the project (i.e. for a talented subordinate to transfer to a team more in line with his interests is something that a middle manager, held accountable for delivery on that team’s project rather than excellent people-managing, might be averse to letting happen). Many middle managers abuse their power, as a single-point-of-failure (SPOF) in a report’s career at the company, in order to get project-related goals (because, typically, they’re evaluated based on visible deliverables rather than effective people managing) done through intimidation, and I think that has led a couple generations of software engineers to conclude that most middle managers are worthless parasites who only manage up. The problem, however, has more to do with how those managers are evaluated (i.e. their need to “manage up”) and that it forces them to favor directing over protecting.

Despite the flaws of middle management, when you replace that institution with “Agile” or Scrum and the illusion of “flat hierarchy”, you rarely get an improvement. Instead, you get emergent middle management and unexpected, unstable power centers. Agile and Scrum ignore, outright, the goal of protecting subordinates (or, sorry, “Scrum team members”). In fact, the often-stated point of the “user stories” and violent transparency and superficial accountability is to make it impossible for middle managers to protect their reports. Agile is, foremost, about directing and ejecting, but it replaces the single higher-up tyrant with a hundred mildly tyrannical neighbors. The formal police state vanishes in favor of a fuck-your-buddy system where instead of having one career-SPOF in a middle manager, you have tens of career SPOFs. The actual effect of this is to delegate the job that ruined middle management– that of “managing up” into executives– to the whole team.

The big problem with this change is that many executives are narcissistic assholes: probably 30 to 50 percent do more harm than good. It comes with the job. As I mentioned earlier, managers have a job (to manage) while executives are effectively unaccountable, because an executive’s real job is to maintain the social status that places them within the corporation’s nobility. So, companies get a mix of good and bad executives and are rarely very swift in removing the bad ones. A good manager with institutional knowledge will know which executives are friendly and which ones are toxic and best avoided. She’ll steer her reports toward interactions with the good executives, and thereby improve her ability and the ability of her team to get things done, while shooing them away from the bad executives who’ll meddle or do worse. Take away that middle manager and replace her with over-eager, politically clueless young developers on a “Scrum team”, and you get chaos. You get no one looking out for that team, because no one can look out for that team. From above, the team is exposed, not protected.

It’s superficially appealing to throw middle managers overboard. It’s a tough job and a thankless one and there are a huge number of people out there doing it badly. The whole “startup” craze (in which young people have been led to overvalue jobs at companies whose middling tier is composed of “founders” managing up into VCs, rather than traditional middle managers) is based on this appeal: why work at Goldman Sachs and report to “some faceless middle manager” when you can join a startup with a “flat hierarchy” and report directly to the CTO (…and, in truth, have your career dictated by a 27-year-old product manager whom the CTO has known for 8 years and whom he trusts more than he will ever trust you)? I also think that the reflexive rejection of the idea of middle management– why it exists, why it is important, and why it deserves so much more respect than it is given when it is done well– needs to be tossed aside. We haven’t figured out a way to replace it, and the odds are that we won’t do so in this generation. What we do know is that these asinine “methodologies” that trick a team into middle-managing itself (poorly) have got to go.


Middle management is, perhaps surprisingly, both the solution and the problem. It exists. It always will exist. Executives are disinclined to “protect the good” within the organization, and too removed from day-to-day problems to evaluate individual people, in any case. Even at modest size for an organization, the necessary jobs of management– protect, direct, and (as a last resort) eject– cannot be handled by a single person or an executive in-crowd. Pretending that management is an outmoded job is something we do at our peril. Yes, we must acknowledge that it has mostly been done poorly in software for the past 20 years (and possibly for much longer). However, rather than declaring the whole concept obsolete, we have to acknowledge that it is a necessary function– and figure out how to get good at it.

Why “Agile” and especially Scrum are terrible

Follow-up post: here

It’s probably not a secret that I dislike the “Agile” fad that has infested programming. One of the worst varieties of it, Scrum, is a nightmare that I’ve seen actually kill companies. By “kill” I don’t mean “the culture wasn’t as good afterward”; I mean a drop in the stock’s value of more than 85 percent. This shit is toxic and it needs to die yesterday. For those unfamiliar, let’s first define our terms. Then I’ll get into why this stuff is terrible and often detrimental to actual agility. Then I’ll discuss a single, temporary use case under which “Agile” development actually is a good idea, and from there explain why it is so harmful as a permanent arrangement.

So what is Agile?

The “Agile” fad grew up in web consulting, where it had a certain amount of value: when dealing with finicky clients who don’t know what they want, one typically has to choose one of two options. The first is to manage the client: get expectations set, charge appropriately for rework, and maintain a relationship of equality rather than submission. The second is to accept client misbehavior (as, say, many graphic designers must) and orient one’s workflow around client-side dysfunction. Programmers tend not to be good at the first option– managing the client– because it demands too much in the way of social acumen, and the second is appealing to a decision-maker who’s recently been promoted to management and won’t have to do any of the actual work.

There’s a large spectrum of work done under the name of “consulting”. There are great consulting firms, and there are body shops that take on the lowest kind of work. Companies tend to give two types of work to consultancies: the highest-end stuff that they might not have the right people for, and the low-end dreck work that would be a morale-killer if allocated to people they’d actually like to retain for a year or more. Scrum is for the body shops: the ones that expect programmers to suffer when client relationships are mismanaged, and that will take on a lot of low-end, career-incoherent work that no one wants to do.

So what are Scrum and “Agile”? I could get into the different kinds of meetings (“retrospective” and “backlog grooming” and “planning”) or the theory, but the fundamental unifying trait is violent transparency, often one-sided. Programmers are, in many cases, expected to provide humiliating visibility into their time and work, meaning that they must play a side game of appearing productive in addition to their actual job duties. Instead of working on actual, long-term projects that a person could get excited about, they’re relegated to atomized, feature-level “user stories” and often disallowed from working on improvements that can’t be tied to short-term, immediate business needs (often delivered from on high). Agile eliminates the concept of ownership and treats programmers as interchangeable, commoditized components.

In addition to being infantilizing and repellent, Scrum induces needless anxiety about microfluctuations in one’s own productivity. The violent transparency means that, in theory, each person’s hour-by-hour fluctuations are globally visible– and for no good reason, because there’s absolutely no evidence that any of this snake oil actually makes things get done quicker or better in the long run. For people with anxiety or mood disorders, who generally perform well when measured on average long-term productivity, but who tend to be most sensitive to invasions of privacy, this is outright discriminatory.

Specific flaws of “Agile” and Scrum

1. Business-driven engineering.

“Agile” is often sold in comparison with an equally horrible straw-man approach to software design called “Waterfall”. What Waterfall and Agile share (and a common source of their dysfunction) is that both are business-driven development. In Waterfall, projects are defined first by business executives, design is done by middle managers and architects, and then implementation, testing, and operations are carried out by multiple tiers of grunts, with each of these functions happening in a stage that must be completed before the next may begin. Waterfall is notoriously dysfunctional, and no Agile opponent would argue to the contrary. Under Waterfall, engineers are relegated to implementing designs and building systems after the important decisions have all been made and cannot be unmade, and no one talented is motivated to take on that kind of project.

Waterfall replicates the social model of a dysfunctional organization with a defined hierarchy. The most interesting work is done first and declared complete, and the grungy details are passed on to the level below. It’s called “Waterfall” because communication goes only one way. If the design is bad, it must be implemented anyway. (The original designers have probably moved to another project.) Agile, then, replicates the social model of a dysfunctional organization without a well-defined hierarchy. It has engineers still quite clearly below everyone else: the “product owners” and “scrum masters” outrank “team members”, who are the lowest of the low. Its effect is to disentitle the more senior, capable engineers by requiring them to adhere to a reporting process (work only on your assigned tickets, spend 5-10 hours per week in status meetings) designed for juniors. Like a failed communist state that equalizes by spreading poverty, Scrum in its purest form puts all of engineering at the same low level: not a clearly spelled-out one, but clearly below all the business people who are given full authority to decide what gets worked on.

Agile increases the feedback frequency while giving engineers no real power. That’s a losing bargain, because it means that they’re more likely to be jerked around or punished when things take longer than they “seem” like they should take. These decisions are invariably made by business people who call the shots based on emotion rather than deep insight into the technical challenges or the nature of the development.

Silicon Valley has gotten a lot wrong, especially in the past five years, but one of the things that it got right is the concept of the engineer-driven company. It’s not always the best for engineers to drive the entire company, but when engineers run engineering and set priorities, everyone wins: engineers are happier with the work they’re assigned (or, better yet, self-assigning) and the business is getting a much higher quality of engineering.

2. Terminal juniority

“Agile” is a culture of terminal juniority, lending support to the (extremely misguided) conception of programming as a “young man’s game”, even though most of the best engineers are not young and quite a few are not men. Agile has no exit strategy. There’s no “We won’t have to do this once we achieve X” clause. It’s designed to be there forever: the “user stories” and business-driven engineering and endless status meetings will never go away. Architecture and R&D and product development aren’t part of the programmer’s job, because those things don’t fit into atomized “user stories” or two-week sprints. So the sorts of projects that programmers want to take on, once they master the basics of the field, are often ignored, because it’s either impossible to atomize them or far more difficult to do so than just to do the work.

There’s no role for an actual senior engineer on a Scrum team, and that’s a problem, because many companies that adopt Scrum impose it on the whole organization. Aside from a move into management, there is the option of becoming a “Scrum master” responsible for imposing this stuff on the young’uns: a bullshit pseudo-management role without power. The only way to get off a Scrum team and away from living under toxic micromanagement is to burrow further into the beast and impose the toxic micromanagement on other people. What “Agile” and Scrum say to me is that older, senior programmers are viewed as so inessential that they can be ignored, as if programming is a childish thing to be put away before age 35. I don’t agree with that mentality. In fact, I think it’s harmful; I’m in my early 30s and I feel like I’m just starting to be good at programming. Chasing out our elders, just because they’re seasoned enough to know that this “Agile”/Scrum garbage has nothing to do with computer science and that it has no value, is a horrible idea.

3. It’s stupidly, dangerously short-term. 

Agile is designed for and by consulting firms that are marginal. That is, it’s for firms that don’t have the credibility that would enable them to negotiate with clients as equals, and that are facing tight deadlines while each client project is an existential risk. It’s for “scrappy” underdogs. Now, here’s the problem: Scrum is often deployed in large companies and funded startups, but people join those (leaving financial upside on the table, for the employer to collect) because they don’t want to be underdogs. No one wants to play from behind unless there’s considerable personal upside in doing so. “Agile” in a corporate job means pain and risk without reward.

When each client project represents existential or severe reputational risk, Agile might be the way to go, because a focus on short-term iterations is useful when the company is under threat and there might not be a long term. Aggressive project management makes sense in an emergency. It doesn’t make sense as a permanent arrangement; at least, not for high-talent programmers who have less stressful and more enjoyable options.

Under Agile, technical debt piles up and is not addressed because the business people calling the shots will not see a problem until it’s far too late or, at least, too expensive to fix it. Moreover, individual engineers are rewarded or punished solely based on the completion, or not, of the current two-week “sprint”, meaning that no one looks out five “sprints” ahead. Agile is just one mindless, near-sighted “sprint” after another: no progress, no improvement, just ticket after ticket.

4. It has no regard for career coherency.

Atomized user stories aren’t good for engineers’ careers. By age 30, you’re expected to be able to show that you can work at the whole-project level, and that you’re at least ready to go beyond such a level into infrastructure, architecture, research, or leadership. While Agile/Scrum experience makes it somewhat easier to get junior positions, it eradicates even the possibility of work that’s acceptable for a mid-career or senior engineer.

In an emergency, whether it’s a consultancy striving to appease an important client or a corporate “war room”, career coherency can wait. Few people will refuse to do a couple weeks of unpleasant or career-incoherent work if it’s genuinely important to the company where they work. If nothing else, the importance of that work confers a career benefit. When there’s not an emergency, however, programmers expect their career growth to be taken seriously, and will leave if it isn’t. Using “fish frying” as a term of art for grunt work that no one enjoys and that has no intrinsic career value to anyone, there’s enough career value (internal and external to the organization) in emergency or high-profile fish frying that people don’t mind doing it. You can say, “I was in the War Room and had 20 minutes per day with the CEO”, and that excuses the fish frying. It means you were valued and important. Saying “I was on a Scrum team” says, “Kick me”. Frying fish because you were assigned “user stories” shows that you were seen as a loser.

5. Its purpose is to identify low performers, but it has an unacceptably false positive rate. 

Scrum is sold as a process for “removing impediments”, which is a nice way of saying “spotting slackers”. The problem with it is that it creates more underperformers than it roots out. It’s a surveillance state that requires individual engineers to provide fine-grained visibility into their work and rate of productivity. This is defended using the “nothing to hide” argument, but the fact is that, even for pillar-of-the-community high performers, a surveillance state is an anxiety state. The fact of being observed changes the way people work– and, in creative fields, for the worse.

The first topic that comes to mind here is status sensitivity. Programmers love to make-believe that they’ve transcended a few million years of primate evolution related to social status, but the fact is: social status matters, and acknowledging that fact doesn’t make you “political”. Older people, women, racial minorities, and people with disabilities tend to be status-sensitive because it’s a matter of survival for them. Constant surveillance of one’s work indicates a lack of trust and low social status, and the most status-sensitive people (even if they’re the best workers) are the first to decline.

Scrum and “Agile” are designed, on the other hand, for the most status-insensitive people: young, privileged males who haven’t been tested, challenged, or burned yet at work. It’s for people who think that HR and management are a waste of time and that people should just “suck it up” when demeaned or insulted.

Often, it’s the best employees who fall the hardest when Agile/Scrum is introduced, because R&D is effectively eliminated, and the obsession with short-term “iterations” or sprints means that there’s no room to try something that might actually fail.

The truth about underperformers is that you don’t need “Agile” to find out who they are. People know who they are. The reason some teams get loaded down with disengaged, incompetent, or toxic people is that no one does anything about them. That’s a people-level management problem, not a workflow-level process problem.

6. The Whisky Goggles Effect

There seems to be some evidence that Agile and Scrum can nudge the marginally incompetent into being marginally employable. I call this the Whisky Goggles Effect: it turns the 3s and 4s into 5s, but it makes you so sloppy that the 7s and 9s want nothing to do with you. Unable to get their creative juices flowing under aggressive micromanagement, the best programmers leave.

From the point of view of a manager unaware of how software works, this might seem like an acceptable trade: a few “prima donna” 7-and-up engineers leave under the Brave New Scrum, while the 3s and 4s become just-acceptable 5s. The problem is that the difference between a “7” programmer and a “5” programmer is substantially larger than the difference between a “5” and a “3”. If you lose your best people and your leaders (who may not be in leadership roles on the org chart), then the slight upgrade of the incompetents for whom Scrum is designed does no good.

Scrum and Agile play into what I call the Status Profit Bias. Essentially, many people in business judge their success or failure not in objective terms, but based on the status differential achieved. Let’s say that the market value of a “3” level programmer is $50,000 per year and, for a “5” programmer, it’s $80,000. (In reality, programmer salaries are all over the map: I know 3’s making over $200,000 and I know 7’s under $70,000, but let’s ignore that.) Convincing a “5” programmer to take a “3”-level salary (in exchange for startup equity!) is marked, psychologically, not as a mere $30,000 in profit but as a 2-point profit.

Agile/Scrum and the age discrimination culture in general are about getting the most impressive status profits, rather than actual economic profits. The people who are least informed about what social status they “should” have are the young. You’ll find a 22-year-old 6 who thinks that he’s a 3 and who will submit to Scrum, but the 50-year-old 9 is likely to know that she’s a 9 and might begrudgingly take 8.5-level conditions but is not about to drop to a 6. Seeking status profits is, however, extremely short-sighted. There may be a whole industry in bringing in 5-level engineers and treating (and paying) them like 4’s, but under current market conditions, it’s far more profitable to hire an 8 and treat him like an 8.

7. It’s dishonestly sold.

To cover this point, I have to acknowledge that the uber-Agile process known as “Scrum” works under a specific set of circumstances; the dishonest salesmanship is in the indication that this stuff works everywhere, and as a permanent arrangement.

What Scrum is good for

Before the Agile fad, “Scrum” was a term sometimes used for what corporations might also call a “Code Red” or a “War Room emergency”. This is when a cross-cutting team must be built quickly to deal with an unexpected and, often, rapidly-evolving problem. It has no clear manager, but a leader (like a “Scrum Master”) must be elected and given authority and it’s usually best for that person not to be an official “people manager” (since he needs to be as impartial as possible). Since the crisis is short-term, individuals’ career goals can be put on hold. It’s considered a “sprint” because people are expected to work as fast as they can to solve the problem, and because they’ll be allowed to go back to their regular work once it’s over.

There are two scenarios that should come to mind. The first is a corporate “war room”, and if specific individuals (excluding high executives) are spending more than about 6 weeks per year in war-room mode, then something is wrong with the company, because emergencies ought to be rare. The second is that of a consultancy struggling to establish itself, or one that is bad at managing clients and establishing an independent reputation, and must therefore operate in a state of permanent emergency.

Two issues

Scrum and Agile represent acknowledgement of the idea that emergency powers must sometimes be given to “take-charge” leaders who’ll do whatever they consider necessary to get a job done, leaving debate to happen later. A time-honored problem with emergency powers is that they often don’t go away. In many circumstances, those empowered by an emergency see fit to prolong it. This is, most certainly, a problem with management. Dysfunction and emergency require more managerial effort than a well-run company in peace time.

It is also more impressive for an aspiring demagogue (a “scrum master”?) to be a visible “dragonslayer” than to avoid attracting dragons to the village in the first place. The problem with Scrum’s aggressive insistence on business-driven engineering is that it makes a virtue (“customer collaboration”) out of attracting, and slaying, dragons (known as “requirements”) when it might be more prudent not to lure them out of their caves in the first place.

“Agile” and Scrum glorify emergency. That’s the first problem with them. They’re a reinvention of what the video game industry calls “crunch time”. It’s not sustainable. An aggressive focus on individual performance, a lack of concern for people’s career objectives in work allocation, and a mandate that people work only on the stated top priority, all have value in a short-term emergency but become toxic in the long run. People will tolerate those changes if there’s a clear point ahead when they’ll get their autonomy back.

The second issue is that these practices are sold dishonestly. There’s a whole cottage industry that has grown up around teaching companies to be “Agile” in their software development. The problem is that most of the core ideas aren’t new. The terminology is fresh, but the concepts are mostly outdated, failed “scientific management” (which was far from scientific, and didn’t work).

If the downsides and upsides of “Agile” and Scrum were addressed, then I wouldn’t have such a strong problem with the concept. If a company has a team of only junior developers and needs to deliver features fast, it should consider using these techniques for a short phase, with the plan to remove them as its people grow and timeframes become more forgiving. However, if Scrum were sold for what it is– a set of emergency measures that can’t be used to permanently improve productivity– then there would be far fewer buyers for it, and the “Agile” consulting industry wouldn’t exist. Only through a dishonest representation of these techniques (glorified “crunch time”, packaged as permanent fixes) are they made salable.

Looking forward

It’s time for most of “Agile” and especially Scrum to die. These aren’t just bad ideas. They’re more dangerous than that, because there’s a generation of software engineers who are absorbing them without knowing any better. There are far too many young programmers being doomed to mediocrity by the idea that business-driven engineering and “user stories” are how things have always been done. This ought to be prevented; the future integrity of our industry may rely on it. “Agile” is a bucket of nonsense that has nothing to do with programming and certainly nothing to do with computer science, and it ought to be tossed back into the muck from which it came.

The tyranny of the friendless

I’ve written a lot about the myriad causes of organizational decay. I wrote a long series on the topic, here. In most of my work, I’ve written about decay as an inevitable, entropic outcome driven by a number of forces, many unnamed and abstract, and therefore treated as inexorable ravages of time.

However, I’ve recently come to the realization that organizational decay is typically dominated by a single factor that is easy to understand, being so core to human sociology. While it’s associated with large companies, it can set in when they’re small. It’s a consequence of in-group exclusivity. Almost all organizations function as oligarchies, some with formal in-crowds (government officials or titled managers) and some without. If this in-crowd develops a conscious desire to exclude others, it will select and promote people who are likely to retain and even guard its boundaries. Only a certain type of person is likely to do this: friendless people. Those who dislike, and are disliked by, the out-crowd are unlikely to let anyone else in. They’re non-sticky: they come with a promise of “You get just me”, and that makes them very attractive candidates for admission into the elite.

Non-stickiness is seen as desirable from above– no one wants to invite the guy who’ll invite his whole entourage– but, in the business world, it’s negatively correlated with pretty much any human attribute that could be considered a virtue. People who are good at their jobs are more likely to be well-liked and engaged and form convivial bonds. People who are socially adept tend to have friends at high levels and low. People who care a lot about social justice are likely to champion the poor and unpopular. A virtuous person is more likely to be connected laterally and from “below”. That shouldn’t count against a person, but for an exclusive club that wants to stay exclusive, it does. What if he brings his friends in, and changes the nature of the group? What if his conscience compels him to spill in-group secrets? For this reason, the non-sticky and unattached are better candidates for admission.

The value that executive suites place on non-stickiness is one of many possible explanations for managerial mediocrity as it creeps into an organization. Before addressing why I think my theory is right, I need to analyze three of the others, all styled as “The $NAME Principle”.

The “Peter Principle” is the claim that people are promoted up to their first level of incompetence, and stay there. It’s an attractive notion, insofar as most people have seen it in action. There are terminal middle managers who don’t seem like they’ll ever gain another step, but who play politics just well enough not to get fired. (It sucks to be beneath one. He’ll sacrifice you to protect his position.) That said, I find the Peter Principle, in general, to be mostly false because of its implicit belief in corporate meritocracy. What is most incorrect about it is the belief that upper-level jobs are harder or more demanding than those in the middle. In fact, there’s an effort thermocline in almost every organization. Above the effort thermocline, which is usually the de facto delineation between mere management roles and executive positions, jobs get easier and less accountable with increasing rank. If the one-tick-late-but-like-clockwork Peter Principle were the sole limiting factor on advancement, you’d expect that those who pass the thermocline would all become CEOs, and that’s clearly not the case. While merit and hard work are required less and less with increasing rank, political resistance intensifies, simply because there are so few top jobs that there’s bound to be competition. Additionally, even below the effort thermocline, there are people employed below their maximum level of competence because of political resistance. The Peter Principle is too invested in the idea of corporate meritocracy to be accurate.

Scott Adams has proposed an alternative theory of low-merit promotion: the Dilbert Principle. According to it, managers are often incompetent line workers who were promoted “out of harm’s way”. I won’t deny that it exists in some organizations, although it usually isn’t applied within critical divisions of the company. When incompetents are knowingly promoted, it’s usually a dead-end pseudo-promotion that comes with a small pay raise and a title bump, but lateral movement into unimportant work. That said, its purpose isn’t just to limit damage, but to make the person more likely to leave. If someone’s not bad enough to fire but not especially good, gilding his CV with a fancy title might invite him to (euphemism?) succeed elsewhere… or, perhaps, not-succeed elsewhere but be someone else’s problem. All of that said, this kind of move is pretty rare. Incompetent people who are politically successful are not known to be incompetent, because politics-of-performance outweighs actual performance ten-to-one in terms of making reputations, and those who have a reputation for incompetence are those who failed politically, and they don’t get exit promotions. They just get fired.

The general idea that people are made managers to limit their damage potential is false because the decision to issue such promotions is one that would, by necessity, be made by other managers. As a tribe, managers have far too much pride to ever think the thought, “he’s incompetent, we must make him one of us”. Dilbert-style promotions occasionally occur and incompetents definitely get promoted, but the intentional promotion of incompetents into important roles is extremely rare.

Finally, there’s the Gervais Principle, developed by Venkatesh Rao, which asserts that organizations respond both to performance and talent, but sometimes in surprising ways. Low-talent high-performers (“eager beavers” or “Clueless”) get middle-management roles where they carry the banner for their superiors, and high-talent low-performers (“Sociopaths”) either get groomed for upper management or get fired. High-talent high-performers aren’t really addressed by the theory, and there’s a sound reason why. In this case, the talent that matters most is strategy: not necessarily working hard, but knowing what is worth working on. High-talent people will therefore work very hard when given tasks appropriate to their career goals and desired trajectory in the company, but their default mode will be to slack on the unimportant make-work. So a high-talent person who is not being tapped for leadership will almost certainly be a low performer: at least, on the assigned make-work that is given to those not on a career fast track.

The Gervais/MacLeod model gives the most complete assessment of organizational functioning, but it’s not without its faults. Intended as satire, the MacLeod cartoon gave unflattering names to each tier (“Losers” at the bottom, “Clueless” in middle management, and “Sociopaths” at the top). It also reads as a static assertion, while the dynamic behaviors are far more interesting. How do “Sociopaths” get to the top, since they obviously don’t start there? When the “Clueless” become clued-in, where do they go? What does each of these people really want? For how long do “Losers” tolerate losing? (Are they even losing?) Oh, and– most importantly for those of us who are studying to become more like the MacLeod Sociopaths (who aren’t actually sociopathic per se, but risk-tolerant, motivated, and insubordinate)– what determines which ones are groomed for leadership and which ones are fired?

If there’s an issue with the Gervais Principle, it’s that it asserts too much intelligence and intent within an organization. No executive ever says, “that kid looks like a Sociopath; let’s train him to be one of us.” The Gervais model describes the stable state of an organization in advanced decline, but doesn’t (in my opinion) give full insight into why things happen in the way that they do.

So I’m going to offer a fourth model of creeping managerial mediocrity. Unlike the Peter Principle, it doesn’t believe in corporate meritocracy. Unlike the Dilbert Principle, it doesn’t assert that managers are stupid and unimportant (because we know both to be untrue) or consider their jobs to be such. Unlike the Gervais Principle, it doesn’t believe that organizations knowingly select for cluelessness or sociopathy (although that is sometimes the case).

  • The Lumbergh Principle: an exclusive sub-organization, such as an executive suite, that wishes to remain exclusive will select for non-stickiness, which is negatively correlated with most desirable personal traits. Over time, this will degrade the quality of people in the leadership ranks, and destroy the organization.

If it’s not clear, I named this one after Bill Lumbergh from Office Space. He’s uninspiring, devoid of charisma, and seems to hold the role of middle manager for an obvious reason: there is no chance that he would ever favor his subordinates’ interests over those of upper management. He’s friendless, and non-sticky by default. He wouldn’t, say, tell an underling that he’s undervalued and should ask for a 20% raise, or give advance notice of a project cancellation or layoff so a favored subordinate can get away in time. He’ll keep his boss’s secrets because he just doesn’t give a shit about the people who are harmed by his doing so. No one likes him and he likes no one, and that’s why upper management trusts him.

Being non-sticky and being incompetent aren’t always the same thing, but they tend to correlate often enough to represent a common case (if not the most common case) of an incompetent’s promotion. Many people who are non-sticky are that way because they’re disliked and alienating to other people, and while there are many reasons why that might be, incompetence is a common one. Good software engineers are respected by their peers and tend to make friends at the bottom. Bad software engineers who play politics and manage up will be unencumbered by friends at the bottom.

To be fair, the desire to keep the management ranks exclusive is not the only reason why non-stickiness is valued. A more socially acceptable reason for it is that non-sticky people are more likely to be “fair” in an on-paper sense of the word. They don’t give a damn about their subordinates, their colleagues, and possible future subordinates, but they don’t-give-a-damn equally. Because of this, they support the organization’s sense of itself as a meritocracy. Non-sticky people are also, in addition to being “fair” in a toxic way that ultimately serves the needs of upper management only, quite consistent. As corporations would rather be consistent than correct– firing the wrong people (i.e. firing competent people) is unfortunate, but firing inconsistently opens the firm to a lawsuit– they are attractive for this reason as well. You can always trust the non-sticky person, even if he’s disliked by his colleagues for a good reason, to favor the executives’ interests above the workers’. The fact that most non-sticky people absolutely suck as human beings is held to be irrelevant.


As I get older and more experienced, I’m increasingly aware that there’s a lot of diversity in how organizations run themselves. We’re not condemned to play out the roles of “Loser”, “Clueless”, or “Sociopath”. So it’s worth acknowledging that there are a lot of cases in which the Lumbergh Principle doesn’t apply. Some organizations try to pick competent leaders, and it’s not inevitable that an organization develops such contempt for its own workers as to define the middle-management job in such a stark way. Also, the negativity that is often directed at middle management fails to account for the fact that upper management almost always has to pass through that tier in some way or another. Middle management gets its stigma because of the terminal middle managers with no leadership skills: the ones promoted into those roles because, defective as they were, their superiors could trust them. However, there are many other reasons why people pass through middle-management roles, or take them on because they believe that the organization needs them to do so.

The Lumbergh Principle only takes hold in a certain kind of organization. That’s the good news. The bad news is that most organizations are that type. It has to do with internal scarcity. At some point, an organization decides that there’s a finite amount of “goodness”, whether we’re talking about autonomy or trust or credibility, and leaves people to compete for these artificially limited benefits. Employee stack ranking is a perfect example of this: for one person to be a high performer, another must be marked low. When a scarcity mentality sets in, R&D is slashed and business units are expected to compete for internal clients in order to justify themselves, which means that these “intrapreneurial” efforts face the worst of both worlds between being a startup and being in a large organization. It invariably gets ugly, and a zero-sum mentality takes hold. At this point, the goal of the executive suite becomes maintaining position rather than growing the company, and invitation into the inner circle (and the concentric circles that comprise the various tiers of management) is given only to replace outgoing personnel, with a high degree of preference for those who can be trusted not to let conscience get in the way of the executives’ interests.

One might expect that startups would be a way out. Is that so? The answer is: sometimes. It is generally better, under this perspective, for an organization to be growing than stagnant. It’s probably better, in many cases, to be small. At five people, it’s far more typical to see the “live or die as a group” mentality than the internal adversity that characterizes large organizations. All of that said, there are quite a number of startups that already operate under a scarcity mentality, even from inception. The VCs want it that way, so they demand extreme growth and risk-seeking on (relative to the ambitions they require) a shoestring budget and call it “scrappy” or “lean”. The executives, in turn, develop a culture of stinginess wherein small expenses are debated endlessly. Then the middle managers bring in that “Agile”/Scrum bukkake in which programmers have to justify weeks and days of their own fucking working time in the context of sprints and iterations and glass toys. One doesn’t need a big, established company to develop the toxic scarcity mentality that leads to the Lumbergh Effect. It can start at a company’s inception– something I’ve seen on multiple occasions. In that case, the Lumbergh Effect exists because the founders and executives have a general distrust for their own people. That said, the tendency of organizations (whether democratic or monarchic on paper) toward oligarchy means that they need to trust someone. Monarchs need lieutenants, and lords need vassals. The people who present themselves as natural candidates for promotion are the non-sticky ones who’ll toss aside any allegiances to the bottom. However, those people are usually non-sticky because they’re disliked, and they’re usually disliked because they’re unlikeable and incompetent. It’s through that dynamic– not intent– that most companies end up being middle-managed (and, after a few years, upper-managed) by incompetents.

Advanced Lumberghism

What makes the Lumbergh Principle so deadly is the finality of it. The Peter Principle, were it true, would admit an easy solution: just fire the people who’ve plateaued. (Some companies do exactly that, but it creates a toxic culture of stack-ranking and de facto age discrimination.) The Dilbert Principle has a similar solution: if you are going to “promote” someone into a dead end, as a step in managing that person out, make sure to follow through. As for the Gervais Principle, it describes an organization that is already in an advanced state of dysfunction (though it is so useful precisely because most organizations are in such states) and, while it captures the steady-state dynamics (i.e. the microstate transitions and behaviors in a certain high-entropy, degenerate macrostate) it does not necessarily tell us why decay is the norm for human organizations. I think that the Lumbergh Effect, however, does give us a cohesive sense of it. It doesn’t go far enough to say that “the elite” is the problem, because while elites are generally disliked, they’re not always harmful. The Lumbergh Effect sets in when the elite’s desire to protect its boundaries results in the elevation of a middling class of non-virtuous people, and as such people become the next elite (through attrition in the existing one) the organization falls to pieces. We now know, at least in a large proportion of cases, the impulses and mechanics that bring an organization to ruin.

Within organizations, there’s always an elite. Governments have high officials and corporations have executives. We’d like for that elite to be selected based on merit, but even people of merit dislike personal risk and will try to protect their positions. Over time, elites form their own substructures, and one of those is an outer “shell”. The lowest-ranking people inside that elite, and the highest-ranking people outside of it who are auditioning to get in, take on guard duty and form the barrier. Politically speaking, the people who live at that shell (not the cozy people inside or the disengaged outsiders who know they have no chance of entering) will be the hardest-working (again, an effort thermocline) at defining and guarding the group’s boundaries. Elites, therefore, don’t recruit for their “visionary” inner ranks or their middling directorate, because you have to serve at the shell before you have a chance of getting further in. Rather, they recruit guards: non-sticky people who’ll keep the group’s barriers (and its hold over the resources, information, and social access that it controls) impregnable to outsiders. The best guards, of course, are those who are loyal upward because they have no affection in the lateral or downward directions. And, as discussed, such people tend to be that way because no one likes them where they are. That this leads organizations to the systematic promotion of the worst kinds of people should surprise no one.

Can tech fix its broken culture?

I’m going to spoil the ending. The answer is: yes, I think so. Before I tackle that matter, though, I want to address the blog post that led me to write on this topic. It’s Tim Chevalier’s “Farewell to All That” essay about his departure from technology. He seems to have lost faith in the industry, and is taking a break from it. It’s worth reading in its entirety. Please do so, before continuing with my (more optimistic) analysis.

I’m going to analyze specific passages from Chevalier’s essay. It’s useful to describe exactly what sort of “broken culture” we’re dealing with, in order to replace a vague “I don’t like this” with a list of concrete grievances, identifiable sources and, possibly, implementable solutions.

First, he writes:

I have no love left for my job or career, although I do have it for many of my friends and colleagues in software. And that’s because I don’t see how my work helps people I care about or even people on whom I don’t wish any specific harm. Moreover, what I have to put up with in order to do my work is in danger of keeping me in a state of emotional and moral stagnation forever.

This is a common malaise in technology. By the time we’re 30, we’ve spent the better part of three decades building up potential and have refined what is supposed to be the most important skill of the 21st century. We’d like to work on clean energy or the cure for cancer or, at least, creating products that change and save lives (like smart phones, which surely have). Instead, most of us work on… helping businessmen unemploy people. Or targeting ads. Or building crappy, thoughtless games for bored office workers. That’s really what most of us do. It’s not inspiring.

Technologists are fundamentally progressive people. We build things because we want the world to be better tomorrow than it is today. We write software to solve problems forever. Yet most of what our employers actually make us do is not congruent with the progressive inclination that got us interested in technology in the first place. Non-technologists cannot adequately manage technologists because technologists value progress, while non-technologists tend to value subordination and stability.

Open source is the common emotional escape hatch for unfulfilled programmers, but a double-edged sword. In theory, open-source software advances the state of the world. In practice, it’s less clear cut. Are we making programmers (and, therefore, the world) more productive, or are we driving down the price of software and consigning a generation to work on shitty, custom, glue-code projects? This is something that I worry about, and I don’t have the answer. I would almost certainly say that open-source software is very much good for the world, were it not for the fact that programmers do need to make money, and giving our best stuff away for free just might be hurting the price for our labor. I’m not sure. As far as I can tell, it’s impossible to measure that counterfactual scenario.

If there’s a general observation that I’d make about software programmers, and technologists in general, it’s that we’re irresponsibly adding value. We create so much value that it’s ridiculous, and so much that, by rights, we ought to be calling the shots. Yet we find value-capture to be undignified and let the investors and businessmen handle that bit of the work. So they end up with the authority and walk away with the lion’s share; we’re happy if we make a semi-good living. The problem is that value (or money) becomes power, and the bulk of the value we generate accrues not to people who share our progressive values, but to next-quarter thinkers who end up making the world more ugly. We ought to fix this. By preferring ignorance over how the value we generate is distributed and employed, we’re complicit in widespread unemployment, mounting economic and political inequality, and the general moral problem of the wrong people winning.

I don’t spend much time solving abstract puzzles, at least not in comparison to the amount of time I spend doing unpaid emotional labor.

Personally, I care more about solving real-world problems and making people’s lives better than I do about “abstract puzzles”. It’s fun to learn about category theory, but what makes Haskell exciting is that its core ideas actually work at making quickly developed code robust beyond what is possible (within the same timeframe; JPL-style C is a different beast) in other languages. I don’t find much use in abstract puzzles for their own sake. That said, the complaint about “unpaid emotional labor” resonates with me, though I might use the term “uncompensated emotional load”. If you work in an open-plan office, you’re easily losing 10-15 hours of your supposedly free time just recovering from the pointless stress inflicted by a bad work environment. I wouldn’t call it emotional “labor”, though. Labor implies conscious awareness. Recovering from emotional load is draining, but it’s not a conscious activity.

But the tech industry is wired with structural incentives to stay broken. Broken people work 80-hour weeks because we think we’ll get approval and validation for our technical abilities that way. Broken people burn out trying to prove ourselves as hackers because we don’t believe anyone will ever love us for who we are rather than our merit.

He has some strong points here: the venture-funded tech industry is designed to give a halfway-house environment for emotionally stunted (I wouldn’t use the word “broken”, because immaturity is very much fixable) fresh college grads. That said, he’s losing me on any expectation of “love” at the workplace. I don’t want to be “loved” by my colleagues. I want to be respected. And respect has to be earned (ideally, based on merit). If he wants unconditional love, he’s not going to find that in any job under the sun; he should get a dog, or a cat. That particular one isn’t the tech industry’s fault.

Broken people believe pretty lies like “meritocracy” and “show me the code” because it’s easier than confronting difficult truths; it’s as easy as it is because the tech industry is structured around denial.

Meritocracy is a useless word and I think that it’s time for it to die, because even the most corrupt work cultures are going to present themselves as meritocracies. The claim of meritocracy is disgustingly self-serving for the people at the top.

“Show me the code” (or data) can be irksome, because there are insights for which coming up with data is next to impossible, but that any experienced person would share. That said, data- (or code-)driven decision making is better than running on hunches, or based on whoever has the most political clout. What I can’t stand is when I have to provide proof but someone else doesn’t. Or when someone decides that every opinion other than his is “being political” while his is self-evident truth. Or when someone in authority demands more data or code before making a ruling, then goes on to punish you for getting less done on your assigned work (because he really doesn’t want you to prove him wrong). Now those are some shitty behaviors.

I generally agree that not all disputes can be resolved with code or data, because some cases require a human touch and experience; that said, there are many decisions that should be handled in exactly that way: quantitatively. What irks me is not a principled insistence on data-driven decisions, but when people with power acquire the right to make everyone else provide data (which may be impossible to come by) while remaining unaccountable to do the same themselves. And many of the macho jerks who overplay the “show me the code” card (because they’ve acquired undeserved political power), when code or data are too costly to acquire, are doing just that.

A culture that considers “too sensitive” an insult is a culture that eats its young. Similarly, it’s popular in tech to decry “drama” when no one is ever sure what the consensus is on this word’s meaning, but as far as I can tell it means other people expressing feelings that you would prefer they stay silent about.

I dislike this behavior pattern. I wouldn’t use the word “drama” so much as political. Politically powerful bad actors are remarkably good at creating a consensus that their political behaviors are apolitical and “meritocratic”, whereas people who disagree with or oppose them are “playing politics” and “stirring up drama”. False objectivity is more dangerous than admitted subjectivity. The first suits liars, the second suits people who have the courage to admit that they are fallible and human.

Personally, I tend to disclose my biases. I can be very political. While I don’t value emotional drama for its own sake, I dislike those who discount emotion. Emotions are important. We all have them, and they carry information. It’s up to us to decide what to do with that information, and how far we should listen to emotions, because they’re not always wise in what they tell us to do. There is, however, nothing wrong with having strong emotions. It’s when people are impulsive, arrogant, and narcissistic enough to let their emotions trample on other people that there is a problem.

Consequently, attempting to shut one’s opponent down by accusing him of being “emotional” is a tactic I’d call dirty, and it should be banned. We’re humans. We have emotions. We also have the ability (most of the time) to keep them in check.

“Suck it up and deal” is an assertion of dominance that disregards the emotional labor needed to tolerate oppression. It’s also a reflection of the culture of narcissism in tech that values grandstanding and credit-taking over listening and empathizing.

This is very true. “Suck it up and deal” is also dishonest in the same way that false objectivity and meritocracy are. The person saying it is implicitly suggesting that she suffered similar travails in the past. At the same time, it’s a brush-off that indicates that the other person is of too low status for it to be worthwhile to assess why the person is complaining. It says, “I’ve had worse” followed by “well, I don’t actually know that, because you’re too low on the food chain for me to actually care what you’re going through.” It may still be abrasive to say “I don’t care”, but at least it’s honest.

Oddly enough, most people who have truly suffered fight hard to prevent others from having similar experiences. I’ve dealt with a lot of shit coming up in the tech world, and the last thing I would do is inflict it on someone else, because I know just how discouraging this game can be.

if you had a good early life, you wouldn’t be in tech in the first place.

I don’t buy this one. Some people are passionate about software quality, or about human issues that can be solved by technology. Not everyone who’s in this game is broken.

There certainly are a lot of damaged people working in private-sector tech, and the culture of the VC-funded world attracts broken people. What’s being said here is probably 80 or 90 percent true, but there are a lot of people in technology (especially outside of the VC-funded private sector tech that’s getting all the attention right now) who don’t seem more ill-adjusted than anyone else.

I do think that the Damaso Effect requires mention. On the business side of tech (which we report into) there are a lot of people who really don’t want to be there. Venture capital is a sub-sector of private equity and considered disreputable within that crowd: it’s a sideshow to them. Their mentality is that winners work on billion-dollar private equity deals in New York and losers go to California and boss nerds around. And for a Harvard MBA to end up as a tech executive (not even an investor!) is downright embarrassing. So that Columbia MBA who’s a VP of HR at an 80-person ed-tech startup is not exactly going to be attending reunions. This explains the malaise that programmers often face as they get older: we rise through the ranks and see that, if not immediately, we eventually report up into a set of people who really don’t want to be here. They view being in tech as a mark of failure, like being relegated to a colonial outpost. They were supposed to be MDs at Goldman Sachs, not pitching business plans to clueless VCs and trying to run a one-product company on a shoestring (relative to the level of risk and ambition that it takes to keep investors interested) budget.

That said, there are plenty of programmers who do want to be here. They’re usually older and quite capable and they don’t want to be investors or executives, though they often could get invited to those ranks if they wished. They just love solving hard problems. I’ve met such people; many, in fact. This is a fundamental reason why the technology industry ought to be run by technologists and not businessmen. The management failed into it and would jump back into MBA-Land Proper if the option were extended, and they’re here because they’re the second or third tier that got stuck in tech; but the programmers in tech actually, in many cases, like being here and value what technology can do.

Failure to listen, failure to document, and failure to mentor. Toxic individualism — the attitude that a person is solely responsible for their own success, and if they find your code hard to understand, it’s their fault — is tightly woven through the fabric of tech.

This is spot-on, and it’s a terrible fact. It holds the industry back. We have a strong belief in progress when it comes to improving tools, adding features to a code base, and acquiring more data. Yet the human behaviors that enable progress, we tend to undervalue.

But in tech, the failures are self-reinforcing because failure often has no material consequences (especially in venture-capital-funded startups) and because the status quo is so profitable — for the people already on the inside — that the desire to maintain it exceeds the desire to work better together.

This is an interesting observation, and quite true. The upside goes mostly to the well-connected. Most of the Sand Hill Road game is about taking strategies (e.g. insider trading, market manipulation) that would be illegal on public markets and applying them to microcap private equities over which there are fewer rules. The downside is borne by the programmers, who suffer extreme costs of living and a culture of age discrimination on a promise of riches that will usually never come. As of now, the Valley has been booming for so long that many people have forgotten that crashes and actual career-rupturing failures even exist. In the future… who knows?

As for venture capital, it delivers private prosperity, but its returns to passive investors (e.g. the ones whose money is being invested, as opposed to the VCs collecting management fees) are dreadful. This industry is not succeeding, except according to the needs of the well-connected few. What’s happening is not “so profitable” at all. It’s not actually very successful. It’s just well-marketed, and “sexy”, to people under 30 who haven’t figured out what they want to do with their lives.

I remember being pleasantly amazed at hearing that kind of communication from anybody in a corporate conference room, although it was a bit less nice when the CTO literally replied with, “I don’t care about hurt feelings. This is a startup.”

That one landed. I have seen so many startup executives and founders justify bad behavior with “this is a startup” or “we’re running lean”. It’s disgusting. It’s the False Poverty Effect: people who consider themselves poor based on peer comparison will tend to believe themselves entitled to behave badly or harm others because they feel like it’s necessary in order to catch up, or that their behavior doesn’t matter because they’re powerless compared to where they should be. It usually comes with a bit of self-righteousness, as well: “I’m suffering (by only taking a $250k salary) for my belief in this company.” The false-poverty behavior is common in startup executives, because (as I already discussed) they’d much rather be elsewhere– executives in much larger companies, or in private equity.

I am neither proud of nor sorry for any of these lapses, because ultimately it’s capitalism’s responsibility to make me produce for it, and within the scope of my career, capitalism failed. I don’t pity the ownership of any of my former employers for not having been able to squeeze more value out of me, because that’s on them.

I have nothing to say other than that I loved this. Ultimately, corporate capitalism fails to be properly capitalistic because of its command-economy emphasis on subordination. When people are treated as subordinates, they slack and fade. This hurts the capitalist more than anyone else.

Answering the question

I provided commentary on Tim Chevalier’s post because not only is he taking on the tech industry, but he’s giving proof to his objection by leaving it. Tech has a broken culture, but it’s not enough to issue vague complaints as many do. It’s not just about sexism and classism and Agile and Java Shop politics in isolation. It’s about all of that shit, taken together. It’s about the fact that we have a shitty hand-me-down culture from those who failed out of the business mainstream (“MBA Culture”) and ended up acquiring its worst traits (e.g. sexism, ageism, anti-intellectualism). It’s about the fact that we have this incredible skill in being able to program, and yet 99 percent of our work is reduced to a total fucking joke because the wrong people are in charge. If we care about the future at all, we have to fight this.

Fixing one problem in isolation, I’ll note, will do no good. This is why I can’t stand that “lean in” nonsense that is sold to unimaginative women who want some corporate executive to solve their problems. You cannot defeat the systemic problems that disproportionately harm women, and maintain the status quo at the same time. You can’t take an unfair, abusive system designed to concentrate power and “fix” it so that it is more fair in one specific way, but otherwise operates under the same rules. You can’t have a world where it is career suicide to take a year off of work for any reason except to have a baby. If you maintain that cosmetic obsession with recency, you will hurt women who wish to have children. You have to pick: either accept the sexism and ageism and anti-intellectualism and the crushing mediocrity of what is produced… or overthrow the status quo and change a bunch of things at the same time. I know which one I would vote for.

Technology is special in two ways, and both of these are good news, at least insofar as they bear on what is possible if we get our act together. The first is that it’s flamingly obvious that the wrong people are calling the shots. Look at many of the established tech giants. In spite of having some of the best software engineers in the world, many of these places use stack ranking. Why? They have an attitude that software engineering is “smart people work” and that everything else– product management, people management, HR– is “stupid people work” and this becomes a self-fulfilling prophecy. You get some of the best C++ engineers in the world, but you get stupid shit like stack ranking and “OKRs” and “the 18-month rule” from your management.

It would be a worse situation to have important shots called by idiots and not have sufficient talent within our ranks to replace them. But we do have it. We can push them aside, and take back our industry, if we learn how to work together rather than against each other.

The second thing to observe about technology is that it’s so powerful as to admit a high degree of mismanagement. If we were a low-margin business, Scrum would kill rather than merely retard companies. Put simply, successful applications of technology generate more wealth than anyone knows what to do with. This could be disbursed to employees, but that’s rare: for most people in startups, their equity slices are a sad joke. Some of it will be remitted to investors and to management. A great deal of that surplus, however, is spent on management slack: tolerating mismanagement at levels that would be untenable in an industry with a lower margin. For example, stack-ranking fell out of favor after it caused the calamitous meltdown of Enron, and “Agile”/”Scrum” is a resurrection of Taylorist pseudo-science that was debunked decades ago. Management approaches that don’t work, as their proponents desperately scramble for a place to park them, end up in tech. This leaves our industry, as a whole, running below quarter speed and still profitable. Just fucking imagine how much there would be to go around, if the right people were calling the shots.

In light of the untapped economic potential that would accrue to the world if the tech industry were better run, and had a better culture, it seems obvious that technology can fix the culture. That said, it won’t be easy. We’ve been under colonial rule (by the business mainstream) for a long time. Fixing this game, and eradicating the bad behaviors that we’ve inherited from our colonizing culture (which is more sexist, anti-progressive, anti-intellectual, classist and ageist than any of our natural tendencies) will not happen overnight. We’ve let ourselves be defined, from above, as arrogant and socially inept and narcissistic, and therefore incapable of running our own affairs. That, however, doesn’t reflect what we really are, nor what we can be.

The Sturgeon Filter: the cognitive mismatch between technologists and executives

There’s a rather negative saying, originally applied to science fiction, known as Sturgeon’s Law: “ninety percent of everything is crap”. Quantified so generally, I don’t think that it’s true or valuable. There are plenty of places where reliability can be achieved and things “just work”. If ninety percent of computers malfunctioned, the manufacturer would be out of business, so I don’t intend to apply the statement to everything. Still, there’s enough truth in the saying that people keep using it, even applying it far beyond what Theodore Sturgeon actually intended. How far is it true? And what does it mean for us in our working lives?

Let’s agree to take “ninety percent” to be a colloquial representation of “most, and it’s not close”; typically, between 75 and 99 percent. What about “is crap”? Is it fair to say that most creative works are crap? I wouldn’t even know where to begin on that one. Certainly, I only deign to publish about a quarter of the blog posts that I write, and I think that that’s a typical ratio for a writer, because I know far too well how often an appealing idea fails when taken into the real world. I think that most of the blog posts that I actually release are good, but a fair chunk of my writing is crap that, so long as I’m good at self-criticism, will never see daylight.

I can quote a Sturgeon-like principle with more confidence, in such a way that preserves its essence but is hard to debate: the vast majority (90 percent? more?) of mutations are of negative value and, if implemented, will be harmful. This concept of “mutation” covers new creative work as well as maintenance and refinement. To refine something is to mutate it, while new creation is still a mutation of the world in which it lives. And I think that my observation is true: a few mutations are great, but most are harmful or, at least, add complexity and disorder (entropy). In any novel or essay, changing a word at random will probably degrade its quality. Most “house variants” of popular games are not as playable as the original game, or are not justified by the increased complexity load. To mutate is, usually, to inflict damage. Two things save us and allow progress. One is that the beneficial mutations often pay for the failures, allowing macroscopic (if uneven) progress. The second is that we can often audit mutations and reverse a good number of those that turn out to be bad. Version control, for programmers, enables us to roll back mutations that are proven to be undesirable.
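The claim that most mutations are harmful is easy to make concrete. Here’s a toy sketch of my own (the arithmetic example and all names are invented for illustration, not taken from any real mutation-testing tool): randomly replace one character of a small, correct expression, and count how often the result still evaluates to the same value.

```python
import random

ORIGINAL = "(1 + 2) * (3 + 4) / 5"
TARGET = eval(ORIGINAL)  # arithmetic-only input, so eval is safe in this toy
ALPHABET = "0123456789+-*/() "

def mutate(expr: str, rng: random.Random) -> str:
    # Replace one character, chosen uniformly, with a random character
    # drawn from the same small arithmetic alphabet.
    i = rng.randrange(len(expr))
    return expr[:i] + rng.choice(ALPHABET) + expr[i + 1:]

def still_correct(expr: str) -> bool:
    # A mutant "survives" only if it still parses, evaluates, and yields
    # the original value -- i.e., the mutation did no observable harm.
    try:
        return eval(expr) == TARGET
    except Exception:
        return False

rng = random.Random(42)
mutants = [mutate(ORIGINAL, rng) for _ in range(1000)]
rate = sum(still_correct(m) for m in mutants) / len(mutants)
print(f"{rate:.1%} of one-character mutations leave the result unchanged")
```

Run it and the surviving fraction is small– the overwhelming majority of random edits either break the expression outright or change its value, which is exactly the Sturgeon-like observation about mutations in general.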

The Sturgeon Mismatch

Programmers experience the negative effects of random mutations all the time. We call them “bugs”, and they range from mild embarrassments to catastrophic failures, but very rarely is a discrepancy between what the programmer expects of the program, and what it actually does, desirable. Of course, intended mutations have a better success rate than truly “random” ones would, but even in those, there is a level of ambition at which the likelihood of degradation is high. I know very little about the Linux kernel and if I tried to hack it, my first commits would probably be rejected, and that’s a good thing. It’s only the ability to self-audit that allows the individual programmer, on average, to improve the world while mutating it. It can also help to have unit tests and, if available for the language, a compiler and a strong type system; those are a way to automate at least some of this self-censoring.

I’m a reasonably experienced programmer at this point, and I’m a good one, and I still generate phenomenally stupid bugs. Who doesn’t? Almost all bugs are stupid– tiny, random injections of entropy emerging from human error– which is why the claim (for example) that “static typing only catches ‘stupid’ bugs” is infuriating. What makes me a good programmer is that I know what tools and processes to use in order to catch them, and this allows me to take on ambitious projects with a high degree of confidence in the code I’ll be able to write. I still generate bugs and, occasionally, I’ll even come up with a bad idea. I’m also very good at catching myself and fixing mistakes quickly. I’m going to call this selective self-censoring that prevents 90 percent of one’s output from being crap the Sturgeon Filter.
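A minimal sketch of what a Sturgeon Filter looks like in practice: a handful of automated checks standing between a candidate change and the world. The `median` functions and the test cases below are mine, invented purely for illustration; the “buggy” version carries the kind of tiny, stupid mutation (forgetting to sort) that such a filter exists to catch.

```python
def median(xs):
    # The intended implementation: sort, then pick the middle element(s).
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 == 1 else (s[mid - 1] + s[mid]) / 2

def median_buggy(xs):
    # A one-token "mutation": indexes the unsorted list. A typical stupid bug.
    n = len(xs)
    mid = n // 2
    return xs[mid] if n % 2 == 1 else (xs[mid - 1] + xs[mid]) / 2

def passes_filter(fn) -> bool:
    # A tiny Sturgeon Filter: any candidate implementation must survive
    # these checks before it "ships". Crashes count as failures too.
    cases = [([1], 1), ([3, 1, 2], 2), ([4, 1, 3, 2], 2.5)]
    try:
        return all(fn(xs) == want for xs, want in cases)
    except Exception:
        return False
```

Here `passes_filter(median)` holds while `passes_filter(median_buggy)` does not: the correct version ships, the mutant gets buried. That asymmetry– exporting the good mutations and silently discarding the bad ones– is the whole trick.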

With a strong Sturgeon Filter, you can export the good mutations and bury the bad ones. This is how reliability (either in an artistic masterpiece, or in a correct, efficient program) can be achieved by unreliable creatures such as humans. I’d further argue that to be a competent programmer requires a strong Sturgeon Filter. The good news is that this filter is built up fairly quickly by tools that give objective feedback: compilers and computers that follow instructions literally, and malfunction at the slightest mistake. As programmers, we’re used to having our subordinates (compilers) tell us, “Fix your shit or I’m not doing anything.”
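The export-the-good, bury-the-bad dynamic can be caricatured in a few lines (everything here is invented for illustration, not a real model of anything): generate candidate mutations, most of them harmful, and let a cheap objective audit decide what gets exported.

```python
import random

def audit(value):
    # The "Sturgeon Filter": a cheap objective check, standing in for
    # a compiler, a type system, or a test suite.
    return value > 0

def mutate(rng):
    # Most random mutations are harmful (negative value); a few help.
    # With this toy distribution, about 90 percent come out negative.
    return rng.uniform(-9, 1)

rng = random.Random(42)
candidates = [mutate(rng) for _ in range(1000)]
kept = [c for c in candidates if audit(c)]

# Most of what was generated is crap, yet everything exported is good.
assert all(c > 0 for c in kept)
assert len(kept) < len(candidates)
```

An unreliable generator plus a reliable filter yields reliable output; that is the whole trick.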

It’s no secret that most programmers dislike management, and have a generally negative view of the executives and “product managers” running most of the companies that employ them. This is because programmers pride themselves on having almost impermeable Sturgeon Filters, while lifelong managers have nonexistent Sturgeon Filters. They simply don’t get the direct, immediate feedback that would train them to recognize and reject their own bad ideas. That’s not because they’re stupider than we are. I don’t actually think that they are. I think that their jobs never build up the sense of fallibility that programmers know well.

Our subordinates, when given nonsensical instructions, give blunt, tactless feedback– and half the time they’re just pointing out spelling errors that any human would just ignore! Managers’ subordinates, however, are constantly telling them that they’re awesome, and will often silently clean up their mistakes. Carry this difference in experience out over 20 years or more, and you get different cultures and different attitudes. You get 45-year-old programmers who, while extraordinarily skillful, are often deeply convinced of their own fallibility; and you get 45-year-old executives who’ve never really failed or suffered at work, because even when they were bad at their jobs, they had armies of people ready to manage their images and ensure that, even in the worst case scenario where they lost jobs, they’d “fail up” into a senior position in another company.

Both sides now

Programmers and managers both mutate things; it’s the job. Programmers extend and alter the functionality of machines, while managers change the way people work. In both cases, the effects of a random mutation, or even an unwise intended one, are negative. Mutation for its own sake is undesirable.

For example, scheduling a meeting without a purpose is going to waste time and hurt morale. Hiring bad people and firing good ones will have massive repercussions. To manage at random (i.e. without a Sturgeon Filter) is almost as bad as to program at random. Only a small percentage of the changes that managers propose to the way people work are actually beneficial. Most status pings or meetings serve no purpose except to allay the creeping sense of the manager that he isn’t “doing enough”, most processes that exist for executive benefit or “visibility” are harmful, and a good 90 to 99 percent of the time, the people doing the work have better ideas about how they should do it than the executives shouting orders. Managers, in most companies, interrupt and meddle on a daily basis, and it’s usually to the detriment of the work being produced. Jason Fried covers this in his talk, “Why work doesn’t happen at work”. As he says, “the real problems are … the M&Ms: the managers and the meetings”. Managers are often the last people to recognize the virtue of laziness: that constantly working (i.e. telling people what to do) is a sign of distress, while having little to do generally means that they’re doing their jobs well.

In the past, there was a Sturgeon Filter imposed by time and benign noncompliance. Managers gave bad orders just as often as they do now, but there was a garbage-collection mechanism in place. People followed the important orders, which were usually congruent with common sense and basic safety anyway, but when they were given useless orders or pointless rules to follow, they’d make a show of following the new rules for a month or two, then discard them when they failed to show any benefit. Many managers, I would imagine, preferred this, because it allowed them to have the failed change silently rejected without having any attention drawn to their mistake. In fact, a common mode of sub-strike resistance used by organized labor is “the rule-follow”, a variety of slowdown in which rules are followed to the letter, resulting in low productivity. Discarding senseless rules (while following the important, safety-critical ones) is a necessary behavior of everyone who works in an organization; a person who interprets all orders literally is likely to perform at an unacceptably low level.

In the past, the passage of time lent plausible deniability to a person choosing to toss out silly policies that would quite possibly be forgotten or regretted by the person who issued them. An employee could defensibly say that he followed the rule for three months, realized that it wasn’t helping anything and that no one seemed to care, and eventually just forgot about it or, better yet, interpreted a new order to supersede the old one. This also imposed a check on managers, who’d embarrass themselves by enforcing a stupid rule. Since no one has precise recall of a months-old conversation of low general importance, the mists of time imposed a Sturgeon Filter on errant management. Stupid rules faded and important ones (like, “Don’t take a nap in the baler”) remained.

One negative side effect of technology is that it has removed that Sturgeon Filter from existence. Too much is put in writing, and persists forever, and the plausible deniability of a worker who disobeys (in the interest of getting more done, not of slacking) has been reduced substantially. In the past, an intrepid worker could protest a status meeting by “forgetting” to attend it on occasion, or by claiming he’d heard “a murmur” that it was discontinued, or even (if he really wanted to make a point) by taking colleagues out for lunch at a spot not known for speedy service, thus letting an impersonal force just happen to make half the team late for it. While few workers actually did such things on a regular basis (to make it obvious would get a person fired just as surely then as today), the fact that they might do so imposed a certain back-pressure on runaway management that doesn’t exist anymore. In 2015, there’s no excuse for missing a meeting when “it’s on your fucking Outlook calendar!”

Technology and persistence have evolved, but management hasn’t. Programmers have looked at their job of “messing with” (or, to use the word above, mutating) computers and software systems and spent 70 years coming up with new ways to compensate for the unreliable nature that comes from our being humans. Consequently, we can build systems that are extremely robust in spite of having been fueled by an unreliable input (human effort). We’ve changed the computers, the types of code that we can write, and the tools we use to do it. Management, on the other hand, is still the same game that it always has been. Many scenes from Mad Men could be set in a 2010s tech company, and the scripts would still fit. The only major change would be in the costumes.

To see the effects of runaway management, combined with the persistence allowed by technology, look no further than the Augean pile of shit that has been given the name of “Agile” or “Scrum”. These are neo-Taylorist ideas that most of industry has rejected, repackaged using macho athletic terminology (“Scrum” is a rugby term). Somehow, these discarded, awful ideas find homes in software engineering. This is a recurring theme. Welch-style stack ranking turned out to be a disaster, as finally proven by its thorough destruction of Enron, but it lives on in the technology industry: Microsoft used it until recently, while Google and Amazon still do. Why is this? What has made technology such an elephant graveyard for disproven management theories and bad ideas in general?

A squandered surplus

The answer is, first, a bit of good news: technology is very powerful. It’s so powerful that it generates a massive surplus, and the work is often engaging enough that the people doing it fail to capture most of the value they produce, because they’re more interested in doing the work than getting the largest possible share of the reward. Because so much value is generated, they’re able to have an upper-middle-class income– and upper-working-class social status– in spite of their shockingly low value-capture ratio.

There used to be an honorable, progressive reason why programmers and scientists had “only” upper-middle-class incomes: the surplus was being reinvested into further research. Unfortunately, that’s no longer the case: short-term thinking, a culture of aggressive self-interest, and mindless cost-cutting have been the norm since the Reagan Era. At this point, the surplus accrues to a tiny set of well-connected people, mostly in the Bay Area: venture capitalists and serial tech executives paying themselves massive salaries that come out of other people’s hard work. However, a great deal of this surplus is spent not on executive-level (and investor-level) pay but on another, related sink: executive slack. Simply put, the industry tolerates a great deal of mismanagement simply because it can do so and still be profitable. That’s where “Agile” and Scrum come from. Technology companies don’t succeed because of that heaping pile of horseshit, but in spite of it. It takes about five years for Scrum to kill a tech company, whereas in a low-margin business it would kill the thing almost instantly.

Where this all goes

Programmers and executives are fundamentally different in how they see the world, and the difference in Sturgeon Filters is key to understanding why it is so. People who are never told that they are wrong will begin to believe that they’re never wrong. People who are constantly told that they’re wrong (because they made objective errors in a difficult formal language) and forced to keep working until they get it right, on the other hand, gain an appreciation for their own fallibility. This results in a cultural clash from two sets of people who could not be more different.

To be a programmer in business is painful because of this mismatch: your subordinates live in a world of formal logic and deterministic computation, and your superiors live in the human world, which is one of psychological manipulation, emotion, and social-proof arbitrage. I’ve often noted that programming interviews are tough not because of the technical questions, but because there is often a mix of technical questions and “fit” questions in them, and while neither category is terribly hard on its own, the combination can be deadly. Technical questions are about getting the right answer: the objective truth. By contrast, “fit” questions like “What would you do if given a deadline that you found unreasonable?” demand a plausible and socially attractive lie. (“I would be a team player.”) Getting the right answer is an important skill, and telling a socially convenient lie is also an important skill… but having to context-switch between them at a moment’s notice is, for anyone, going to be difficult.

In the long term, however, this cultural divergence seems almost destined to subordinate software engineers, inappropriately, to business people. A good software engineer is aware of all the ways in which he might be wrong, whereas being an executive is all about being so thoroughly convinced that one is right that others cannot even conceive of disagreement– the “reality distortion field”. The former job requires building an airtight Sturgeon Filter so that crap almost never gets through; the latter mandates tearing down one’s Sturgeon Filter and proclaiming loudly that one’s own crap doesn’t stink.

Abandon “sorta high”/“P1” priority.

It’s fairly common, in technology organizations, for there to be an elaborate hierarchy for prioritizing bugs and features, ranging from “P0” (severely important) to “P4” (not at all important), and for work items to be labelled as such. Of those five levels, the typical meanings are something like this:

  • P0: severely high priority. This gives management and, ideally, the team, the right to be pushy in getting the issue resolved.
  • P1: high priority. This is often an indecisive mid-level between P0 and P2.
  • P2: default (average) priority. This is the classification that most features and bugs are supposed to be given.
  • P3: low priority. Unlikely to be done.
  • P4: very low priority. The black hole.

I would argue for getting rid of three of those levels: P1, P3, and P4. I don’t think that they do much good, and I think that they can actually be confusing and harmful.
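As a sketch of the consolidation I’m arguing for (the names and mapping below are hypothetical, not any real tracker’s API), the five levels collapse into two meaningful states, plus an honest label for what P3 and P4 already mean in practice:

```python
from enum import Enum

class Priority(Enum):
    ESCALATED = "P0"  # management is informed and expected to clear blockers
    DEFAULT = "P2"    # just do the work; no managerial involvement needed

def consolidate(old_level: str) -> str:
    """Map the old five-level scheme onto the states that actually mean something."""
    mapping = {
        "P0": Priority.ESCALATED.name,
        "P1": Priority.DEFAULT.name,  # the indecisive middle: demote it rather than tip off management
        "P2": Priority.DEFAULT.name,
        "P3": "WONT_FIX",             # effectively what these labels already are
        "P4": "WONT_FIX",
    }
    return mapping[old_level]
```

The interesting case is P1: under this scheme it is deliberately folded downward, for the reasons argued below, rather than upward into an escalation.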

Against “P3” and “P4”

P3 and P4, in most companies, are effectively “will not fix” labels. It would be an embarrassment for an engineer to work on a P3 or P4 issue, because the corporate fiction is that every worker always works on the highest-priority task (to the company, of course). Ergo, labeling a bug or feature below the default priority means that anyone who performs it is signaling that he or she has no P2-or-higher work to do. That’s not a good look. Even if that is the case (of having no P2-or-higher work), one does better by finding work that serves one’s own career goals than by completing low-priority work in the backlog. The only way it becomes socially acceptable to complete a P3 or P4 item is if it happens incidentally in the process of achieving something else.

If the issue, despite being labelled as low in priority, becomes an irritation to the developer, it’s often best to silently fix it and then close the bug later saying, “heh, I already did this, 6 months ago.” To do that is to feign not having been aware of the item’s low given priority, and to appear so productive and prescient as to have silently fixed bugs or completed work without even knowing that others noticed them.

What’s wrong with P1?

The case for getting rid of “P1” is that it’s fundamentally indecisive. As far as I’m concerned, there are two meaningful levels for a task: Default and Escalated. “Default” is exactly what it sounds like: the typical, average level for work items. Except in a crisis, the majority of feature requests and bug reports will be at this level. At the Default level, it’s the kind of work that can be done without informing or tipping off management: you just go and do the job.

The Escalated level is one that requires the double-edged sword of managerial power. The workers need management to get involved and tell other teams to do things for them, pronto. It means that management has to suspend the right of the workers (which, during normal conditions, they ought to have) to prioritize work as they see fit. Toes need to get stepped on, in other words, or the work can’t proceed in a timely manner (or, in many cases, at all). It goes without saying that this shouldn’t occur for unimportant or even default-priority work, because the manager expends political capital in interrupting people (often, people who report to someone else) and asking them to work on things other than what they’d prefer. Since people on other teams are going to take hits, and the manager is going to spend political capital, it’s only fair that the employees who decided to escalate the work shall also prioritize the task above other things. This means that if they face deadlines and status pings and a support burden, they did at least ask for it by ringing the alarm bell.

In essence, the P0/Escalated level tips off management that the work item is very important, and pushes the manager to interrupt other teams and make demands of people elsewhere in the company, in order to have the task completed quickly. It’s a fair trade: information (that may be used against the worker or team, in the form of deadlines, micromanagement, follow-on work, meetings and future questioning) is given to management, but the manager is expected to pound on doors, get resources, and use his or her political power to eradicate bureaucratic hurdles (“blockers”). Setting a task at P2/Default level, on the other hand, conveys almost no information to management, but also asks for no political favor from the boss. It is, likewise, a relatively fair deal.

The indecisive mid-level of “P1” combines the worst of both worlds. It tips off management, enough to get them interested and concerned, but doesn’t implore them to take immediate action and do whatever they can to fix the problem. It’s giving information without demanding (perhaps requesting, but that is different) something in return. It’s good to share information with managers, when you believe they’ll use it to your benefit and especially when you need a favor from one, but it’s rarely wise to take the risk of calling in a manager (or, Xenu forbid, an executive) if neither is the case. To do that is just to add noise to the whole process: it causes anxiety for the manager and chaos for the workers. The manager might not interrupt or change one’s way of working, but it’s a risk, and when you don’t need that manager to interrupt or change other people, why take it?

P0 says, “I’m willing to take your direction and commit to fixing the problem, but you need to trust me when I say I need something.” P2 says “things are under control. You can leave me alone and I’ll keep working on useful stuff.” Both are reasonable statements to take to management. P1 says “this is important enough to be watch-worthy but not that important.” With that, you’re just asking to be micromanaged.

What I am not saying, and what I am

In the above, it might sound like I’m taking an adversarial approach to managers, and that I advocate withholding information. That’s not what I intend to imply. “P0” and “P2” are artifacts of formal, written communication in a corporate context. In such a theater, less said is usually better. To put something in writing is to give the other person more power than to say it. To put something in writing in a corporate context is, often, to give power to a large class of people, because that information will be available to many.

If you have a good manager and a strong working relationship, you can verbally convey nuances like “I marked this as P2/Default but I consider it really important” or “here’s what I intend to prioritize and why”. There are good managers as well as bad ones, and it’s typically not the best strategy to withhold all information from them. That said, the more persistent and widely read a piece of communication will be, the more one should tend to hold back. Even if the information is helpful to the people receiving it, distributing it in such a way often shows a lack of care or self-preservation.

The information itself is often less important than the semiotics of furnishing it. To give information without expecting anything in return means something. It does not always lower one’s status or power; in fact, it can raise it. However, to give information formally, and in a context where the person is free not to read it and therefore cannot be expected to act on it, is generally a submissive move. To give information that expects little commitment from the other person but offers one’s own commitment (labeling a task “P1” is implicitly promising to prioritize it) is quite clearly a subordinate action. And while it is reckless and often harmful (i.e. it will often get you fired) to be insubordinate to management (as in, disobeying a direct order), it is likewise detrimental to be eagerly subordinate. The best strategy is to take direction and show loyalty, but to present oneself as a social equal. Toward this aim, “I won’t burden you with information unless it’s mutually beneficial” is the right approach to take with one’s boss. To over-volunteer information is to lower oneself.

… and, while we’re at it, fuck Scrum.

I didn’t intend to go here with this post, but I’ve written before on the violent transparency— open-plan offices that leave the engineer visible from behind, aggressive visibility into day-to-day fluctuations of individual productivity– that is often inflicted upon people in the software industry. It’s actually absurd– much like the image of an Austrian duke wearing diapers and being paddled by a toothless, redneck dominatrix– that people can be paid six-figure salaries but still have to justify weeks and days of their own working time under the guise of “Agile”. It shows that software engineers are politically inept, over-eager to share information with management without any degree of self-auditing, and probably doomed to hold low status unless this behavior changes. So fuck Scrum and fuck “backlog grooming” and fuck business-driven engineering and fuck “retrospective planning meetings” and fuck any “whip us all harder, master” engineer who advocates for it to be imposed on his own colleagues.

… but back to the main point …

Prioritization is power, and so is information. Management has the final right to set priorities, but relies on from-the-bottom information about how to use that power. It’s probably true that all human organizations converge on oligarchy; democracies see a condensation of power as blocs and coalitions form and people acquire disproportionate influence within and over them, but dictators require information and must promote lieutenants who’ll furnish it. There are many ways to look at this, some optimistic and others dismal, but from a practical standpoint, it’s probably good news for a savvy underling in a notionally dictatorial organization like a corporation. It means that CEOs and bosses, while they have power, don’t have as much as it appears on paper. They still have a need from below, and the most critical need isn’t the work furnished, because work itself is usually pretty easy to commoditize. It’s information that they require.

I am certainly not saying that information should be withheld from management, as if it were some general principle. I’m saying that it shouldn’t be furnished unless there is some tangible benefit in doing so. To furnish information without benefit is often harmful to the person to whom it’s given: it can cause anxiety and overload. To furnish information without personal benefit (although that personal benefit may be slight, even just the satisfaction of having done the right thing) will either confuse a person (and a confused manager often becomes a nightmarish micromanager) or show weakness.

Moreover, there’s a difference between personal, “human to human” communication that is often informal, and communication in the context of a power relationship. Many software engineers miss this difference, and it’s why they’re okay with a regime forcing them to put minutiae of their work lives on “Scrum boards” that everyone can see. If you get along with your boss, it’s OK to give up information in a human-to-human, as-equals way. At lunch, you can discuss ideas and relay which work items you actually think are most important. That’s one way that trust is built. If doing so will make you more trusted and valued, and if it doesn’t hurt anyone, then by all means, give information to that manager or executive. However, when you’re relating formally, in any persistent medium, the best thing to do is keep communications to “I need X from you because Y”. This applies especially to bug databases, work trackers, and email. Peppering a manager with extraneous information will not only mark you as a subordinate, but also add anxiety and extend an invitation to micromanage you.

When giving information to a person higher in the corporate ranks, it’s important to do it properly. One must communicate, “Hey man, I’m a player too”. That requires making communication clear, decisive, short and directed to an obvious mutual benefit. (It’s unwise to try to hide one’s own personal benefit; you lose little in disclosing it and, even if your request is denied, you’re more likely to be trusted in the future for doing so.) Communication should also be sparse: clear and loud when needed, nonexistent when unnecessary. One may choose to ring an alarm bell, demand response, and accept that some of the managerially-imposed inconvenience is likely to fall on oneself: that’s the P0/Escalated priority. There is also the option not to ring the alarm bell, but just to perform the work: that’s the P2/Default priority. Taking the middle and giving that bell a half-hearted ring is just silliness.

Sand Hill Road announces Diversity Outreach Program

I’m pleased to announce that I’ve succeeded in coordinating a number of Sand Hill Road’s most prestigious venture capital firms, including First Round Capital, Sequoia, and Kleiner Perkins, to form the first-ever Venture Capital Diversity Outreach Program. I could not have done this alone (obviously) and I want to thank all of my bros (and fembros) in Menlo Park and Palo Alto for making this happen.

In response to negative press surrounding this storied industry and its supposed culture of sexism (which we deny), we held a round-table discussion last weekend on Lake Tahoe, on the topic of rehabilitating our industry’s image. We’re hurt by the perception that we have a sexist, exclusionary, “frat boy” culture, so we decided to form a program to prove that we aren’t sexist. It was easy to agree on the first step: start funding and promoting women.

This idea, though brilliant and revolutionary, raised a difficult question: which women? Based on our back-of-the-envelope calculations, we estimated that there are between 3 and 4 billion women in the world! We had to narrow the pool. One venture capitalist who, unfortunately, declined to be named, said, “we need to fund young women.” Echoing Mark Zuckerberg, he said, “Young people are just smarter.” And so it was agreed that we will be funding 25 women under 23. Each will receive $25,000 worth of seed-round funding in exchange for a mere 15% of the business, along with unlimited one-on-one mentorship opportunities from the Valley’s leading venture capitalists.

We’re extending this opportunity outside of Northern California. In fact, you can apply from anywhere in the world. All pitches must be in video form, each lasting no more than 4 minutes. Which VC you should submit your pitch to depends on where you are applying from. Submissions from Eastern Europe will go to one VC for appraisal, Latin American submissions will go to another, and submissions from Asia to another. We have to match the judges with their specialties, you see. Don’t worry. We’ll have this all sorted out by this evening. Full-body pitch videos only. Face-only submissions will be rejected.

Based on the all-important and objective metric of cultural fit, the submissions will be stack-ranked and the winners will be notified within three days. We recommend that the winners, upon receiving funding, drop out of college to pursue this program. College may be necessary for middle-class people who want to become dentists, and it’s good for propagating Marxist mumbo-jumbo, but it’s so unnecessary if you have all of Sand Hill Road on your side. Which you will, if you’re a woman who wins this contest. Until you’re 28 or so, but that’s forever away and you’ll be a billionaire by then. We find that, in the magical land of California, it’s best not to think about the future or about risk. Future’s so bright, you gotta wear shades!

On a side note, being a venture capitalist is freaking awesome. No, the job doesn’t involve snorting blow off a hooker’s breasts– that actually stopped about 10 years ago, some HR thing. But nothing quite beats the thrill of playing football, in the midst of adiabatic para-skiing, while playing Halo on Google Glass!

Keep the Faith, Don’t Hate, and, above all, Lipra Solof.

Never Invent Here: the even-worse sibling of “Not Invented Here”

“Not Invented Here”, or “NIH syndrome”, refers to an organization’s tendency, taken to an illogical extreme, to undervalue external or third-party technical assets, even when they are free and easily available. The NIH archetype is the enterprise architect who throws person-decade after person-decade into reinventing solutions that exist elsewhere, maintaining a divergent “walled garden” of technology that has no future except by executive force. No doubt, that’s bad. I’m sure that it exists in rich, insular organizations, but I almost never see it in organizations with under a thousand employees. Too often in software, however, I see the opposite extreme: a mentality that I call “Never Invent Here” (NeIH). With that mentality, external assets are overvalued and often implicitly trusted, leaving engineers to spend more time adapting to the quirks of off-the-shelf assets, and less time building assets of their own.

Often, the never-invent-here mentality is couched in other terms, such as business-driven engineering or “Agile” software production. Let’s be honest about this faddish “Agile” nonsense: if engineers are micromanaged to the point of having to justify weeks or even days of their own working time, not a damn thing is ever going to be invented, because no engineer can afford to take the risk; they’re mired in user stories and backlog grooming. The core attitude underlying “Agile” and NeIH is that anything that takes more than some insultingly small amount of time (say, 2 weeks) to build should not be trusted to in-house employees. Rather than building technical assets, programmers spend most of their time in the purgatory of evaluating assets with throwaway benchmarking code and in writing “glue code” to make those third-party assets work together. The rewarding part of the programmer’s job is written off as “too hard”, while programmers are held responsible for the less rewarding part of the job: gluing the pieces together in order to meet parochial business requirements. Under such a regime, there is little room for progress or development of skills, since engineers are often left to deal with the quirks of unproven “bleeding edge” technologies rather than either (a) studying the work of the masters, or (b) building their own larger works and having a chance to learn from their own mistakes.

Never-invented-here engineering can be either positive or negative for an engineer’s career, depending on where she wants to go, but I tend to view its effects as negative for more senior talent. To the good, it assists in buzzword bingo. She can add Spring and Hibernate and Maven and Lucene to her CV, and other employers will recognize those technologies by name, and that might help her get in the door. To the bad, it makes it hard for engineers to progress beyond the feature-level stage, because meatier projects just aren’t done in most organizations when it’s seen as tenable for non-coding architects and managers to pull down off-the-shelf solutions and expect the engineers to “make the thingy work with the other thingy”.

Software engineers don’t mind writing some glue code, because even the best jobs involve grunt work, but no one wants to be stuck doing only that. While professional managers often ignore the fact, engineers can be just as ambitious as they are; the difference is that their ambition is focused on project scope and impact rather than organizational ascent or number of people managed. Entry-level engineers are satisfied to fix bugs and add small features– for a year or two. Around 2 years in, they want to be working on (and suggesting) major features and moving to the project level. At 5 years, they’re ready for bigger projects, initiatives, infrastructure, and to lead multi-engineer projects. And so on. Non-technical managers may ignore this, preferring to institute the permanent juniority of “Agile”, but they do so at their peril.

One place where this is especially heinous is in corporate “data science”. It seems like 90 percent (possibly more) of professional “data scientists” aren’t really being asked to develop or implement new algorithms, but are stuck in a role that has them answering short-term business needs, banging together off-the-shelf software, and getting mired in operations rather than fundamental research. Of course, if that’s all that a company really needs, then it probably doesn’t make sense for it to invest in the more interesting stuff, and in that case… it probably doesn’t need a true data scientist. I don’t intend to say that data cleaning and glue code are “bad”; they’re a necessary part of every job. They just don’t require a machine learning expert, is all.

People ask me why I dislike the Java culture, and I’ve written much about that, but I think that one of Java’s worst features is that it enables the never-invent-here attitude of the exact type of risk-averse businessman who makes the typical corporate programmer’s job so goddamn depressing. In Java, there’s arguably a solution out there that sorta-kinda matches any business problem. Not all the libraries are good, but there are a lot of them. Some of those Java solutions work very well, others do not, and it’s hard to know the difference (except through experience) because the language is so verbose and the code quality so low (in general; again, this is cultural rather than intrinsic to the language) that actually reading it is a non-starter. Even in the case where an engineer wanted to read the code and figure out what was going on, the business would never budget the time. Still, off-the-shelf solutions are trusted implicitly until they fail (either breaking, or proving ill-suited to the needs of the business). Usually, that doesn’t happen for quite a while, because most off-the-shelf, open-source solutions are of decent quality when it comes to common problems, and far better than what would be written under the timelines demanded by most businesses, even in “technology” companies. The problem is that, a year or two down the road, those off-the-shelf products often aren’t enough to meet every need. What happens then?

I wrote an essay last year entitled, “If you stop promoting from within, soon you can’t.” Companies tend to have a default mode of promotion. Some promote from within, and others tend to hire externally for the top jobs, and people tend to figure out which mode is in play within a year or so. In technology, the latter is more common for three reasons. The first is the cultural prominence of venture capital: VCs often inject their buddies, regardless of merit, at high levels in companies they fund, whether or not the founders want them there. The second is the rapid scramble for headcount accumulation that exists in, and around, the VC-funded world. This requires companies to sell themselves very hard to new hires, which means that the best jobs and projects are often used to entice new people into joining rather than handed down to those already on board. The third is the tendency of software to be extremely political: for all of our beliefs about “meritocracy”, the truth is that an individual’s performance is extremely context-dependent, and we, as programmers, tend to spend a lot of time arguing for technologies and practices that’ll put us, individually, high in the rankings. Even among programmers identical in skill and natural ability, a team will usually have one “blazer” and N-1 others who keep up with the blazer’s changes, and no self-respecting programmer is going to let himself stay in the “keep-up-with” category for longer than a month. At any rate, once a company develops the internal reputation of not promoting internally, it starts to lose its best people. Soon, it reaches a point where it has to hire externally for the best jobs, because everyone who would have been qualified is already gone, pushed out by the lack of advancement. While many programmers don’t seek promotion in the sense of ascent in a management hierarchy, they do want to work on bigger and more interesting projects over time.
In a never-invent-here culture that just expects programmers to work on “user stories”, the programmers who are capable of more are often the first ones to leave.

Thus, if most of what a company has been doing has been glue code and engineers are not trusted to run whole projects, then by the time the company’s needs have out-scaled the off-the-shelf product, the talent level will have fallen to the point that it cannot resolve the situation in-house. It will either have to find “scaling experts” at a rate of $400 per hour to solve future problems, or live with declining software quality and functionality.

Of course, I am not saying, “don’t use off-the-shelf software”. In fact, I’d say that while programmers ought to be able to spend the majority of their time writing assets instead of adapting to pre-existing ones, it is still very often best to use an existing solution if one will suffice. Unless you’re going to be a database company, you shouldn’t be rolling your own alternative to Postgres; you should use what is already there. I’d make a similar argument with programming languages: there are enough good ones already in existence that expecting employees to contend with an in-house programming language that probably won’t be very good is a bad idea. In general, something that is necessary but outside the core competency of the engineers should be found externally, if possible. If you’re a one-product company that needs minimal search, there are great off-the-shelf products that will deliver that. On the other hand, if you’re calling your statistically literate engineers “data scientists” and they want to write some machine learning algorithms instead of trying to make Mahout work for their problem, you should let them.

With core infrastructure (e.g. Unix, C, Haskell) I’d agree that it’s best to use existing, high-quality solutions. I also support going off-the-shelf for the relatively small problems: e.g. a CSV parser. If there’s a bug-free CSV parser out there, there’s no good reason to write one in-house. The mid-range is where off-the-shelf solutions are often inferior– frequently in subtle ways (such as tying a large piece of software architecture to the JVM, or requiring expensive computation to deal with a wonky binary protocol)– to competently-written in-house solutions. Why is this? For the deep, core infrastructure, a wealth of standards already exists, along with high-quality implementations that meet them. Competing against existing assets is probably a wasted effort. On the other hand, for the small problems like CSV parsing, there isn’t much meaningful variability in what a user can want. Typically, the programmer just wants the problem to be solved so she can forget about it. The mid-range of problem size is tricky, though, because there’s enough complexity that off-the-shelf solutions aren’t likely to deliver everything one wants, but not quite enough demand for nearly-unassailable standard implementations to exist in the open-source world. Take linear regression. It might seem like a simple problem, but there are a lot of variables and complexities, such as: handling of large categorical variables, handling of missing data, regularization, highly-correlated inputs, optimization methods, whether to use early stopping, basis expansions, and choice of loss function. For a linear regression problem in 10,000 dimensions with 1 million data points, standards don’t exist yet. This isn’t a core infrastructural problem like building an operating system, but it’s hard enough that off-the-shelf solutions can’t be blindly relied upon to work.
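To make the point concrete, here’s a minimal sketch (my own, not taken from any particular library) of just two of those design decisions– L2 regularization and validation-based early stopping– in a mini-batch gradient-descent fit. The hyperparameters (`lam`, `lr`, `patience`) are illustrative assumptions, and a real solver would also have to contend with sparse inputs, categorical encoding, and missing data, which this sketch omits:

```python
import numpy as np

def ridge_sgd(X, y, lam=0.1, lr=0.01, epochs=200, patience=10, val_frac=0.2, seed=0):
    """L2-regularized linear regression via mini-batch gradient descent.

    Illustrative sketch: holds out a validation slice and stops early
    once validation loss plateaus, keeping the best weights seen.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    idx = rng.permutation(n)
    n_val = int(n * val_frac)
    val, train = idx[:n_val], idx[n_val:]
    w = np.zeros(d)
    best_w, best_loss, stale = w.copy(), np.inf, 0
    for _ in range(epochs):
        rng.shuffle(train)
        for batch in np.array_split(train, max(1, len(train) // 32)):
            Xb, yb = X[batch], y[batch]
            # Gradient of (1/2)*mean squared error plus (lam/2)*||w||^2.
            grad = Xb.T @ (Xb @ w - yb) / len(batch) + lam * w
            w -= lr * grad
        val_loss = np.mean((X[val] @ w - y[val]) ** 2)
        if val_loss < best_loss - 1e-8:
            best_w, best_loss, stale = w.copy(), val_loss, 0
        else:
            stale += 1
            if stale >= patience:  # early stopping
                break
    return best_w

# Tiny synthetic check: recover a known linear relationship.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
true_w = np.array([2.0, -1.0, 0.5, 0.0, 3.0])
y = X @ true_w + 0.01 * rng.normal(size=500)
w_hat = ridge_sgd(X, y, lam=1e-4, lr=0.05)
```

Even this toy version forces choices– regularization strength, batch size, stopping criterion– and none of them generalize cleanly to the 10,000-dimension, million-row case, which is exactly why no standard implementation settles the mid-range.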

This “mid-range” of problem is where programmers are expected to establish themselves, and it’s often where there’s a lot of pressure to use third-party products, regardless of whether they’re appropriate to the job. At this level, there’s enough variability in expectations and problem type that beating an off-the-shelf solution into conforming to the business need is just as hard as writing it from scratch, but the field isn’t so established that standards exist and the problem is considered “fully solved” (or close to it) already. Of course, off-the-shelf software should be used on mid-range problems if (a) it’s likely to be good enough, (b) the problems are uncorrelated to the work that the software engineers are trying to do and solving them in-house would be perceived as a distraction, and (c) the software can be used without architectural compromise (e.g. being forced to rewrite code in Java).

The failure, I would say, isn’t that technology companies use off-the-shelf solutions for most problems, because that is quite often the right decision. It’s that, at many technology companies, that’s all they use, because core infrastructure and R&D don’t fit into the two-week “sprints” that the moronic “Agile” fad demands engineers accommodate, and therefore can’t be done in-house. The culture of trust in engineers is not there, and that (not the question of whether one technology is used over another) is the crime. Moreover, this often means that programmers spend more time overcoming the mismatch between existing assets and the problems they need to solve than they spend building new assets from scratch (which is what we’re trained, and built, to do). In the long term, this leads to the atrophy of the engineer’s skills, lowers her satisfaction with her job, and can damage her career (unless she can move into management). For a company, this spells attrition and permanent loss of capability.

The never-invent-here attitude is stylish because it seems to oppose the wastefulness and lethargy of the old “not-invented-here” corporate regime, while simultaneously reaffirming the fast-and-sloppy values of the new one, doped with venture capital and private equity. It benefits “product people” and non-technical makers of unrealistic promises (to upper management, clients, or investors) while accruing technical debt and turning programmers into a class of underutilized API jockeys. It is, to some extent, a reaction against the “not invented here” world of yesteryear, in which engineers (at least, by stereotype) toiled on unnecessary custom assets without a care for the company’s more immediate needs. I would also say that it’s worse.

Why is the “never invent here” (NeIH) mentality worse than “not invented here” (NIH)? Both are undesirable, clearly. NIH, taken to the extreme, can become a waste of resources. That said, it is at least a “waste” that keeps the programmers’ skills sharp. On the other hand, NeIH can be just as wasteful of resources, as programmers contend with the quirks and bugs of software assets that they must find externally, because their businesses (being short-sighted and talent-hostile) do not trust them to build such things. It also has long-term negative effects on morale, talent level, and the general integrity of the programming job. My guess is that the “never invent here” mentality will be proven, by history, to have been a very destructive one that will lose us half a generation of programmers.

If you’re a non-technical businessperson, or a CTO who’s been out of the code game for five years, what should you take away from this post? If your sense is that your engineers want to use existing, off-the-shelf software, then you should generally let them. I am certainly not saying that it is bad to do so. If the engineers believe that an existing asset will do a job better than they could do starting from scratch, and they’re industrious and talented, they’re probably right. On the other hand, senior engineers will develop a desire to build and run their own projects, and they will agitate to get that opportunity. The short-termist, never-invent-here attitude that I’ve seen in far too many companies is likely to get in the way of that; you should remove it before it does. Of course, the matter of what to invent in-house is far more important than the vague, ill-specified question of how much; on both questions, senior engineering talent can be trusted to figure it out.

In that light, we get to the fundamental reason why “never invent here” is so much more toxic than its opposite. A “not invented here” culture is one in which engineers misuse freedom, or in which managers misuse authority, and do a bit of unnecessary work. That’s not good. But the “never invent here” culture is one in which engineers are out of power, and therefore aren’t trusted to decide when to use third-party assets and when to build from scratch. It’s business-driven engineering, which means that the passengers are flying the plane, and that’s never a good thing.