The tyranny of the friendless

I’ve written a lot about the myriad causes of organizational decay. I wrote a long series on the topic, here. In most of my work, I’ve written about decay as an inevitable, entropic outcome driven by a number of forces, many unnamed and abstract, and therefore treated as inexorable ravages of time.

However, I’ve recently come to the realization that organizational decay is typically dominated by a single factor that is easy to understand, being so core to human sociology. While it’s associated with large companies, it can set in when they’re small. It’s a consequence of in-group exclusivity. Almost all organizations function as oligarchies, some with formal in-crowds (government officials or titled managers) and some without. If this in-crowd develops a conscious desire to exclude others, it will select and promote people who are likely to retain and even guard its boundaries. Only a certain type of person is likely to do this: friendless people. Those who dislike, and are disliked by, the out-crowd are unlikely to let anyone else in. They’re non-sticky: they come with a promise of “You get just me”, and that makes them very attractive candidates for admission into the elite.

Non-stickiness is seen as desirable from above– no one wants to invite the guy who’ll invite his whole entourage– but, in the business world, it’s negatively correlated with pretty much any human attribute that could be considered a virtue. People who are good at their jobs are more likely to be well-liked and engaged and form convivial bonds. People who are socially adept tend to have friends at high levels and low. People who care a lot about social justice are likely to champion the poor and unpopular. A virtuous person is more likely to be connected laterally and from “below”. That shouldn’t count against a person, but for an exclusive club that wants to stay exclusive, it does. What if he brings his friends in, and changes the nature of the group? What if his conscience compels him to spill in-group secrets? For this reason, the non-sticky and unattached are better candidates for admission.

The value that executive suites place on non-stickiness is one of many possible explanations for managerial mediocrity as it creeps into an organization. Before addressing why I think my theory is right, I need to analyze three of the others, all styled as “The $NAME Principle”.

The “Peter Principle” is the claim that people are promoted up to their first level of incompetence, and stay there. It’s an attractive notion, insofar as most people have seen it in action. There are terminal middle managers who don’t seem like they’ll ever gain another step, but who play politics just well enough not to get fired. (It sucks to be beneath one. He’ll sacrifice you to protect his position.) That said, I find the Peter Principle, in general, to be mostly false because of its implicit belief in corporate meritocracy. What is most incorrect about it is the belief that upper-level jobs are harder or more demanding than those in the middle. In fact, there’s an effort thermocline in almost every organization. Above the effort thermocline, which is usually the de facto delineation between mere management roles and executive positions, jobs get easier and less accountable with increasing rank. If the one-tick-late-but-like-clockwork Peter Principle were the sole limiting factor on advancement, you’d expect that those who pass the thermocline would all become CEOs, and that’s clearly not the case. While merit and hard work are required less and less with increasing rank, political resistance intensifies simply because there are so few of the top jobs that there’s bound to be competition. Additionally, even below the effort thermocline there are people employed below their maximum level of competence because of political resistance. The Peter Principle is too vested in the idea of corporate meritocracy to be accurate.

Scott Adams has proposed an alternative theory of low-merit promotion: the Dilbert Principle. According to it, managers are often incompetent line workers who were promoted “out of harm’s way”. I won’t deny that it exists in some organizations, although it usually isn’t applied within critical divisions of the company. When incompetents are knowingly promoted, it’s usually a dead-end pseudo-promotion that comes with a small pay raise and a title bump, but lateral movement into unimportant work. That said, its purpose isn’t just to limit damage, but to make the person more likely to leave. If someone’s not bad enough to fire but not especially good, gilding his CV with a fancy title might invite him to (euphemism?) succeed elsewhere… or, perhaps, not-succeed elsewhere but be someone else’s problem. All of that said, this kind of move is pretty rare. Incompetent people who are politically successful are not known to be incompetent, because the politics of performance outweighs actual performance ten-to-one in making reputations. Those with a reputation for incompetence are the ones who failed politically, and they don’t get exit promotions. They just get fired.

The general idea that people are made managers to limit their damage potential is false because the decision to issue such promotions is one that would, by necessity, be made by other managers. As a tribe, managers have far too much pride to ever think the thought, “he’s incompetent, we must make him one of us”. Dilbert-style promotions occasionally occur and incompetents definitely get promoted, but the intentional promotion of incompetents into important roles is extremely rare.

Finally, there’s the Gervais Principle, developed by Venkatesh Rao, which asserts that organizations respond to both performance and talent, but sometimes in surprising ways. Low-talent high-performers (“eager beavers” or “Clueless”) get middle management roles where they carry the banner for their superiors, and high-talent low-performers (“Sociopaths”) either get groomed for upper-management or get fired. High-talent high-performers aren’t really addressed by the theory, and there’s a sound reason why. In this case, the talent that matters most is strategy: not working hard necessarily, but knowing what is worth working on. High-talent people will, therefore, work very hard when given tasks appropriate to their career goals and desired trajectory in the company, but their default mode will be to slack on the unimportant make-work. So a high-talent person who is not being tapped for leadership will almost certainly be a low performer: at least, on the assigned make-work that is given to those not on a career fast track.

The Gervais/MacLeod model gives the most complete assessment of organizational functioning, but it’s not without its faults. Intended as satire, the MacLeod cartoon gave unflattering names to each tier (“Losers” at the bottom, “Clueless” in middle-management, and “Sociopaths” at the top). It also seems to be a static assertion, while the dynamic behaviors are far more interesting. How do “Sociopaths” get to the top, since they obviously don’t start there? When “Clueless” become clued-in, where do they go? What do each of these people really want? For how long do “Losers” tolerate losing? (Are they even losing?) Oh, and– most importantly for those of us who are studying to become more like the MacLeod Sociopaths (who aren’t actually sociopathic per se, but risk-tolerant, motivated, and insubordinate)– what determines which ones are groomed for leadership and which ones are fired?

If there’s an issue with the Gervais Principle, it’s that it asserts too much intelligence and intent within an organization. No executive ever says, “that kid looks like a Sociopath; let’s train him to be one of us.” The Gervais model describes the stable state of an organization in advanced decline, but doesn’t (in my opinion) give full insight into why things happen in the way that they do.

So I’m going to offer a fourth model of creeping managerial mediocrity. Unlike the Peter Principle, it doesn’t believe in corporate meritocracy. Unlike the Dilbert Principle, it doesn’t assert that managers are stupid or that their jobs are unimportant (because we know both to be untrue). Unlike the Gervais Principle, it doesn’t believe that organizations knowingly select for cluelessness or sociopathy (although that is sometimes the case).

  • The Lumbergh Principle: an exclusive sub-organization, such as an executive suite, that wishes to remain exclusive will select for non-stickiness, which is negatively correlated with most desirable personal traits. Over time, this will degrade the quality of people in the leadership ranks, and destroy the organization.

If it’s not clear, I named this one after Bill Lumbergh from Office Space. He’s uninspiring, devoid of charisma, and seems to hold the role of middle manager for an obvious reason: there is no chance that he would ever favor his subordinates’ interests over those of upper management. He’s friendless, and non-sticky by default. He wouldn’t, say, tell an underling that he’s undervalued and should ask for a 20% raise, or give advance notice of a project cancellation or layoff so a favored subordinate can get away in time. He’ll keep his boss’s secrets because he just doesn’t give a shit about the people who are harmed by his doing so. No one likes him and he likes no one, and that’s why upper management trusts him.

Being non-sticky and being incompetent aren’t always the same thing, but they tend to correlate often enough to represent a common case (if not the most common case) of an incompetent’s promotion. Many people who are non-sticky are that way because they’re disliked and alienating to other people, and while there are many reasons why that might be, incompetence is a common one. Good software engineers are respected by their peers and tend to make friends at the bottom. Bad software engineers who play politics and manage up will be unencumbered by friends at the bottom.

To be fair, the desire to keep the management ranks exclusive is not the only reason why non-stickiness is valued. A more socially acceptable reason for it is that non-sticky people are more likely to be “fair” in an on-paper sense of the word. They don’t give a damn about their subordinates, their colleagues, and possible future subordinates, but they don’t-give-a-damn equally. Because of this, they support the organization’s sense of itself as a meritocracy. Non-sticky people are also, in addition to being “fair” in a toxic way that ultimately serves the needs of upper management only, quite consistent. As corporations would rather be consistent than correct– firing the wrong people (i.e. firing competent people) is unfortunate, but firing inconsistently opens the firm to a lawsuit– they are attractive for this reason as well. You can always trust the non-sticky person, even if he’s disliked by his colleagues for a good reason, to favor the executives’ interests above the workers’. The fact that most non-sticky people absolutely suck as human beings is held to be irrelevant.

Exceptions

As I get older and more experienced, I’m increasingly aware that there’s a lot of diversity in how organizations run themselves. We’re not condemned to play out roles of “Loser”, “Clueless”, or “Sociopath”. So it’s worth acknowledging that there are a lot of cases in which the Lumbergh Principle doesn’t apply. Some organizations try to pick competent leaders, and it’s not inevitable that an organization develops such a contempt for its own workers as to define the middle-management job in such a stark way. Also, the negativity that is often directed at middle management fails to account for the fact that upper management almost always has to pass through that tier in some way or another. Middle management gets its stigma because of the terminal middle managers with no leadership skills; the ones promoted into those roles because, defective as they were, their superiors could trust them. However, there are many other reasons why people pass through middle management roles, or take them on because they believe that the organization needs them to do so.

The Lumbergh Principle only takes hold in a certain kind of organization. That’s the good news. The bad news is that most organizations are that type. It has to do with internal scarcity. At some point, organizations decide that there’s only a finite amount of “goodness” (whether we’re talking about autonomy or trust or credibility) and leave people to compete for these artificially limited benefits. Employee stack ranking is a perfect example of this: for one person to be a high performer, another must be marked low. When a scarcity mentality sets in, R&D is slashed and business units are expected to compete for internal clients in order to justify themselves, which means that these “intrapreneurial” efforts face the worst of both worlds between being a startup and being in a large organization. It invariably gets ugly, and a zero-sum mentality takes hold. At this point, the goal of the executive suite becomes maintaining position rather than growing the company, and invitations into the inner circle (and the concentric circles that comprise various tiers of management) are given only to replace outgoing personnel, with a high degree of preference given to those who can be trusted not to let conscience get in the way of the executives’ interests.

One might expect that startups would be a way out. Is that so? The answer is: sometimes. It is generally better, under this perspective, for an organization to be growing than stagnant. It’s probably better, in many cases, to be small. At five people, it’s far more typical to see the “live or die as a group” mentality than the internal adversity that characterizes large organizations. All of that said, there are quite a number of startups that already operate under a scarcity mentality, even from inception. The VCs want it that way, so they demand extreme growth and risk-seeking on (relative to the ambitions they require) a shoestring budget and call it “scrappy” or “lean”. The executives, in turn, develop a culture of stinginess wherein small expenses are debated endlessly. Then the middle managers bring in that “Agile”/Scrum bukkake in which programmers have to justify weeks and days of their own fucking working time in the context of sprints and iterations and glass toys. One doesn’t need a big, established company to develop the toxic scarcity mentality that leads to the Lumbergh Effect. It can start at a company’s inception– something I’ve seen on multiple occasions. In that case, the Lumbergh Effect exists because the founders and executives have a general distrust for their own people. That said, the tendency of organizations (whether democratic or monarchic on paper) toward oligarchy means that they need to trust someone. Monarchs need lieutenants, and lords need vassals. The people who present themselves as natural candidates for promotion are the non-sticky ones who’ll toss aside any allegiances to the bottom. However, those people are usually non-sticky because they’re disliked, and they’re usually disliked because they’re unlikeable and incompetent. It’s through that dynamic– not intent– that most companies end up being middle-managed (and, after a few years, upper-managed) by incompetents.

Advanced Lumberghism

What makes the Lumbergh Principle so deadly is the finality of it. The Peter Principle, were it true, would admit an easy solution: just fire the people who’ve plateaued. (Some companies do exactly that, but it creates a toxic culture of stack-ranking and de facto age discrimination.) The Dilbert Principle has a similar solution: if you are going to “promote” someone into a dead end, as a step in managing that person out, make sure to follow through. As for the Gervais Principle, it describes an organization that is already in an advanced state of dysfunction (though it is so useful precisely because most organizations are in such states) and, while it captures the static dynamics (i.e. the microstate transitions and behaviors in a certain high-entropy, degenerate macrostate), it does not necessarily tell us why decay is the norm for human organizations. I think that the Lumbergh Effect, however, does give us a cohesive sense of it. It isn’t quite right to say that “the elite” is the problem, because while elites are generally disliked, they’re not always harmful. The Lumbergh Effect sets in when the elite’s desire to protect its boundaries results in the elevation of a middling class of non-virtuous people, and as such people become the next elite (through attrition in the existing one) the organization falls to pieces. We now know, at least in a large proportion of cases, the impulses and mechanics that bring an organization to ruin.

Within organizations, there’s always an elite. Governments have high officials and corporations have executives. We’d like for that elite to be selected based on merit, but even people of merit dislike personal risk and will try to protect their positions. Over time, elites form their own substructures, and one of those is an outer “shell”. The lowest-ranking people inside that elite, and the highest-ranking people outside of it who are auditioning to get in, take on guard duty and form the barrier. Politically speaking, the people who live at that shell (not the cozy people inside or the disengaged outsiders who know they have no chance of entering) will be the hardest-working (again, an effort thermocline) at defining and guarding the group’s boundaries. Elites, therefore, don’t recruit for their “visionary” inner ranks or their middling directorate, because you have to serve at the shell before you have a chance of getting further in. Rather, they recruit guards: non-sticky people who’ll keep the group’s barriers (and its hold over the resources, information, and social access that it controls) impregnable to outsiders. The best guards, of course, are those who are loyal upward because they have no affection in the lateral or downward directions. And, as discussed, such people tend to be that way because no one likes them where they are. That this leads organizations to the systematic promotion of the worst kinds of people should surprise no one.

Can tech fix its broken culture?

I’m going to spoil the ending. The answer is: yes, I think so. Before I tackle that matter, though, I want to address the blog post that led me to write on this topic. It’s Tim Chevalier’s “Farewell to All That” essay about his departure from technology. He seems to have lost faith in the industry, and is taking a break from it. It’s worth reading in its entirety. Please do so, before continuing with my (more optimistic) analysis.

I’m going to analyze specific passages from Chevalier’s essay. It’s useful to describe exactly what sort of “broken culture” we’re dealing with, in order to replace a vague “I don’t like this” with a list of concrete grievances, identifiable sources and, possibly, implementable solutions.

First, he writes:

I have no love left for my job or career, although I do have it for many of my friends and colleagues in software. And that’s because I don’t see how my work helps people I care about or even people on whom I don’t wish any specific harm. Moreover, what I have to put up with in order to do my work is in danger of keeping me in a state of emotional and moral stagnation forever.

This is a common malaise in technology. By the time we’re 30, we’ve spent the better part of three decades building up potential and have refined what is supposed to be the most important skill of the 21st century. We’d like to work on clean energy or the cure for cancer or, at least, creating products that change and save lives (like smart phones, which surely have). Instead, most of us work on… helping businessmen unemploy people. Or targeting ads. Or building crappy, thoughtless games for bored office workers. That’s really what most of us do. It’s not inspiring.

Technologists are fundamentally progressive people. We build things because we want the world to be better tomorrow than it is today. We write software to solve problems forever. Yet most of what our employers actually make us do is not congruent with the progressive inclination that got us interested in technology in the first place. Non-technologists cannot adequately manage technologists because technologists value progress, while non-technologists tend to value subordination and stability.

Open source is the common emotional escape hatch for unfulfilled programmers, but a double-edged sword. In theory, open-source software advances the state of the world. In practice, it’s less clear cut. Are we making programmers (and, therefore, the world) more productive, or are we driving down the price of software and consigning a generation to work on shitty, custom, glue-code projects? This is something that I worry about, and I don’t have the answer. I would almost certainly say that open-source software is very much good for the world, were it not for the fact that programmers do need to make money, and giving our best stuff away for free just might be hurting the price for our labor. I’m not sure. As far as I can tell, it’s impossible to measure that counterfactual scenario.

If there’s a general observation that I’d make about software programmers, and technologists in general, it’s that we’re irresponsibly adding value. We create so much value that it’s ridiculous, and so much that, by rights, we ought to be calling the shots. Yet we find value-capture to be undignified and let the investors and businessmen handle that bit of the work. So they end up with the authority and walk away with the lion’s share; we’re happy if we make a semi-good living. The problem is that value (or money) becomes power, and the bulk of the value we generate accrues not to people who share our progressive values, but to next-quarter thinkers who end up making the world more ugly. We ought to fix this. By preferring to stay ignorant of how the value we generate is distributed and employed, we’re complicit in widespread unemployment, mounting economic and political inequality, and the general moral problem of the wrong people winning.

I don’t spend much time solving abstract puzzles, at least not in comparison to the amount of time I spend doing unpaid emotional labor.

Personally, I care more about solving real-world problems and making peoples’ lives better than I do about “abstract puzzles”. It’s fun to learn about category theory, but what makes Haskell exciting is that its core ideas actually work at making quickly developed code robust beyond what is possible (within the same timeframe; JPL-style C is a different beast) in other languages. I don’t find much use in abstract puzzles for their own sake. That said, the complaint about “unpaid emotional labor” resonates with me, though I might use the term “uncompensated emotional load”. If you work in an open-plan office, you’re easily losing 10-15 hours of your supposedly free time just recovering from the pointless stress inflicted by a bad work environment. I wouldn’t call it an emotional “labor”, though. Labor implies conscious awareness. Recovering from emotional load is draining, but it’s not a conscious activity.

But the tech industry is wired with structural incentives to stay broken. Broken people work 80-hour weeks because we think we’ll get approval and validation for our technical abilities that way. Broken people burn out trying to prove ourselves as hackers because we don’t believe anyone will ever love us for who we are rather than our merit.

He has some strong points here: the venture-funded tech industry is designed to give a halfway-house environment for emotionally stunted (I wouldn’t use the word “broken”, because immaturity is very much fixable) fresh college grads. That said, he’s losing me on any expectation of “love” at the workplace. I don’t want to be “loved” by my colleagues. I want to be respected. And respect has to be earned (ideally, based on merit). If he wants unconditional love, he’s not going to find that in any job under the sun; he should get a dog, or a cat. That particular one isn’t the tech industry’s fault.

Broken people believe pretty lies like “meritocracy” and “show me the code” because it’s easier than confronting difficult truths; it’s as easy as it is because the tech industry is structured around denial.

Meritocracy is a useless word and I think that it’s time for it to die, because even the most corrupt work cultures are going to present themselves as meritocracies. The claim of meritocracy is disgustingly self-serving for the people at the top.

“Show me the code” (or data) can be irksome, because there are insights for which coming up with data is next to impossible, but that any experienced person would share. That said, data- (or code-)driven decision making is better than running on hunches, or based on whoever has the most political clout. What I can’t stand is when I have to provide proof but someone else doesn’t. Or when someone decides that every opinion other than his is “being political” while his is self-evident truth. Or when someone in authority demands more data or code before making a ruling, then goes on to punish you for getting less done on your assigned work (because he really doesn’t want you to prove him wrong). Now those are some shitty behaviors.

I generally agree that not all disputes can be resolved with code or data, because some cases require a human touch and experience; that said, there are many decisions that should be handled in exactly that way: quantitatively. What irks me is not a principled insistence on data-driven decisions, but when people with power acquire the right to make everyone else provide data (which may be impossible to come by) while remaining unaccountable, themselves, to do the same. And many of the macho jerks who overplay the “show me the code” card (because they’ve acquired undeserved political power), when code or data are too costly to acquire, are doing just that.

A culture that considers “too sensitive” an insult is a culture that eats its young. Similarly, it’s popular in tech to decry “drama” when no one is ever sure what the consensus is on this word’s meaning, but as far as I can tell it means other people expressing feelings that you would prefer they stay silent about.

I dislike this behavior pattern. I wouldn’t use the word “drama” so much as political. Politically powerful bad actors are remarkably good at creating a consensus that their political behaviors are apolitical and “meritocratic”, whereas people who disagree with or oppose them are “playing politics” and “stirring up drama”. False objectivity is more dangerous than admitted subjectivity. The first suits liars, the second suits people who have the courage to admit that they are fallible and human.

Personally, I tend to disclose my biases. I can be very political. While I don’t value emotional drama for its own sake, I dislike those who discount emotion. Emotions are important. We all have them, and they carry information. It’s up to us to decide what to do with that information, and how far we should listen to emotions, because they’re not always wise in what they tell us to do. There is, however, nothing wrong with having strong emotions. It’s when people are impulsive, arrogant, and narcissistic enough to let their emotions trample on other people that there is a problem.

Consequently, attempting to shut one’s opponent down by accusing him of being “emotional” is a tactic I’d call dirty, and it should be banned. We’re humans. We have emotions. We also have the ability (most of the time) to keep them in their place.

“Suck it up and deal” is an assertion of dominance that disregards the emotional labor needed to tolerate oppression. It’s also a reflection of the culture of narcissism in tech that values grandstanding and credit-taking over listening and empathizing.

This is very true. “Suck it up and deal” is also dishonest in the same way that false objectivity and meritocracy are. The person saying it is implicitly suggesting that she suffered similar travails in the past. At the same time, it’s a brush-off that indicates that the other person is of too low status for it to be worthwhile to assess why the person is complaining. It says, “I’ve had worse” followed by “well, I don’t actually know that, because you’re too low on the food chain for me to actually care what you’re going through.” It may still be abrasive to say “I don’t care”, but at least it’s honest.

Oddly enough, most people who have truly suffered fight hard to prevent others from having similar experiences. I’ve dealt with a lot of shit coming up in the tech world, and the last thing I would do is inflict it on someone else, because I know just how discouraging this game can be.

if you had a good early life, you wouldn’t be in tech in the first place.

I don’t buy this one. Some people are passionate about software quality, or about human issues that can be solved by technology. Not everyone who’s in this game is broken.

There certainly are a lot of damaged people working in private-sector tech, and the culture of the VC-funded world attracts broken people. What’s being said here is probably 80 or 90 percent true, but there are a lot of people in technology (especially outside of the VC-funded private sector tech that’s getting all the attention right now) who don’t seem more ill-adjusted than anyone else.

I do think that the Damaso Effect requires mention. On the business side of tech (which we report into) there are a lot of people who really don’t want to be there. Venture capital is a sub-sector of private equity and considered disreputable within that crowd: it’s a sideshow to them. Their mentality is that winners work on billion-dollar private equity deals in New York and losers go to California and boss nerds around. And for a Harvard MBA to end up as a tech executive (not even an investor!) is downright embarrassing. So that Columbia MBA who’s a VP of HR at an 80-person ed-tech startup is not exactly going to be attending reunions. This explains the malaise that programmers often face as they get older: we rise through the ranks and see that, if not immediately, we eventually report up into a set of people who really don’t want to be here. They view being in tech as a mark of failure, like being relegated to a colonial outpost. They were supposed to be MDs at Goldman Sachs, not pitching business plans to clueless VCs and trying to run a one-product company on a shoestring (relative to the level of risk and ambition that it takes to keep investors interested) budget.

That said, there are plenty of programmers who do want to be here. They’re usually older and quite capable and they don’t want to be investors or executives, though they often could get invited to those ranks if they wished. They just love solving hard problems. I’ve met such people; many, in fact. This is a fundamental reason why the technology industry ought to be run by technologists and not businessmen. The management failed into it and would jump back into MBA-Land Proper if the option were extended, and they’re here because they’re the second or third tier that got stuck in tech; but the programmers in tech actually, in many cases, like being here and value what technology can do.

Failure to listen, failure to document, and failure to mentor. Toxic individualism — the attitude that a person is solely responsible for their own success, and if they find your code hard to understand, it’s their fault — is tightly woven through the fabric of tech.

This is spot-on, and it’s a terrible fact. It holds the industry back. We have a strong belief in progress when it comes to improving tools, adding features to a code base, and acquiring more data. Yet the human behaviors that enable progress, we tend to undervalue.

But in tech, the failures are self-reinforcing because failure often has no material consequences (especially in venture-capital-funded startups) and because the status quo is so profitable — for the people already on the inside — that the desire to maintain it exceeds the desire to work better together.

This is an interesting observation, and quite true. The upside goes mostly to the well-connected. Most of the Sand Hill Road game is about taking strategies (e.g. insider trading, market manipulation) that would be illegal on public markets and applying them to microcap private equities over which there are fewer rules. The downside is borne by the programmers, who suffer extreme costs of living and a culture of age discrimination on a promise of riches that will usually never come. As of now, the Valley has been booming for so long that many people have forgotten that crashes and actual career-rupturing failures even exist. In the future… who knows?

As for venture capital, it delivers private prosperity, but its returns to passive investors (e.g. the ones whose money is being invested, as opposed to the VCs collecting management fees) are dreadful. This industry is not succeeding, except according to the needs of the well-connected few. What’s happening is not “so profitable” at all. It’s not actually very successful. It’s just well-marketed, and “sexy”, to people under 30 who haven’t figured out what they want to do with their lives.

I remember being pleasantly amazed at hearing that kind of communication from anybody in a corporate conference room, although it was a bit less nice when the CTO literally replied with, “I don’t care about hurt feelings. This is a startup.”

That one landed. I have seen so many startup executives and founders justify bad behavior with “this is a startup” or “we’re running lean”. It’s disgusting. It’s the False Poverty Effect: people who consider themselves poor based on peer comparison will tend to believe themselves entitled to behave badly or harm others because they feel like it’s necessary in order to catch up, or that their behavior doesn’t matter because they’re powerless compared to where they should be. It usually comes with a bit of self-righteousness, as well: “I’m suffering (by only taking a $250k salary) for my belief in this company.” The false-poverty behavior is common in startup executives, because (as I already discussed) they’d much rather be elsewhere– executives in much larger companies, or in private equity.

I am neither proud of nor sorry for any of these lapses, because ultimately it’s capitalism’s responsibility to make me produce for it, and within the scope of my career, capitalism failed. I don’t pity the ownership of any of my former employers for not having been able to squeeze more value out of me, because that’s on them.

I have nothing to say other than that I loved this. Ultimately, corporate capitalism fails to be properly capitalistic because of its command-economy emphasis on subordination. When people are treated as subordinates, they slack and fade. This hurts the capitalist more than anyone else.

Answering the question

I provided commentary on Tim Chevalier’s post because not only is he taking on the tech industry, but he’s giving proof to his objection by leaving it. Tech has a broken culture, but it’s not enough to issue vague complaints as many do. It’s not just about sexism and classism and Agile and Java Shop politics in isolation. It’s about all of that shit, taken together. It’s about the fact that we have a shitty hand-me-down culture from those who failed out of the business mainstream (“MBA Culture”) and ended up acquiring its worst traits (e.g. sexism, ageism, anti-intellectualism). It’s about the fact that we have this incredible skill in being able to program, and yet 99 percent of our work is reduced to a total fucking joke because the wrong people are in charge. If we care about the future at all, we have to fight this.

Fixing one problem in isolation, I’ll note, will do no good. This is why I can’t stand that “lean in” nonsense that is sold to unimaginative women who want some corporate executive to solve their problems. You cannot defeat the systemic problems that disproportionately harm women, and maintain the status quo at the same time. You can’t take an unfair, abusive system designed to concentrate power and “fix” it so that it is more fair in one specific way, but otherwise operates under the same rules. You can’t have a world where it is career suicide to take a year off of work for any reason except to have a baby. If you maintain that cosmetic obsession with recency, you will hurt women who wish to have children. You have to pick: either accept the sexism and ageism and anti-intellectualism and the crushing mediocrity of what is produced… or overthrow the status quo and change a bunch of things at the same time. I know which one I would vote for.

Technology is special in two ways, and both of these are good news, at least insofar as they bear on what is possible if we get our act together. The first is that it’s flamingly obvious that the wrong people are calling the shots. Look at many of the established tech giants. In spite of having some of the best software engineers in the world, many of these places use stack ranking. Why? They have an attitude that software engineering is “smart people work” and that everything else– product management, people management, HR– is “stupid people work” and this becomes a self-fulfilling prophecy. You get some of the best C++ engineers in the world, but you get stupid shit like stack ranking and “OKRs” and “the 18-month rule” from your management.

It would be a worse situation to have important shots called by idiots and not have sufficient talent within our ranks to replace them. But we do have it. We can push them aside, and take back our industry, if we learn how to work together rather than against each other.

The second thing to observe about technology is that it’s so powerful as to admit a high degree of mismanagement. If we were a low-margin business, Scrum would kill rather than merely retard companies. Put simply, successful applications of technology generate more wealth than anyone knows what to do with. This could be disbursed to employees, but that’s rare: for most people in startups, their equity slices are a sad joke. Some of it will be remitted to investors and to management. A great deal of that surplus, however, is spent on management slack: tolerating mismanagement at levels that would be untenable in an industry with a lower margin. For example, stack-ranking fell out of favor after it caused the calamitous meltdown of Enron, and “Agile”/”Scrum” is a resurrection of Taylorist pseudo-science that was debunked decades ago. Management approaches that don’t work, as their proponents desperately scramble for a place to park them, end up in tech. This leaves our industry, as a whole, running below quarter speed and still profitable. Just fucking imagine how much there would be to go around, if the right people were calling the shots.

In light of the untapped economic potential that would accrue to the world if the tech industry were better run, and had a better culture, it seems obvious that technology can fix the culture. That said, it won’t be easy. We’ve been under colonial rule (by the business mainstream) for a long time. Fixing this game, and eradicating the bad behaviors that we’ve inherited from our colonizing culture (which is more sexist, anti-progressive, anti-intellectual, classist and ageist than any of our natural tendencies) will not happen overnight. We’ve let ourselves be defined, from above, as arrogant and socially inept and narcissistic, and therefore incapable of running our own affairs. That, however, doesn’t reflect what we really are, nor what we can be.

The Sturgeon Filter: the cognitive mismatch between technologists and executives

There’s a rather negative saying, originally applied to science fiction, known as Sturgeon’s Law: “ninety percent of everything is crap”. Quantified so generally, I don’t think that it’s true or valuable. There are plenty of places where reliability can be achieved and things “just work”. If ninety percent of computers malfunctioned, the manufacturer would be out of business, so I don’t intend to apply the statement to everything. Still, there’s enough truth in the saying that people keep using it, even applying it far beyond what Theodore Sturgeon actually intended. How far is it true? And what does it mean for us in our working lives?

Let’s agree to take “ninety percent” to be a colloquial representation of “most, and it’s not close”; typically, between 75 and 99 percent. What about “is crap”? Is it fair to say that most creative works are crap? I wouldn’t even know where to begin on that one. Certainly, I only deign to publish about a quarter of the blog posts that I write, and I think that that’s a typical ratio for a writer, because I know far too well how often an appealing idea fails when taken into the real world. I think that most of the blog posts that I actually release are good, but a fair chunk of my writing is crap that, so long as I’m good at self-criticism, will never see daylight.

I can quote a Sturgeon-like principle with more confidence, in such a way that preserves its essence but is hard to debate: the vast majority (90 percent? more?) of mutations are of negative value and, if implemented, will be harmful. This concept of “mutation” covers new creative work as well as maintenance and refinement. To refine something is to mutate it, while new creation is still a mutation of the world in which it lives. And I think that my observation is true: a few mutations are great, but most are harmful or, at least, add complexity and disorder (entropy). In any novel or essay, changing a word at random will probably degrade its quality. Most “house variants” of popular games are not as playable as the original game, or are not justified by the increased complexity load. To mutate is, usually, to inflict damage. Two things save us and allow progress. One is that the beneficial mutations often pay for the failures, allowing macroscopic (if uneven) progress. The second is that we can often audit mutations and reverse a good number of those that turn out to be bad. Version control, for programmers, enables us to roll back mutations that are proven to be undesirable.

The Sturgeon Mismatch

Programmers experience the negative effects of random mutations all the time. We call them “bugs”, and they range from mild embarrassments to catastrophic failures, but very rarely is a discrepancy between what the programmer expects of the program, and what it actually does, desirable. Of course, intended mutations have a better success rate than truly “random” ones would, but even in those, there is a level of ambition at which the likelihood of degradation is high. I know very little about the Linux kernel and if I tried to hack it, my first commits would probably be rejected, and that’s a good thing. It’s only the ability to self-audit that allows the individual programmer, on average, to improve the world while mutating it. It can also help to have unit tests and, if available for the language, a compiler and a strong type system; those are a way to automate at least some of this self-censoring.

I’m a reasonably experienced programmer at this point, and I’m a good one, and I still generate phenomenally stupid bugs. Who doesn’t? Almost all bugs are stupid– tiny, random injections of entropy emerging from human error– which is why the claim (for example) that “static typing only catches ‘stupid’ bugs” is infuriating. What makes me a good programmer is that I know what tools and processes to use in order to catch them, and this allows me to take on ambitious projects with a high degree of confidence in the code I’ll be able to write. I still generate bugs and, occasionally, I’ll even come up with a bad idea. I’m also very good at catching myself and fixing mistakes quickly. I’m going to call this selective self-censoring that prevents 90 percent of one’s output from being crap the Sturgeon Filter.
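To make that concrete, here’s a minimal, hypothetical Haskell sketch (the function and values are invented for illustration, not taken from any real codebase) of the kind of tiny, stupid bug I mean, and of how a compiler with a strong type system acts as part of the Sturgeon Filter by refusing to let it through:

```haskell
-- Hypothetical example, invented for illustration: a tiny entropy injection
-- of exactly the kind a compiler catches before it ever reaches production.

-- Sum the durations, in seconds, of a batch of requests.
totalSeconds :: [Int] -> Int
totalSeconds = sum

-- The "stupid" bug: passing a single duration where a list is expected.
-- GHC rejects this at compile time with an error along the lines of
-- "Couldn't match expected type '[Int]' with actual type 'Int'",
-- so the mutation never ships.
--
-- broken :: Int
-- broken = totalSeconds 42

-- The corrected call type-checks and does what was intended.
report :: Int
report = totalSeconds [42, 17, 61]

main :: IO ()
main = print report   -- prints 120
```

The point isn’t that the bug is interesting; it’s that the feedback is immediate, objective, and impossible to argue with, which is exactly what builds the filter.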

With a strong Sturgeon Filter, you can export the good mutations and bury the bad ones. This is how reliability (either in an artistic masterpiece, or in a correct, efficient program) can be achieved by unreliable creatures such as humans. I’d further argue that to be a competent programmer requires a strong Sturgeon Filter. The good news is that this filter is built up fairly quickly by tools that give objective feedback: compilers and computers that follow instructions literally, and malfunction at the slightest mistake. As programmers, we’re used to having our subordinates (compilers) tell us, “Fix your shit or I’m not doing anything.”

It’s no secret that most programmers dislike management, and have a generally negative view of the executives and “product managers” running most of the companies that employ them. This is because programmers pride themselves on having almost impermeable Sturgeon Filters, while lifelong managers have nonexistent Sturgeon Filters. They simply don’t get the direct, immediate feedback that would train them to recognize and reject their own bad ideas. That’s not because they’re stupider than we are. I don’t actually think that they are. I think that their jobs never build up the sense of fallibility that programmers know well.

Our subordinates, when given nonsensical instructions, give blunt, tactless feedback– and half the time they’re just pointing out spelling errors that any human would just ignore! Managers’ subordinates, however, are constantly telling them that they’re awesome, and will often silently clean up their mistakes. Carry this difference in experience out over 20 years or more, and you get different cultures and different attitudes. You get 45-year-old programmers who, while extraordinarily skillful, are often deeply convinced of their own fallibility; and you get 45-year-old executives who’ve never really failed or suffered at work, because even when they were bad at their jobs, they had armies of people ready to manage their images and ensure that, even in the worst case scenario where they lost jobs, they’d “fail up” into a senior position in another company.

Both sides now

Programmers and managers both mutate things; it’s the job. Programmers extend and alter the functionality of machines, while managers change the way people work. In both cases, the effects of a random mutation, or even an unwise intended one, are negative. Mutation for its own sake is undesirable.

For example, scheduling a meeting without a purpose is going to waste time and hurt morale. Hiring bad people and firing good ones will have massive repercussions. To manage at random (i.e. without a Sturgeon Filter) is almost as bad as to program at random. Only a small percentage of the changes to the way that people work that managers propose are actually beneficial. Most status pings or meetings serve no purpose except to allay the creeping sense of the manager that he isn’t “doing enough”, most processes that exist for executive benefit or “visibility” are harmful, and a good 90 to 99 percent of the time, the people doing the work have better ideas about how they should do it than the executives shouting orders. Managers, in most companies, interrupt and meddle on a daily basis, and it’s usually to the detriment of the work being produced. Jason Fried covers this in his talk, “Why work doesn’t happen at work”. As he says, “the real problems are … the M&Ms: the managers and the meetings”. Managers are often the last people to recognize the virtue of laziness: that constantly working (i.e. telling people what to do) is a sign of distress, while having little to do generally means that they’re doing their jobs well.

In the past, there was a Sturgeon Filter imposed by time and benign noncompliance. Managers gave bad orders just as often as they do now, but there was a garbage-collection mechanism in place. People followed the important orders, which were usually congruent already with common sense and basic safety, but when they were given useless orders or pointless rules to follow, they’d make a show of following the new rules for a month or two, then discard them when they failed to show any benefit or improvement. Many managers, I would imagine, preferred this, because it allowed them to have the failed change silently rejected without having any attention drawn to their mistake. In fact, a common mode of sub-strike resistance used by organized labor is “the rule-follow”, a variety of slowdown in which rules are followed to the letter, resulting in low productivity. Discarding senseless rules (while following the important, safety-critical ones) is a necessary behavior of everyone who works in an organization; a person who interprets all orders literally is likely to perform at an unacceptably low level.

In the past, the passage of time lent plausible deniability to a person choosing to toss out silly policies that would quite possibly be forgotten or regretted by the person who issued them. An employee could defensibly say that he followed the rule for three months, realized that it wasn’t helping anything and that no one seemed to care, and eventually just forgot about it or, better yet, interpreted a new order to supersede the old one. This also imposed a check on managers, who’d embarrass themselves by enforcing a stupid rule. Since no one has precise recall of a months-old conversation of low general importance, the mists of time imposed a Sturgeon Filter on errant management. Stupid rules faded and important ones (like, “Don’t take a nap in the baler”) remained.

One negative side effect of technology is that it has removed that Sturgeon Filter from existence. Too much is put in writing, and persists forever, and the plausible deniability of a worker who (in the interest of getting more done, not in slacking) disobeys it has been reduced substantially. In the past, an intrepid worker could protest a status meeting by “forgetting” to attend it on occasion, or by claiming he’d heard “a murmur” that it was discontinued, or even (if he really wanted to make a point) by taking colleagues out for lunch at a spot not known for speedy service and, thus, letting an impersonal force just happen to make half the team late for it. While few workers actually did such things on a regular basis (to make it obvious would get a person just as fired then as today), the fact that they might do so imposed a certain back-pressure on runaway management that doesn’t exist anymore. In 2015, there’s no excuse for missing a meeting when “it’s on your fucking Outlook calendar!”

Technology and persistence have evolved, but management hasn’t. Programmers have looked at their job of “messing with” (or, to use the word above, mutating) computers and software systems and spent 70 years coming up with new ways to compensate for the unreliable nature that comes from our being humans. Consequently, we can build systems that are extremely robust in spite of having been fueled by an unreliable input (human effort). We’ve changed the computers, the types of code that we can write, and the tools we use to do it. Management, on the other hand, is still the same game that it always has been. Many scenes from Mad Men could be set in a 2010s tech company, and the scripts would still fit. The only major change would be in the costumes.

To see the effects of runaway management, combined with the persistence allowed by technology, look no further than the Augean pile of shit that has been given the name of “Agile” or “Scrum”. These are neo-Taylorist ideas that most of industry has rejected, repackaged using macho athletic terminology (“Scrum” is a rugby term). Somehow, these discarded, awful ideas find homes in software engineering. This is a recurring theme. Welch-style stack ranking turned out to be a disaster, as finally proven by its thorough destruction of Enron, but it lives on in technology: Microsoft used it until recently, while Google and Amazon still do. Why is this? What has made technology such an elephant graveyard for disproven management theories and bad ideas in general?

A squandered surplus

The answer is, first, a bit of good news: technology is very powerful. It’s so powerful that it generates a massive surplus, and the work is often engaging enough that the people doing it fail to capture most of the value they produce, because they’re more interested in doing the work than getting the largest possible share of the reward. Because so much value is generated, they’re able to have an upper-middle-class income– and upper-working-class social status– in spite of their shockingly low value-capture ratio.

There used to be an honorable, progressive reason why programmers and scientists had “only” upper-middle-class incomes: the surplus was being reinvested into further research. Unfortunately, that’s no longer the case: short-term thinking, a culture of aggressive self-interest, and mindless cost-cutting have been the norm since the Reagan Era. At this point, the surplus accrues to a tiny set of well-connected people, mostly in the Bay Area: venture capitalists and serial tech executives paying themselves massive salaries that come out of other peoples’ hard work. However, a great deal of this surplus is spent not on executive-level (and investor-level) pay but on another, related sink: executive slack. Simply put, the industry tolerates a great deal of mismanagement simply because it can do so and still be profitable. That’s where “Agile” and Scrum come from. Technology companies don’t succeed because of that heaping pile of horseshit, but in spite of it. It takes about five years for Scrum to kill a tech company, whereas in a low-margin business it would kill the thing almost instantly.

Where this all goes

Programmers and executives are fundamentally different in how they see the world, and the difference in Sturgeon Filters is key to understanding why it is so. People who are never told that they are wrong will begin to believe that they’re never wrong. People who are constantly told that they’re wrong (because they made objective errors in a difficult formal language) and forced to keep working until they get it right, on the other hand, gain an appreciation for their own fallibility. This results in a cultural clash from two sets of people who could not be more different.

To be a programmer in business is painful because of this mismatch: your subordinates live in a world of formal logic and deterministic computation, and your superiors live in the human world, which is one of psychological manipulation, emotion, and social-proof arbitrage. I’ve often noted that programming interviews are tough not because of the technical questions, but because there is often a mix of technical questions and “fit” questions in them, and while neither category is terribly hard on its own, the combination can be deadly. Technical questions are often about getting the right answer: the objective truth. By contrast, “fit” questions like “What would you do if given a deadline that you found unreasonable?” demand a plausible and socially attractive lie. (“I would be a team player.”) Getting the right answer is an important skill, and telling a socially convenient lie is also an important skill… but having to context-switch between them at a moment’s notice is, for anyone, going to be difficult.

In the long term, however, this cultural divergence seems almost destined to subordinate software engineers, inappropriately, to business people. A good software engineer is aware of all the ways in which he might be wrong, whereas being an executive is all about being so thoroughly convinced that one is right that others cannot even conceive of disagreement– the “reality distortion field”. The former job requires building an airtight Sturgeon Filter so that crap almost never gets through; the latter mandates tearing down one’s Sturgeon Filter and proclaiming loudly that one’s own crap doesn’t stink.

Abandon “sorta high”/“P1” priority.

It’s fairly common, in technology organizations, for there to be an elaborate hierarchy for prioritizing bugs and features, ranging from “P0” (severely important) to “P4” (not at all important), and for work items to be labelled as such. Of those five levels, the typical meanings are something like this:

  • P0: severely high priority. This gives management and, ideally, the team, the right to be pushy in getting the issue resolved.
  • P1: high priority. This is often an indecisive mid-level between P0 and P2.
  • P2: default (average) priority. This is the classification that most features and bugs are supposed to be given.
  • P3: low priority. Unlikely to be done.
  • P4: very low priority. The black hole.

I would argue for getting rid of three of those levels: P1, P3, and P4. I don’t think that they do much good, and I think that they can actually be confusing and harmful.
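To make that concrete, here’s a minimal sketch, in Python, of what the collapsed scheme might look like as a triage rule. The names (`LegacyPriority`, `triage`, the `worth_escalating` flag) are invented for illustration and aren’t modeled on any real tracker’s API:

    from enum import Enum
    from typing import Optional

    class LegacyPriority(Enum):
        P0 = 0  # severely high priority
        P1 = 1  # high: the indecisive mid-level
        P2 = 2  # default (average) priority
        P3 = 3  # low: unlikely to be done
        P4 = 4  # very low: the black hole

    class Priority(Enum):
        P0 = "severely high"
        P2 = "default"

    def triage(old: LegacyPriority, worth_escalating: bool = False) -> Optional[Priority]:
        """Collapse the five legacy levels into the two that are kept."""
        if old is LegacyPriority.P0:
            return Priority.P0
        if old is LegacyPriority.P1:
            # P1 forces a decision: either the item merits real urgency (P0)
            # or it goes back to the default level (P2). No honest middle.
            return Priority.P0 if worth_escalating else Priority.P2
        if old is LegacyPriority.P2:
            return Priority.P2
        return None  # P3 and P4 are dropped from the scheme entirely

The point isn’t the code, of course; it’s that a triage rule this simple leaves nowhere for an indecisive “sorta high” label to hide.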

Against “P3” and “P4”

P3 and P4, in most companies, are effectively “will not fix” labels. It would be an embarrassment for an engineer to work on a P3 or P4 issue, because the corporate fiction is that every worker will always work on the highest-priority task (to the company, of course). Ergo, labeling a bug or feature below the default priority means that anyone who performs it is signaling that he or she has no P2-or-higher work to do. That’s not a good look. Even if that’s the case (of having no P2 or higher work), one does better by finding work that serves one’s own career goals than by completing low-priority work in the backlog. The only way it becomes socially acceptable to complete a P3 or P4 item is if it happens incidentally in the process of achieving something else.

If the issue, despite being labelled as low in priority, becomes an irritation to the developer, it’s often best to silently fix it and then close the bug later, saying, “heh, I already did this, 6 months ago.” To do that is to feign not having been aware of the item’s low assigned priority, and to appear so productive and prescient as to have silently fixed bugs or completed work without even knowing that anyone else had noticed them.

What’s wrong with P1?

The case for getting rid of “P1” is that it’s fundamentally indecisive. As far as I’m concerned, there are two meaningful levels for a task: Default and Escalated. “Default” is exactly what it sounds like: the typical, average level for work items. Except in a crisis, the majority of feature requests and bug reports will be at this level. At the Default level, it’s the kind of work that can be done without informing or tipping off management: you just go and do the job.

The Escalated level is one that requires the double-edged sword of managerial power. The workers need management to get involved and tell other teams to do things for them, pronto. It means that management has to suspend the right of the workers (which, during normal conditions, they ought to have) to prioritize work as they see fit. Toes need to get stepped on, in other words, or the work can’t proceed in a timely manner (or, in many cases, at all). It goes without saying that this shouldn’t occur for unimportant or even default-priority work, because the manager expends political capital in interrupting people (often, people who report to someone else) and asking them to work on things other than what they’d prefer. Since people on other teams are going to take hits, and the manager is going to spend political capital, it’s only fair that the employees who decided to escalate the work also prioritize it above other things themselves. This means that if they face deadlines and status pings and a support burden, they did at least ask for it by ringing the alarm bell.

In essence, the P0/Escalated level tips off management that the work item is very important, and pushes the manager to interrupt other teams and make demands of people elsewhere in the company, in order to have the task completed quickly. It’s a fair trade: information (that may be used against the worker or team, in the form of deadlines, micromanagement, follow-on work, meetings and future questioning) is given to management, but the manager is expected to pound on doors, get resources, and use his or her political power to eradicate bureaucratic hurdles (“blockers”). Setting a task at P2/Default level, on the other hand, conveys almost no information to management, but also asks for no political favor from the boss. It is, likewise, a relatively fair deal.
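As a sketch of that trade (again in Python, and again with invented names; no real work tracker exposes anything like this), escalation can be modeled as requiring a stated justification and an explicit acceptance of the follow-up burden, while default-priority work requires neither:

    from dataclasses import dataclass
    from enum import Enum, auto
    from typing import Optional

    class Level(Enum):
        DEFAULT = auto()    # P2: just do the work; management isn't tipped off
        ESCALATED = auto()  # P0: management interrupts other teams on your behalf

    @dataclass
    class WorkItem:
        title: str
        level: Level = Level.DEFAULT
        escalation_reason: Optional[str] = None
        # Escalating means accepting deadlines, status pings, and follow-up work
        # as the price of asking a manager to spend political capital.
        followup_accepted: bool = False

        def escalate(self, reason: str) -> None:
            """Ring the alarm bell, making both sides of the trade explicit."""
            if not reason:
                raise ValueError("Escalation without a justification is just noise.")
            self.level = Level.ESCALATED
            self.escalation_reason = reason
            self.followup_accepted = True

    # Default work needs no ceremony; escalated work states why and accepts the cost.
    item = WorkItem("Login service times out under load")
    item.escalate("Blocking this week's release; needs the infra team's help now")

The design choice worth noticing is that there is no in-between state: a work item is either escalated, with a reason attached and the cost accepted, or it isn’t.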

The indecisive mid-level of “P1” combines the worst of both worlds. It tips off management, enough to get them interested and concerned, but doesn’t implore them to take immediate action and do whatever they can to fix the problem. It’s giving information without demanding (perhaps requesting, but that is different) something in return. It’s good to share information with managers, when you believe they’ll use it to your benefit and especially when you need a favor from one, but it’s rarely wise to take the risk of calling in a manager (or, Xenu forbid, an executive) if neither is the case. To do that is just to add noise to the whole process: it causes anxiety for the manager and chaos for the workers. The manager might not interrupt or change one’s way of working, but it’s a risk, and when you don’t need that manager to interrupt or change other people, why take it?

P0 says, “I’m willing to take your direction and commit to fixing the problem, but you need to trust me when I say I need something.” P2 says, “Things are under control. You can leave me alone and I’ll keep working on useful stuff.” Both are reasonable statements to take to management. P1 says, “This is important enough to be watch-worthy, but not that important.” With that, you’re just asking to be micromanaged.

What I am not saying, and what I am

In the above, it might sound like I’m taking an adversarial approach to managers, and that I advocate withholding information. That’s not what I intend to imply. “P0” and “P2” are artifacts of formal, written communication in a corporate context. In such a theater, less said is usually better. To put something in writing gives the other person more power than saying it does. To put something in writing in a corporate context is, often, to give power to a large class of people, because that information will be available to many.

If you have a good manager and a strong working relationship, you can verbally convey nuances like “I marked this as P2/Default but I consider it really important” or “here’s what I intend to prioritize and why”. There are good managers as well as bad ones, and it’s typically not the best strategy to withhold all information from them. That said, the more persistent and widely read a piece of communication will be, the more one should tend to hold back. Even if the information is helpful to the people receiving it, distributing it in such a way often shows a lack of care or self-preservation.

The information itself is often less important than the semiotics of furnishing it. To give information without expecting anything in return means something. It does not always lower one’s status or power; in fact, it can raise it. However, to give information formally, and in a context where the person is free not to read it and therefore cannot be expected to act on it, is generally a submissive move. To give information that expects little commitment from the other person but offers one’s own commitment (labeling a task “P1” is implicitly promising to prioritize it) is quite clearly a subordinate action. And while it is reckless and often harmful (i.e. it will often get you fired) to be insubordinate (as in, disobeying a direct order) to management, it is likewise detrimental to be eagerly subordinate. The best strategy is to take direction and show loyalty, but to present oneself as a social equal. Toward this aim, “I won’t burden you with information unless it’s beneficial for both of us” is the right approach to take with one’s boss. To over-volunteer information is to lower oneself.

… and, while we’re at it, fuck Scrum.

I didn’t intend to go here with this post, but I’ve written before on the violent transparency– open-plan offices that leave the engineer visible from behind, aggressive visibility into day-to-day fluctuations of individual productivity– that is often inflicted upon people in the software industry. It’s actually absurd– much like the image of an Austrian duke wearing diapers and being paddled by a toothless, redneck dominatrix– that people can be paid six-figure salaries but still have to justify weeks and days of their own working time under the guise of “Agile”. It shows that software engineers are politically inept, over-eager to share information with management without any degree of self-auditing, and probably doomed to hold low status unless this behavior changes. So fuck Scrum and fuck “backlog grooming” and fuck business-driven engineering and fuck “retrospective planning meetings” and fuck any “whip us all harder, master” engineer who advocates for it to be imposed on his own colleagues.

… but back to the main point …

Prioritization is power, and so is information. Management has the final right to set priorities, but relies on from-the-bottom information about how to use that power. It’s probably true that all human organizations converge on oligarchy; democracies see a condensation of power as blocs and coalitions form and people acquire disproportionate influence within and over them, but dictators require information and must promote lieutenants who’ll furnish it. There are many ways to look at this, some optimistic and others dismal, but from a practical standpoint, it’s probably good news for a savvy underling in a notionally dictatorial organization like a corporation. It means that CEOs and bosses, while they have power, don’t have as much of it as they appear to on paper. They still need something from below, and the most critical need isn’t the work furnished, because work itself is usually pretty easy to commoditize. It’s information that they require.

I am certainly not saying that information should be withheld from management as a general principle. I’m saying that it shouldn’t be furnished unless there is some tangible benefit in doing so. To furnish information without benefit to the recipient is often harmful to that person: it can cause anxiety and overload. To furnish information without personal benefit (and that benefit may be slight, even just the satisfaction of having done the right thing) will either confuse the recipient (and a confused manager often becomes a nightmarish micromanager) or show weakness.

Moreover, there’s a difference between personal, “human to human” communication that is often informal, and communication in the context of a power relationship. Many software engineers miss this difference, and it’s why they’re okay with a regime forcing them to put minutiae of their work lives on “Scrum boards” that everyone can see. If you get along with your boss, it’s OK to give up information in a human-to-human, as-equals way. At lunch, you can discuss ideas and relay which work items you actually think are most important. That’s one way that trust is built. If doing so will make you more trusted and valued, and if it doesn’t hurt anyone, then, by all means, give information to that manager or executive. However, when you’re relating formally, in any persistent medium, the best thing to do is keep communications to “I need X from you because Y”. This applies especially to bug databases, work trackers, and email. Peppering a manager with extraneous information not only marks you as a subordinate, but also adds anxiety and extends an invitation to micromanage you.

When giving information to a person higher in the corporate ranks, it’s important to do it properly. One must communicate, “Hey man, I’m a player too”. That requires making communication clear, decisive, short and directed to an obvious mutual benefit. (It’s unwise to try to hide one’s own personal benefit; you lose little in disclosing it and, even if your request is denied, you’re more likely to be trusted in the future for doing so.) Communication should also be sparse: clear and loud when needed, nonexistent when unnecessary. One may choose to ring an alarm bell, demand response, and accept that some of the managerially-imposed inconvenience is likely to fall on oneself: that’s the P0/Escalated priority. There is also the option not to ring the alarm bell, but just to perform the work: that’s the P2/Default priority. Taking the middle and giving that bell a half-hearted ring is just silliness.

Sand Hill Road announces Diversity Outreach Program

I’m pleased to announce that I’ve succeeded in coordinating a number of Sand Hill Road’s most prestigious venture capital firms, including First Round Capital, Sequoia, and Kleiner Perkins, to form the first-ever Venture Capital Diversity Outreach Program. I could not have done this alone (obviously) and I want to thank all of my bros (and fembros) in Menlo Park and Palo Alto for making this happen.

In response to negative press surrounding this storied industry and its supposed culture of sexism (which we deny), we held a round-table discussion last weekend on Lake Tahoe, on the topic of rehabilitating our industry’s image. We’re hurt by the perception that we have a sexist, exclusionary, “frat boy” culture, so we decided to form a program to prove that we aren’t sexist. It was easy to agree on the first step: start funding and promoting women.

This idea, though brilliant and revolutionary, raised a difficult question: which women? Based on our back-of-the-envelope calculations, we estimated that there are between 3 and 4 billion women in the world! We had to narrow the pool. One venture capitalist who, unfortunately, declined to be named, said, “we need to fund young women.” Echoing Mark Zuckerberg, he said, “Young people are just smarter.” And so it was agreed that we will be funding 25 women under 23. Each will receive $25,000 worth of seed-round funding in exchange for a mere 15% of the business, along with unlimited one-on-one mentorship opportunities from the Valley’s leading venture capitalists.

We’re extending this opportunity outside of Northern California. In fact, you can apply from anywhere in the world. All pitches must be in video form, each lasting no more than 4 minutes. Which VC you should submit your pitch to depends on where you are applying from. Submissions from Eastern Europe will go to one VC for appraisal, Latin American submissions will go to another, and submissions from Asia to another. We have to match the judges with their specialties, you see. Don’t worry. We’ll have this all sorted out by this evening. Full-body pitch videos only. Face-only submissions will be rejected.

Based on the all-important and objective metric of cultural fit, the submissions will be stack-ranked and the winners will be notified within three days. We recommend that the winners, upon receiving funding, drop out of college to pursue this program. College may be necessary for middle-class people who want to become dentists, and it’s good for propagating Marxist mumbo-jumbo, but it’s so unnecessary if you have all of Sand Hill Road on your side. Which you will, if you’re a woman who wins this contest. Until you’re 28 or so, but that’s forever away and you’ll be a billionaire by then. We find that, in the magical land of California, it’s best not to think about the future or about risk. Future’s so bright, you gotta wear shades!

On a side note, being a venture capitalist is freaking awesome. No, the job doesn’t involve snorting blow off a hooker’s breasts– that actually stopped about 10 years ago, some HR thing. But nothing quite beats the thrill of playing football, in the midst of adiabatic para-skiing, while playing Halo on Google Glass!

Keep the Faith, Don’t Hate, and, above all, Lipra Solof.